The New Ekpyrotic Ghost

The new ekpyrotic scenario attempts to solve the singularity problem by involving a violation of the null energy condition in a model which combines the ekpyrotic/cyclic scenario with the ghost condensate theory and the curvaton mechanism of production of adiabatic perturbations of the metric. The Lagrangian of this theory, as well as of the ghost condensate model, contains a term with higher derivatives, which was added to the theory to stabilize its vacuum state. We found that this term may affect the dynamics of the cosmological evolution. Moreover, after a proper quantization, this term results in the existence of a new ghost field with negative energy, which leads to a catastrophic vacuum instability. We explain why one cannot treat this dangerous term as a correction valid only at small energies and momenta below some UV cut-off, and demonstrate the problems arising when one attempts to construct a UV completion of this theory.

Introduction: Inflation versus Ekpyrosis

After more than 25 years of development, inflationary theory has gradually become the standard cosmological paradigm. It solves many difficult cosmological problems and makes several predictions which are in very good agreement with observational data. There have been many attempts to propose an alternative to inflation. In general, this could be a very healthy tendency. If one of these attempts succeeds, it will be of great importance. If none of them are successful, it will be an additional demonstration of the advantages of inflationary cosmology. However, since the stakes are high, we are witnessing a growing number of premature announcements of success in developing an alternative cosmological theory.

An instructive example is given by the ekpyrotic scenario [1]. The authors of this scenario claimed that it could solve all cosmological problems without using a stage of inflation. However, the original ekpyrotic scenario did not work. It is sufficient to say that the large mass and entropy of the universe remained unexplained; instead of solving the homogeneity problem, this scenario only made it worse; and instead of the big bang expected in [1], there was a big crunch [2,3].

Soon after that, the ekpyrotic scenario was replaced by the cyclic scenario, which used an infinite number of periods of expansion and contraction of the universe [4]. Unfortunately, the origin of the scalar field potential required in this model, as well as in [1], remains unclear, and the very existence of the cycles postulated in [4] has not been demonstrated. When this scenario was analyzed using the particular potential given in [4] and taking into account the effect of particle production in the early universe, a very different cosmological regime was found [5,6].

The most difficult of the problems facing this scenario is the problem of the cosmological singularity. Originally there was a hope that the cosmological singularity problem would be solved in the context of string theory, but despite the attempts of the best experts in string theory, this problem remains unsolved [7,8,9]. Recently there were some developments in the analysis of this problem using the AdS/CFT correspondence [10], but the results rely on certain conjectures and apply only to five-dimensional space. As the authors admit, "precise calculations are currently beyond reach" for the physically interesting four-dimensional space-time.
This issue was previously studied in [11], where it was concluded that "In our study of the field theory evolution, we find no evidence for a bounce from a big crunch to a big bang."

In this paper we will discuss the recent development of this theory, called 'the new ekpyrotic scenario' [12,13,14,15], which created a new wave of interest in the ekpyrotic/cyclic ideas. This is a rather complicated scenario, which attempts to solve the singularity problem by involving a violation of the null energy condition (NEC) in a model which combines the ekpyrotic scenario [1] with the ghost condensate theory [16] and the curvaton mechanism of production of adiabatic perturbations of the metric [17,18]. Usually the NEC violation leads to a vacuum instability, but the authors of [12,13,14,15] argued that the instability occurs only near the bounce, so it does not have enough time to fully develop. The instability is supposed to be damped by higher derivative terms of the type $-(\Box\phi)^2$ (the sign is important, see below), which were added to the action of the ghost condensate in [16]. These terms are absolutely essential in the new ekpyrotic theory for stabilization of the vacuum against the gradient and Jeans instabilities near the bounce.

However, these terms are quite problematic. Soon after introducing them, the authors of the ghost condensate theory, as well as several others, took a step back and argued that these terms cannot appear in any consistent theory, that the ghost condensate theory is ultraviolet-incomplete, and that theories of this type lead to a violation of the second law of thermodynamics and allow the construction of a perpetuum mobile of the 2nd kind; therefore they are incompatible with basic gravitational principles [19,20,21,22].

These arguments did not discourage the authors of the new ekpyrotic theory and those who followed it, so we decided to analyze the situation in a more detailed way. First of all, we found that the higher derivative terms were only partially taken into account in the investigation of perturbations, and were ignored in the investigation of the cosmological evolution in [12,13,14,15]. Therefore the existence of the consistent and stable bouncing solutions postulated in the new ekpyrotic scenario required an additional investigation. We report the results of this investigation in Section 6.

More importantly, we found that these additional terms lead to the existence of new ghosts, which have not been discussed in the ghost condensate theory and in the new ekpyrotic scenario [12,13,14,15,16]. In order to distinguish these ghosts from the relatively harmless condensed ghosts of the ghost condensate theory, we will call them ekpyrotic ghosts, even though, as we will show, they are already present in the ghost condensate theory. These ghosts lead to a catastrophic vacuum instability, quite independently of the cosmological evolution. In other words, the new ekpyrotic scenario, as well as the ghost condensate theory, appears to be physically inconsistent. But since the new ekpyrotic scenario, unlike the ghost condensate model, claims to solve the fundamental singularity problem by justifying the bounce solution, the existence of the ekpyrotic ghosts presents a much more serious problem for a scenario with such an ambitious goal. We describe this problem in Sections 2, 3, 4, 5, and 7. Finally, in the Appendix we discuss certain attempts to save the new ekpyrotic scenario.
One such attempt is to say that this scenario is just an effective field theory which is valid only for sufficiently small values of frequencies and momenta. But then, of course, one cannot claim that this theory solves the singularity problem until its consistent UV completion with a stable vacuum is constructed. For example, we will show that if one simply ignores the higher derivative terms for frequencies and momenta above a certain cutoff, then the new ekpyrotic scenario fails to work because of a vacuum instability which is even much stronger than the ghost-related instability. We will also describe a possible procedure which may provide a consistent UV completion of the theory with higher derivative terms of the type $+(\Box\phi)^2$. Then we explain why this procedure fails for the ghost condensate and the new ekpyrotic theory, where the sign of the higher derivative term must be negative.

Ghost condensate and new ekpyrosis: The basic scenario

The full description of the new ekpyrotic scenario is pretty involved. It includes two fields, one of which, φ, is responsible for the ekpyrotic collapse, and another one, χ, is responsible for the generation of isocurvature perturbations, which eventually should be converted to adiabatic perturbations. Both fields must have quite complicated potentials, which can be found e.g. in [15]. For the purposes of our discussion it is sufficient to consider a simplified model containing only one field, φ. The simplest version of this scenario can be written as follows:
$$L = M^4 P(X) - \frac{(\Box\phi)^2}{2M'^2} - V(\phi)\,,$$
where $X = \frac{(\partial\phi)^2}{2m^4}$ is dimensionless. P(X) is a dimensionless function which has a minimum at X ≠ 0. The first two terms in this theory represent the theory of a ghost condensate; the last one is the ekpyrotic potential. This potential is very small and very flat at large φ, so for large φ this theory is reduced to the ghost condensate model of [16]. The ghost condensate state corresponds to the minimum of P(X). Without loss of generality one may assume that this minimum occurs at X = 1/2, i.e. at $\partial_i\phi = 0$, $\dot\phi = -m^2$, so that $\phi = -m^2 t$. As a simplest example, one can consider a function which looks as follows in the vicinity of its minimum:
$$P(X) = \frac{1}{2}\left(X - \frac{1}{2}\right)^2.$$
The term $-\frac{(\Box\phi)^2}{2M'^2}$ was added to the Lagrangian in [16] for stabilization of the fluctuations of the field φ in the vicinity of the background solution $\phi(t) = -m^2 t$; more about it later. This theory was represented in several different ways in [16,12,13,14,15], where a set of parameters such as K and $\bar M = M^2/M'$ was introduced. The parameter K can always be absorbed in a redefinition of M; in our notation, K = 1.

The equation for the homogeneous background can be represented as follows:
$$\partial_t\left[a^3\left(P_{,X}\,\dot\phi + \frac{1}{m_g^2}\,\partial_t\big(\ddot\phi + 3H\dot\phi\big)\right)\right] = -\,a^3\,\frac{m^4}{M^4}\,V_{,\phi}\,,$$
where we introduced the notation
$$m_g \equiv \frac{M^2 M'}{m^2}\,.$$
The meaning of this notation will become apparent soon. The complete equation, describing also the dependence on the spatial coordinates, contains in addition the higher derivative terms $\partial_t(\Box\phi)/m_g^2$ and $\partial_i(\Box\phi)/m_g^2$.

Instead of solving these equations, the authors of [12,13,14,15] analyzed (though did not solve) equation (3), ignoring the higher derivative term $\partial_t(\ddot\phi + 3H\dot\phi)/m_g^2$, assuming that it is small. Then they analyzed equation (5), applying it to perturbations, ignoring the term $\partial_t(\Box\phi)/m_g^2$, but keeping the term $\partial_i(\Box\phi)/m_g^2$, assuming that it is large. Our goal is to see what happens if one performs the investigation in a self-consistent way.

In order to do this, let us temporarily assume that the higher derivative term is absent, which corresponds to the limit $m_g \to \infty$. In this case our equation for φ reduces to the equation used in [12,13,14,15],
$$\partial_t\left[a^3\,P_{,X}\,\dot\phi\right] = -\,a^3\,V_{,\phi}\,\frac{m^4}{M^4}\,.$$
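For completeness, here is a one-step sketch (our rendering, in the notation above) of how this reduced equation follows by varying the action with the higher derivative term dropped:
$$S \;\to\; \int dt\,d^3x\; a^3\left[M^4 P(X) - V(\phi)\right],\qquad X = \frac{\dot\phi^2}{2m^4}\quad\text{(homogeneous background)},$$
$$0 = \frac{\delta S}{\delta\phi} = -\,\partial_t\!\left[a^3 M^4 P_{,X}\,\frac{\dot\phi}{m^4}\right] - a^3 V_{,\phi}\;\;\Longrightarrow\;\;\partial_t\!\left[a^3 P_{,X}\,\dot\phi\right] = -\,a^3\,\frac{m^4}{M^4}\,V_{,\phi}\,.$$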
One of the Einstein equations, in the same approximation, is
$$\dot H = -\frac{1}{2}\,(\varepsilon + p)\,,$$
where ε is the energy density and p is the pressure. (We are using a system of units where $M_p^{-2} = 8\pi G = 1$.) The null energy condition (NEC) requires that $\varepsilon + p \geq 0$, and hence $\dot H \leq 0$. Therefore a collapsing universe with H < 0 cannot bounce back unless the NEC is violated. This implies that the bounce is possible only if $P_{,X}$ becomes negative, $P_{,X} < 0$, i.e. the field X should become smaller than 1/2.

It is convenient to represent the general solution for φ as the sum of the background $-m^2 t$, a homogeneous correction $\pi_0(t)$ satisfying the homogeneous background equation, and inhomogeneous perturbations $\pi(x_i, t)$. In this case one can show that the perturbations of the field $\pi(x_i, t)$ have the following spectrum at small values of $P_{,X}$:
$$\omega^2 = P_{,X}\,k^2\,.$$
This means that $P_{,X}$ plays in this equation the same role as the square of the speed of sound. For small $P_{,X}$, one has $c_s^2 = P_{,X}$. The ghost condensate point $P_{,X} = 0$, which separates the region where the NEC is satisfied from the region where it is violated, is the point where the perturbations are frozen. The real disaster happens when one crosses this border and goes to the region with $P_{,X} < 0$, which corresponds to $c_s^2 < 0$. In this area the NEC is violated, and, simultaneously, perturbations start growing exponentially,
$$\pi_k \sim \exp\left(\sqrt{|P_{,X}|}\;k\,t\right).$$
This is a disastrous gradient instability, which is much worse than the usual tachyonic instability. The tachyonic instability develops as $\exp\left(\sqrt{m^2 - k^2}\,t\right)$, so its rate is limited by the tachyonic mass, and it occurs only for $k^2 < m^2$. Meanwhile the instability (12) occurs at all momenta k, and the rate of its development grows with the growth of k. This makes it abundantly clear how dangerous it is to violate the null energy condition.

That is why it was necessary to add higher derivative terms of the type $-\frac{(\Box\phi)^2}{2M'^2}$ to the ghost condensate Lagrangian [16]. The hope was that such terms could provide at least some partial protection by changing the dispersion relation. Since we are interested mostly in the high frequency effects corresponding to the rapidly developing instability, let us ignore for a while the gravitational effects, which can be achieved by taking a(t) = 1, H = 0. In this case, the effective Lagrangian for perturbations π of the field φ in the vicinity of the minimum of P(X) (i.e. for small $|P_{,X}|$) is
$$L = \frac{M^4}{m^4}\left[\frac{1}{2}\dot\pi^2 - \frac{1}{2}P_{,X}(\partial_i\pi)^2 - \frac{1}{2m_g^2}(\Box\pi)^2\right].$$
The equation of motion for the field π is
$$\ddot\pi - P_{,X}\,\Delta\pi + \frac{1}{m_g^2}\,\Box^2\pi = 0\,.$$
At small frequencies ω, which is the case analyzed in [16], the dispersion relation corresponding to this equation looks as follows:
$$\omega^2 = P_{,X}\,k^2 + \frac{k^4}{m_g^2}\,.$$
This equation implies that for $P_{,X} < 0$ the instability occurs only in a limited range of momenta, $k^2 < |P_{,X}|\,m_g^2$, which can be made small if the parameter $m_g$ is sufficiently small and, therefore, the higher derivative term is sufficiently large. This is one of the main assumptions of the new ekpyrotic scenario: if the violation of the NEC occurs only during a limited time near the bounce from the singularity, one can suppress the instability by adding a sufficiently large term $-\frac{(\Box\phi)^2}{2M'^2}$. (This term must have negative sign, because otherwise it does not protect us from the gradient instability. This will be important for the discussion in the Appendix.)

Note that one cannot simply add the higher derivative term and take it into account only up to some cut-off $\omega^2, k^2 < \Lambda^2$. For example, if we "turn on" this term only at $k^2 < \Lambda^2$, it is not going to save us from the gradient instability, which occurs at $\omega^2 = P_{,X}\,k^2$ for unlimitedly large k in the region where the NEC is violated and $P_{,X} < 0$.
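Before turning to these problems, a small numerical illustration may be helpful. The following sketch (our own, not from [12,13,14,15,16]; parameters are arbitrary, with $m_g = 1$) solves the full dispersion relation $(\omega^2 - k^2)^2 = m_g^2(\omega^2 - c_s^2 k^2)$, discussed in the next section, and confirms that for $c_s^2 = P_{,X} < 0$ the unstable band is confined to $k^2 < |c_s^2|\,m_g^2$:

```python
import numpy as np

# Two branches of (omega^2 - k^2)^2 = m_g^2 (omega^2 - c_s^2 k^2),
# obtained by solving the quadratic equation in omega^2.
def branches(k, m_g, cs2):
    disc = m_g * np.sqrt(m_g**2 + 4.0 * k**2 * (1.0 - cs2))
    w1sq = k**2 + 0.5 * m_g**2 - 0.5 * disc   # low-frequency branch (particles)
    w2sq = k**2 + 0.5 * m_g**2 + 0.5 * disc   # high-frequency branch (ghosts)
    return w1sq, w2sq

m_g = 1.0
k = np.linspace(1e-4, 5.0, 10000)

for cs2 in (1.0, 0.0, -0.1):              # cs2 < 0 mimics the NEC-violating stage
    w1sq, _ = branches(k, m_g, cs2)
    unstable = k[w1sq < -1e-12]            # omega^2 < 0: growing modes
    if unstable.size:
        print(f"c_s^2 = {cs2:+.2f}: unstable for k < {unstable.max():.4f} "
              f"(analytic value sqrt(|c_s^2|)*m_g = {np.sqrt(-cs2) * m_g:.4f})")
    else:
        print(f"c_s^2 = {cs2:+.2f}: no gradient instability")

# At the ghost condensate point c_s^2 = 0 the small-k limits reproduce
# omega_1 = k^2/m_g and omega_2 = m_g + k^2/m_g quoted below.
w1sq, w2sq = branches(np.array([1e-3]), m_g, 0.0)
print(np.sqrt(w1sq)[0], np.sqrt(w2sq)[0])  # ~1e-6 and ~1.000001
```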
There are several different problems associated with this scenario.

First of all, in order to tame the instability during the bounce one should add a sufficiently large term $-\frac{(\Box\phi)^2}{2M'^2}$, which leads to the emergence of the term $\frac{1}{2m_g^2}(\Box\pi)^2$ in the equation for π. But if this term is large, then one should not discard it in the equations for the homogeneous scalar field and in the Einstein equations, as was done in [12,13,14,15].

The second problem is associated with the way the higher derivative terms were treated in [16,12,13,14,15]. The dispersion relation studied there was incomplete. The full dispersion relation for the perturbations in the theory (13), (14) is
$$\left(\omega^2 - k^2\right)^2 = m_g^2\left(\omega^2 - P_{,X}\,k^2\right).$$
This equation coincides with eq. (15) in the limit of small ω studied in [16,12,13,14,15]. However, this equation has two different branches of solutions, which we will present, for simplicity, for the case $P_{,X} = 0$ corresponding to the minimum of the ghost condensate potential P(X):
$$\omega_{1,2} = \frac{1}{2}\left(\sqrt{m_g^2 + 4k^2} \mp m_g\right).$$
At high momenta, $k^2 \gg m_g^2$, the spectrum for all 4 solutions $\pm\omega_{1,2}$ is nearly the same, $\omega \approx \pm|k|$. At small momenta, $k^2 \ll m_g^2$, one has two types of solutions. The lower frequency solution, which was found in [16], is
$$\omega_1 \approx \frac{k^2}{m_g}\,.$$
But there is also another, higher frequency solution,
$$\omega_2 \approx m_g + \frac{k^2}{m_g}\,.$$
The reason for the existence of an additional branch of solutions is very simple. The equation for the field φ in the presence of the term with higher derivatives is of the fourth order. To specify its solutions it is not sufficient to know the initial conditions for the field and its first derivative; one must also know the initial conditions for the second and third derivatives. As a result, a single equation describes two different degrees of freedom. To find a proper interpretation of these degrees of freedom, one must perform their quantization. This will be done in the next two sections. As we will show there, the lower frequency solution corresponds to normal particles with positive energy $\omega = +\omega_1(k)$, whereas the higher frequency solution corresponds to ekpyrotic ghosts with negative energy $-\omega_2(k)$. The quantity $-m_g$ has the meaning of the ghost mass: it is given by the energy $\omega = -\omega_2(k)$ at k = 0, and it is negative.

Hamiltonian quantization

We see that our equations for ω have two sets of solutions, corresponding to states with positive and negative energy. As we will see now, some of them correspond to normal particles, and some of them are ghosts. We will find below that the Hamiltonian based on the classical Lagrangian in eq. (13) is
$$H = \int d^3k\left[\,\omega_1(k)\,a^\dagger_k a_k - \omega_2(k)\,c^\dagger_k c_k\,\right],$$
up to an (infinite) constant. The expressions for ω₁ and ω₂ will be presented below for the case of generic $c_s^2$; for $c_s^2 = 0$ they are given in eqs. (18) and (19). Both ω₁ and ω₂ are positive; therefore $a^\dagger_k$ and $a_k$ are creation/annihilation operators of normal particles, whereas $c^\dagger_k$ and $c_k$ are creation/annihilation operators of ghosts.

We will perform the quantization starting with the Lagrangian in eq. (13), with an arbitrary speed of sound, $c_s^2 = P_{,X}$. The case $c_s = 1$ is the Lorentz invariant Lagrangian. The case $c_s^2 = P_{,X} = 0$ is the case considered in the previous section, appropriate to the ghost condensate and the new ekpyrotic scenario at the minimum of P(X). By rescaling the field $\pi \to \frac{M^2}{m^2}\pi$ we have
$$L = \frac{1}{2}\dot\pi^2 - \frac{c_s^2}{2}(\partial_i\pi)^2 - \frac{1}{2m_g^2}(\Box\pi)^2\,.$$
This is the no-gravity theory considered in the previous section. Note that the ghost condensate setup is already built in: the negative kinetic term of the original ghost is eliminated by the condensate. The existence of higher derivatives was considered in [12,13,14,15,16] only as a 'cure' for the problem of stabilizing the system after the original ghost condensation.
As we argued in the previous section, this 'cure' brings in a new ghost, which remained unnoticed in [12,13,14,15,16]. In this section, as well as in the next one, we will present a detailed derivation of this result. Because of the presence of higher derivatives in the Lagrangian, the Hamiltonian quantization of this theory is somewhat nontrivial. It can be performed by the method invented by Ostrogradski [23]. Thus we start with the rescaled eq. (13),
$$L = \frac{1}{2}\dot\pi^2 - \frac{c_s^2}{2}(\partial_i\pi)^2 - \frac{1}{2m_g^2}(\Box\pi)^2\,.$$
The equation of motion for the field π is
$$\ddot\pi - c_s^2\,\Delta\pi + \frac{1}{m_g^2}\,\Box^2\pi = 0\,.$$
If the Lagrangian depends on the field π and on its first and second time derivatives, the general procedure is the following. Starting with $L = L(\pi, \dot\pi, \ddot\pi)$, one defines 2 canonical degrees of freedom, $(q_1, p_1)$ and $(q_2, p_2)$:
$$q_1 = \pi\,,\qquad p_1 = \frac{\partial L}{\partial\dot\pi} - \frac{d}{dt}\frac{\partial L}{\partial\ddot\pi}\,,\qquad q_2 = \dot\pi\,,\qquad p_2 = \frac{\partial L}{\partial\ddot\pi}\,.$$
The canonical Hamiltonian is
$$H = p_1\,\dot q_1 + p_2\,\dot q_2 - L\,.$$
The canonical Hamiltonian equations of motion,
$$\dot q_i = \frac{\partial H}{\partial p_i}\,,\qquad \dot p_i = -\frac{\partial H}{\partial q_i}\,,$$
are standard; they exactly reproduce the Lagrangian equation of motion (26). The quantization procedure requires promoting the Poisson brackets to commutators, which allows one to identify the spectrum. There are many known examples of the Ostrogradski procedure of derivation of the canonical Hamiltonian; see for example [24,25]. The Hamiltonian density constructed by the Ostrogradski procedure for the Lagrangian (25) is
$$\mathcal{H} = p_1 q_2 + p_2\,\Delta q_1 - \frac{m_g^2}{2}\,p_2^2 - \frac{1}{2}\,q_2^2 + \frac{c_s^2}{2}\,(\partial_i q_1)^2\,.$$
The next step in quantization is to consider the ansatz for the solution of the classical equations of motion in the form of a superposition of modes oscillating with the two frequencies $\omega_1(k)$ and $\omega_2(k)$ given below. We impose the Poisson brackets $\{q_i, p_j\} = \delta_{ij}$ and promote them to commutators, $[\hat q_i, \hat p_j] = i\,\delta_{ij}$. This quantization condition requires promoting the solution of the classical equation (31) to quantum operator form, and we impose normal commutation relations both on particles, with creation and annihilation operators $a^\dagger$ and $a$, and on ghosts, $c^\dagger$ and $c$:
$$[a_k, a^\dagger_{k'}] = \delta^3(k - k')\,,\qquad [c_k, c^\dagger_{k'}] = \delta^3(k - k')\,.$$
Here
$$\omega_{1,2}^2 = k^2 + \frac{m_g^2}{2} \mp \frac{m_g}{2}\sqrt{m_g^2 + 4k^2\,(1 - c_s^2)}\,.$$
The Hamiltonian operator acquires a very simple form,
$$H = \int d^3k\left[\,\omega_1(k)\,a^\dagger_k a_k - \omega_2(k)\,c^\dagger_k c_k\,\right] + C\,.$$
Here the infinite term C, proportional to $\delta^3(0)$, represents the infinite shift of the vacuum energy due to the sum over all modes of the zero-point energies $\frac{1}{2}(\omega_1 - \omega_2)$; it is usually neglected in quantum field theory. Apart from this infinite c-number, this is the expression promised in eq. (23). We now define the vacuum state |0⟩ as the state which is annihilated both by the particle and by the ghost annihilation operators, $a_k|0\rangle = c_k|0\rangle = 0$. Thus the energy operator acting on a state of a particle has a positive eigenvalue, and acting on a state of a ghost it has a negative eigenvalue. This confirms the physical picture outlined at the end of the previous section.

Lagrangian quantization

The advantage of the Hamiltonian method is that it gives an unambiguous definition of the quantum-mechanical energy operator, which is negative for ghosts. This is most important for our subsequent analysis of the vacuum instability in the new ekpyrotic scenario. However, it is also quite instructive to explain the existence of the ghost field in the new ekpyrotic scenario using the Lagrangian approach. The Lagrangian formulation is very convenient for coupling the model to gravity. Using a Lagrange multiplier λ and an auxiliary field B, one can rewrite eq. (24) in a form (40) that is linear in $\Box\pi$. Variation with respect to B gives $\lambda = B/m_g^2$. After substituting λ in (40) and dropping a total derivative we obtain a Lagrangian (41) containing only first derivatives. Introducing new variables σ, ξ, and substituting $\pi = \sigma - \xi$, $B = \Box\pi = m_g^2\,\xi$ in (41), we obtain a Lagrangian for σ and ξ which, in the case $c_s^2 = 1$, reduces to
$$L = \frac{1}{2}\dot\sigma^2 - \frac{1}{2}(\partial_i\sigma)^2 - \frac{1}{2}\dot\xi^2 + \frac{1}{2}(\partial_i\xi)^2 + \frac{m_g^2}{2}\,\xi^2\,.$$
In the case $c_s^2 = 1$ we thus have two decoupled scalar fields: a massive one with negative kinetic energy and a massless one with positive kinetic energy.
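Before moving on, a quick symbolic sanity check of the Ostrogradski construction may be useful. The sketch below (ours, using sympy) treats the single homogeneous mode, i.e. the $k = 0$ reduction of eq. (25), $L = \frac{1}{2}\dot\pi^2 - \frac{1}{2m_g^2}\ddot\pi^2$, which is discussed next. It confirms that the Hamiltonian is linear in $p_1$, and hence unbounded from below, and that the Euler-Lagrange equation is fourth order, with frequencies 0 and $m_g$:

```python
import sympy as sp

t = sp.symbols('t')
m_g = sp.symbols('m_g', positive=True)
pi = sp.Function('pi')(t)
pid, pidd = sp.diff(pi, t), sp.diff(pi, t, 2)

# k = 0 mode of the rescaled Lagrangian (25)
L = sp.Rational(1, 2) * pid**2 - pidd**2 / (2 * m_g**2)

# Fourth-order Euler-Lagrange equation:
#   d^2/dt^2 (dL/dpidd) - d/dt (dL/dpid) = 0   (the dL/dpi term vanishes here)
eom = sp.diff(sp.diff(L, pidd), t, 2) - sp.diff(sp.diff(L, pid), t)
print(sp.simplify(eom))
# -> -Derivative(pi, (t, 4))/m_g**2 - Derivative(pi, (t, 2)), i.e.
# pi'''' + m_g^2 pi'' = 0: oscillatory solutions with frequencies 0 and m_g,
# matching omega_1(0) = 0 and omega_2(0) = m_g.

# Ostrogradski phase space: q2 = pi_dot, p2 = dL/dpidd = -pidd/m_g^2,
# so pidd = -m_g^2 p2, and H = p1 q2 + p2 pidd - L becomes:
q2, p1, p2 = sp.symbols('q2 p1 p2')
H = p1 * q2 + p2 * (-m_g**2 * p2) - (sp.Rational(1, 2) * q2**2
                                     - (m_g**2 * p2)**2 / (2 * m_g**2))
print(sp.expand(H))   # -> -m_g**2*p2**2/2 + p1*q2 - q2**2/2
# H is linear in p1: unbounded from below, the hallmark of the Ostrogradski ghost.
```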
For the homogeneous field (the k = 0 mode) the Lagrangian does not depend on $c_s^2$ and reduces to
$$L = \frac{1}{2}\dot\sigma^2 - \frac{1}{2}\dot\xi^2 + \frac{m_g^2}{2}\,\xi^2\,.$$
The relation between the k = 0 mode of the original field π, the normal field σ, and the ghost field ξ is $\pi = \sigma - \xi$, with $\xi = \ddot\pi/m_g^2$. When $c_s^2 \neq 1$ and $k \neq 0$ these fields still couple. To diagonalize the Lagrangian in eq. (24) and decouple the oscillators we have to go to normal coordinates, as in the classical mechanics of coupled harmonic oscillators. For that we need to solve the eigenvalue problem and find the eigenfrequencies of the oscillators. Let us consider the modes with wavenumbers k. For such modes one can perform a change of variables to new mode functions $\sigma_k$ and $\xi_k$, where ω₁, ω₂ are defined in eqs. (36), (37); in the special case $c_s^2 = 0$ the expressions for ω₁, ω₂ simplify and are given in eqs. (18) and (19). After this change of variables, the momentum-space Lagrangian splits into decoupled oscillators: the modes of σ are normal oscillators with frequencies $\omega_1(k)$, whereas the modes of ξ enter with an overall minus sign and frequencies $\omega_2(k)$, i.e. they are ghosts. Using this Lagrangian, one can easily confirm the final result of the Hamiltonian quantization given in the previous section.²

² After we finished this paper, we learned that the Lagrangian quantization of the ghost condensate scenario was earlier performed by Aref'eva and Volovich for the case $c_s^2 = P_{,X} = 0$ [26], and they also concluded that this scenario suffers from the existence of ghosts. Where our works overlap, our results agree with each other. We use the Lagrangian approach mainly to have an alternative derivation of the results of the Hamiltonian quantization. The Hamiltonian approach clearly establishes the energy operator and the sign of its eigenvalues, which is necessary for an unambiguous proof that the energy of the ghosts is indeed negative and that the ghosts do not disappear at non-vanishing $c_s^2 = P_{,X} \neq 0$, when the ekpyrotic universe is away from the ghost condensate minimum.

The classical mode $\sigma_k$ is associated after quantization with the creation/annihilation operators $a^\dagger_k$, $a_k$ of normal particles, and the classical mode $\xi_k$ with the creation/annihilation operators $c^\dagger_k$, $c_k$ of ghost particles. A quantization of the theory in eq. (47) leads to the Hamiltonian in eq. (23).

Energy-momentum tensor and equations of motion

First, we compute the energy-momentum tensor (EMT) of the Lagrangian (42) using the Noether procedure; here $\eta^\mu{}_\nu$ is the Minkowski metric. For the homogeneous field (the k = 0 mode), the resulting energy density splits into two parts, a normal field part and an ekpyrotic ghost field part:
$$\varepsilon = \varepsilon_\sigma + \varepsilon_\xi = \frac{1}{2}\dot\sigma^2 - \left(\frac{1}{2}\dot\xi^2 + \frac{m_g^2}{2}\,\xi^2\right).$$
Thus the energy of the ghost field ξ is negative.

Up to now we have turned off gravity. In the presence of gravity, the energy-momentum tensor of the full Lagrangian (1) of Sec. 2 is calculated by varying the action with respect to the metric; from it, for a homogeneous, spatially flat FRW space-time, one obtains the energy density ε and the pressure p. Note that in the homogeneous case, in the absence of gravity, the ekpyrotic ghost field ξ as defined in eq. (45) is directly proportional to the field
$$Y = \frac{M^2}{m^2}\,\xi\,.$$
The closed equations of motion which we used for our numerical analysis are obtained from (1) by introducing the auxiliary field Y; here $X = \dot\phi^2/2m^4$. In these equations the higher derivative corrections appear in the terms containing derivatives of Y. The last of these equations shows that $Y \to 0$ in the limit $M' \to \infty$ (i.e. $m_g \to \infty$), in which case the dynamics reduces to the one with no higher derivative corrections.
The closed equations of motion for π coupled to gravity are obtained by expanding (55) and linearizing with respect to π and Y.

On reality of the bounce and reality of ghosts

Using the equations derived above, we performed an analytical and numerical investigation of the possibility of the bounce in the new ekpyrotic scenario. We will not present all of the details of this investigation here, since it contains a lot of material which may distract the reader from the main conclusion of our paper, discussed in the next section: because of the existence of the ghosts, this theory suffers from a catastrophic vacuum instability. If this is correct, any analysis of the classical dynamics has very limited significance. However, we will briefly discuss our main findings here, just to compare them with the expectations expressed in [12,13,14,15].

Our investigation was based on the particular scenario discussed in [13,15], because no explicit form of the full ekpyrotic potential was presented in [12,14]. The authors of [13,15] presented the full ekpyrotic potential, but they did not fully verify the validity of their scenario, even in the absence of the higher derivative terms.

Before discussing our results taking into account the higher derivatives, let us recall several constraints on the model parameters which were derived in [12,13,14,15]. We will represent these constraints in terms of the ghost condensate mass $m_g$ instead of the parameter M′, for K = 1; in this case the stability condition (7.19) in [13] (see also [12,14]) becomes a constraint on $m_g$. It was assumed in [13] that the bounce should occur very quickly, during the time $\Delta t \lesssim |H_0|^{-1} \sim 1/\sqrt{p\,|V_{\min}|}$. Here H₀ is the Hubble constant at the end of the ekpyrotic stage, just before it starts decreasing during the bounce, $p \sim 10^{-2}$, and $V_{\min}$ is the value of the ekpyrotic potential at its minimum. During the bounce one can estimate $\Delta H \sim |H_0| \sim \dot H\,\Delta t \lesssim \dot H\,|H_0|^{-1}$, because we assume, following [13], that $\Delta t < |H_0|^{-1}$, and we assume an approximately linear change of H from $-|H_0|$ to $|H_0|$. This means that $\dot H \gtrsim |H_0|^2$. In this case the previous inequalities become quite restrictive. This set of inequalities implies that a stable bounce is not generic; it can occur only for a fine-tuned value of the ghost mass $m_g$. The method of derivation of these conditions required an additional condition to be satisfied, $|H_0| \sim \sqrt{p\,|V_{\min}|} \ll M^2$, see Eqs. (8.8) and (8.17) of Ref. [13]. This condition can be satisfied by a suitable choice of parameters.

Whereas the condition (59) seems necessary in order to avoid the development of the gravitational instability and the gradient instability during the bounce for $m_g \gg M^2$, it is not sufficient, simply because the very existence of the bounce may require $m_g$ to be very much different from this fine-tuned value.

Figure 1: The "new ekpyrotic potential"; see Fig. 3 in [13] and Fig. 6 in [15]. The cosmological evolution in this model results in a universe with a permanently growing rate of expansion after the bounce, which is unacceptable.

Indeed, our investigation of the cosmological evolution in this model shows that generically the bounce does not appear at all, or one encounters a singular behavior of $\ddot\phi$ because of the vanishing of the term $P_{,X} + 2XP_{,XX}$ in (55), or one finds an unstable bounce, or the bounce ends up with an unlimited growth of the Hubble constant, as in the Big Rip scenario [27]. Finding a proper potential leading to a desirable cosmological evolution requires a lot of fine-tuning, in addition to the fine-tuning already described in [13,15].
For example, the bounce in the model with the "new ekpyrotic potential" described in [13,15] and shown in Fig. 1 results in a universe with a permanently growing rate of expansion after the bounce, which would be absolutely different from our universe. To avoid this disaster, one must bend the potential, making it approach the value corresponding to the present value of the cosmological constant, see Fig. 2. This bending should not be too sharp, and it should not begin too early, since otherwise the universe bounces back and ends up in the singularity. Fig. 3 shows the bouncing solution in the theory with this potential.

Figure 2: An improved potential which leads to a bounce followed by a normal cosmological evolution.

We do not know whether this extremely fine-tuned potential can be derived from any realistic theory. Our calculations clearly demonstrate the reality of the ekpyrotic ghosts; see Fig. 4, which shows the behavior of the ghost-related field $Y = \frac{M^2}{m^2}\xi$ near the bounce. The oscillations shown in Fig. 4 represent ghost matter with negative energy, which was generated during the ekpyrotic collapse. We started with the initial conditions $Y = \dot Y = 0$, i.e. in the vacuum without ghosts, and yet the ghost-related field Y emerged dynamically. It oscillates with a frequency which is much higher than the rate of change of the average value of the field φ. This shows that the ekpyrotic ghost is not just a mathematical construct or a figment of the imagination, but a real field. We have found that the amplitude of the oscillations of the ghost field is very sensitive to the choice of initial conditions; it may be negligibly small or very large.

Therefore in the investigation of the cosmological dynamics one should not simply consider a universe filled with scalar fields or scalar particles. The universe generically will contain normal particles as well as ghost particles and fields with negative energy. The ghost particles will interact with normal particles in a very unusual way: particles and ghosts will run after each other with ever growing speed. This regime is possible because when the normal particles gain energy, the ghosts lose energy, so the acceleration regime is consistent with energy conservation. This unusual instability, which is very similar to the process considered in the next section, can make it especially difficult to solve the homogeneity problem in this scenario.

Ghosts, singularity and vacuum instability

It was not the goal of the previous section to prove that the ghosts do not allow one to solve the singularity problem. They may or may not spoil the bounce in the new ekpyrotic scenario. However, in general, if one is allowed to introduce ghosts, then the solution of the singularity problem becomes nearly trivial, and it does not require the ekpyrotic scenario or the ghost condensate. Indeed, let us consider a simple model describing a flat collapsing universe which contains a dust of heavy non-relativistic particles with initial energy density $\rho_M$, and a gas of ultra-relativistic ghosts with initial energy density $-\rho_g < 0$. Suppose that at the initial moment t = 0, when the scale factor of the universe was equal to a(0) = 1, the energy density was dominated by the energy density of normal particles, $\rho_M - \rho_g > 0$. The absolute value of the ghost energy density in the collapsing universe grows faster than the energy density of the non-relativistic matter.
The Friedmann equation describing this collapsing universe is
$$H^2 = \frac{1}{3}\left(\frac{\rho_M}{a^3} - \frac{\rho_g}{a^4}\right).$$
In the beginning of the cosmological evolution the universe is collapsing, but when the scale factor shrinks to $a_{\rm bounce} = \rho_g/\rho_M$, the Hubble constant vanishes, and the universe bounces back, thus avoiding the singularity. Thus nothing can be easier than solving the singularity problem once we invoke ghosts to help us in this endeavor, unless we are worried about the gravitational instability problem mentioned in the previous section. Other examples of situations where ghosts save us from the singularity can be found, e.g., in [28], where the authors not only study a way to avoid the singularity with the help of ghosts, but even investigate the evolution of metric perturbations during the bounce.
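A minimal numerical sketch of this toy model (our own illustration; the density values are arbitrary, in units $8\pi G = 1$) confirms that the collapse stops exactly at $a = \rho_g/\rho_M$ and the universe re-expands:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Dust with positive energy rho_M / a^3 plus ultra-relativistic ghosts
# with negative energy -rho_g / a^4, as in eq. (61).
rho_M, rho_g = 1.0, 0.5                      # hypothetical values at a(0) = 1

def rhs(t, y):
    a, adot = y
    # acceleration equation a''/a = -(rho + 3p)/6, with p_dust = 0 and
    # p_ghost = -(1/3) rho_g / a^4 (radiation-like, negative energy)
    addot = -a * (rho_M / a**3 - 2.0 * rho_g / a**4) / 6.0
    return [adot, addot]

H0 = -np.sqrt((rho_M - rho_g) / 3.0)         # collapsing initial conditions
sol = solve_ivp(rhs, [0.0, 12.0], [1.0, H0], rtol=1e-10, atol=1e-12)

a = sol.y[0]
print("minimal scale factor:  ", a.min())
print("analytic bounce value: ", rho_g / rho_M)   # a_bounce = rho_g / rho_M
print("final scale factor:    ", a[-1])           # > a_min: re-expansion
```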
So what can be wrong with it? A long time ago, the obvious answer would have been that theories with ghosts lead to negative probabilities, violate unitarity, and therefore do not make any sense whatsoever. Later on, it was realized that if one treats ghosts as particles with negative energy, then the problems with unitarity are replaced by the problem of vacuum stability due to interactions between ghosts and normal particles with positive energy; see, e.g., [29,30,31,32,33,34,35]. Indeed, unless the ghosts are hidden in another universe [30], nothing can forbid the creation of pairs of ghosts and normal particles under the condition that their total momentum and energy vanish. Since the total energy of ghosts is negative, this condition is easy to satisfy. There are many channels of vacuum decay; the simplest and absolutely unavoidable one is due to the universal gravitational interaction between ghosts and all other particles, e.g. photons. An example of this interaction was considered in [32], see Fig. 5. Nothing can forbid this process because it does not require any energy input: the positive energy of normal particles can be compensated by the negative energy of ghosts. An investigation of the rate of the vacuum decay in this process leads to a doubly divergent result. First of all, there is a power-law divergence because nothing forbids the creation of particles with indefinitely large energy. In addition, there is also a quadratic divergence in the integral over velocity [32,34]. This leads to a catastrophic vacuum decay.

Of course, one can always argue that such processes are impossible or suppressed because of some kind of cutoff in momentum space, or further corrections, or non-local interactions. However, the necessity of introducing such a cutoff, or additional corrections to corrections, after introducing the higher derivative terms which were supposed to work as a cutoff in the first place, adds a lot to the already very high price of proposing an alternative to inflation: first it was the ekpyrotic theory, then the ghost condensate and curvatons, and finally the ekpyrotic ghosts with negative energy, which lead to a catastrophic vacuum instability. And if we are ready to introduce an ultraviolet cutoff in momentum space, which corresponds to a small-scale cutoff in space-time, then why would we even worry about the singularity problem, which is supposed to occur on an infinitesimally small space-time scale?

In fact, this problem was already emphasized by the authors of the new ekpyrotic scenario, who wrote [13]: "But ghosts have disastrous consequences for the viability of the theory. In order to regulate the rate of vacuum decay one must invoke explicit Lorentz breaking at some low scale [32]. In any case there is no sense in which a theory with ghosts can be thought of as an effective theory, since the ghost instability is present all the way to the UV cut-off of the theory." We have nothing to add to this characterization of their own model.

Appendix. Exorcising ghosts?

After this paper was submitted, one of the authors of the new ekpyrotic scenario argued [36] that, according to [37], ghosts can be removed by field redefinitions and by adding other degrees of freedom in the effective UV theory [37]. Let us reproduce this argument and explain why it does not apply to the ghost condensate theory and to the new ekpyrotic scenario.

Refs. [36,37] considered a normal massless scalar field φ with the Lagrangian density, in (−,+,+,+) signature,
$$L = -\frac{1}{2}(\partial\phi)^2 + \frac{a}{2m_g^2}(\Box\phi)^2 - V_{\rm int}(\phi)\,,$$
where a = ±1 and $V_{\rm int}$ is a self-interaction term. This theory is similar to the ghost condensate/new ekpyrotic theory in the case a = −1, $c_s = 1$, see eqs. (1) and (24). The sign of a is crucially important: the term $+\frac{1}{2m_g^2}(\Box\phi)^2$ would not protect this theory against the gradient instability in the region with NEC violation. Note that in the notation of [36,37], $m_g = \Lambda$, which could suggest that the ghost mass is a UV cut-off, and that therefore there are no dangerous excitations with energies and momenta higher than $m_g$. However, this interpretation of the theory (62) would be misleading. Upon a correct quantization, this theory can be represented as a theory of two fields without the higher derivative non-renormalizable term $\frac{a}{2m_g^2}(\Box\phi)^2$, see eq. (42). One can introduce a UV cut-off Λ when regularizing Feynman diagrams in this theory, but there is absolutely no reason to identify it with $m_g$; in fact, the UV cut-off which appears in the regularization procedure is supposed to be arbitrarily large, so perturbations with frequencies greater than $m_g$ are not forbidden.

Moreover, as we already explained in Section 2, one cannot take the higher derivative term into account only up to some cut-off $\omega^2, k^2 < \Lambda^2$. If, for example, we "turn on" this term only at $k^2 < \Lambda^2$, it is not going to protect us from the gradient instability, which occurs at $\omega^2 = P_{,X}\,k^2$ for indefinitely large k in the region where the NEC is violated and $P_{,X} < 0$. Note that this instability grows stronger at greater values of the momenta k. Therefore if one wants to prove that the new ekpyrotic scenario does not lead to instabilities, one must verify it for all values of momenta; checking it for $\omega^2, k^2 < m_g^2$ is insufficient. Our results imply that if one investigates this model exactly in the way it is written now (i.e. with the term $-\frac{1}{2m_g^2}(\Box\phi)^2$), it does suffer from vacuum instability, and if we discard the higher derivative term at momenta greater than some cut-off, the instability becomes even worse.

Is there any other way to save the new ekpyrotic scenario? One could argue [36,37] that the term $\frac{a}{2m_g^2}(\Box\phi)^2$ is just the first term in a sum of many higher derivative terms in an effective theory, which can be obtained by integrating out high energy degrees of freedom of some extended, physically consistent theory. In other words, one may conjecture that the theory can be made UV complete, and that after that the problem with ghosts disappears. However, not every theory with higher derivatives can be UV completed. In particular, the possibility to do so may depend on the sign of the higher derivative term [19]. According to [37], the theory (62) is plagued by ghosts independently of the sign of the higher derivative term in the Lagrangian.
One can show this by introducing an auxiliary scalar field χ and a new Lagrangian L′ which reduces exactly to L once χ is integrated out. L′ is diagonalized by the substitution $\phi = \phi' - a\chi$, which clearly signals the presence of a ghost: χ has a wrong-sign kinetic term. The authors of [37] then identified χ as a tachyon for a = −1, suggesting that in this case χ has exponentially growing modes. However, this is not the case: due to the opposite sign of the kinetic term of the χ-field, the tachyon is at a = +1, not at a = −1. Indeed, because of the flip of the sign of the kinetic term of the field χ, its equation of motion has oscillatory solutions $\chi \sim e^{\pm i(\omega t - \vec k\cdot\vec x)}$ with real frequency ω. For a field with the normal sign of the kinetic term, a negative mass squared would mean exponentially growing modes. But a flip of the sign of the kinetic term performed together with a flip of the sign of the mass term does not lead to exponentially growing modes [29,30].

Based on this misidentification of the negative mass of the field with the wrong-sign kinetic term as a tachyon, the authors chose to continue with the a = +1 case in eq. (62). From this point on, their arguments are no longer related to the ghost condensate theory and the new ekpyrotic theory, where a = −1. We will return to the case a = −1 shortly. For the a = +1 case they argued that the situation is not as bad as it might seem. They proposed to use the scalar field theory (62) at energies below $m_g$, and postulated that some new degree of freedom enters at $k > m_g$ and takes care of the ghost instability. The authors describe this effect by adding a term $-(\partial\chi)^2$ to construct the high energy Lagrangian. For $V_{\rm int} = 0$ they postulate a UV Lagrangian (66) and use the shift $\phi = \tilde\phi - \chi$ to bring it to a simple form. This trick reverses the sign of the kinetic term of the field χ, and the ghost magically converts into a perfectly healthy scalar with mass $m_g$.

One may question the validity of this procedure, but let us try to justify it by looking at the final result. Consider the equations of motion for χ following from eq. (66) and solve them by iteration in the approximation $|\Box| \ll m_g^2$. Now replace χ in eq. (66) by its expression in terms of φ as given in eq. (68). The result is our original Lagrangian (62), plus some additional higher derivative terms, which are small at $|\Box| \ll m_g^2$, i.e. at $|\omega^2 - k^2| \ll m_g^2$. Thus one may conclude that, for a = +1, the theory (62), which has tachyonic ghosts, may be interpreted as a low energy approximation of the UV consistent theory (67).

Now let us return to the ghost condensate/new ekpyrotic case. To avoid gradient instabilities in the ekpyrotic scenario, the sign of the higher derivative term in eq. (62) has to be negative, a = −1; see eq. (1) and also eq. (13) and the discussion below it. This means that one should start with eq. (62) with a = −1. This theory is not tachyonic, but, as we demonstrated by performing its Hamiltonian quantization, it has ghosts, particles with negative energy, in its spectrum. Can we improve the situation by the method used above? Let us start with the same construction, $L' - (\partial\chi)^2$, as in the a = +1 case, and replace χ by the iterative solution of its equation of motion. Up to terms which are small at $|\Box| \ll m_g^2$, the resulting theory (69) does reproduce the model (62) with a = −1, up to higher order corrections in $|\Box|/m_g^2$. The theory (69) can also be written as $L_{a=-1}$ in terms of two fields, where $\phi = \tilde\phi + \chi$. The sign of the kinetic term of both fields is then normal, but the mass term still has the wrong sign, which leads to the tachyonic instability
$$\delta\chi \sim \exp\left(\sqrt{m_g^2 - k^2}\;t\right).$$
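The distinction between the two cases can be checked one mode at a time (a toy illustration of ours, with $m_g = 1$ and a momentum $k < m_g$):

```python
import numpy as np

m_g, k = 1.0, 0.5     # toy parameters; any k < m_g shows the effect

# Normal kinetic term but wrong-sign mass term (the a = -1 UV attempt):
# mode equation chi'' = -(k^2 - m_g^2) chi  ->  omega^2 = k^2 - m_g^2 < 0,
# so the mode grows as exp(sqrt(m_g^2 - k^2) t): a genuine tachyon.
omega2_tachyon = k**2 - m_g**2
print("tachyon: omega^2 =", omega2_tachyon,
      "-> growth rate", np.sqrt(-omega2_tachyon))

# Ghost xi of the diagonalized theory: kinetic AND mass signs both flip,
# so the overall sign cancels in the equation of motion,
# xi'' = -(k^2 + m_g^2) xi  ->  omega^2 = k^2 + m_g^2 > 0: purely oscillatory;
# the mode does not grow, but the energy it carries is negative.
omega2_ghost = k**2 + m_g**2
print("ghost:   omega^2 =", omega2_ghost, "-> oscillatory, negative energy")
```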
Therefore the cure for the ghost instability proposed in [36,37] does not work for the case a = −1 of the ghost condensate/ekpyrotic scenario. Moreover, the procedure described above is valid only for $|\omega^2 - k^2| \ll m_g^2$. Meanwhile the gradient instability of the ekpyrotic theory in the regime of null energy condition violation ($c_s^2 < 0$) is most dangerous in the limit $k^2 \to \infty$, where this procedure does not apply, see (12). This agrees with the general negative conclusion of Refs. [19,20,21,22] with respect to theories of this type.

In this Appendix we analyzed the Lorentz-invariant theory (62) because the argument given in [36,37] was formulated in this context. The generalization of our results to the ghost condensate/new ekpyrotic case is straightforward. Indeed, our results directly follow from the correlation between the sign of the higher derivative term in (63) and the sign of the mass squared term in (64). One can easily verify that this correlation holds independently of the value of $c_s^2$, i.e. at all stages of the ghost condensate/new ekpyrotic scenario.

To conclude, our statement that the ghost condensate theory and the new ekpyrotic scenario imply the existence of ghosts is valid for the currently available versions of these theories, as they are presented in the literature. In this Appendix we explained why the recent attempts to make theories with higher derivatives physically consistent [36,37] do not apply to the ghost condensate theory and the new ekpyrotic scenario. One can always hope to save the new ekpyrotic scenario by providing a UV completion of this theory in some other way, but similarly one can always hope that the problem of the cosmological singularity will be solved in some other way. Until that is done, one should not claim that the problem has already been solved.
Inorganic Anions Regulate the Phase Transition in Two Organic Cation Salts Containing [(4-Nitroanilinium)(18-crown-6)]+ Supramolecules

Introduction

From both theoretical and applied viewpoints, phase-transition compounds are popular and valuable in increasing the general understanding of structure-property relations and in exploring functional materials with novel physical properties [1-10]. Phase-transition crystalline materials are usually accompanied by an abrupt change in some physical properties around the transition temperature. These materials show great promise for applications in molecular sensors, switches, and data storage devices [11-15]. In particular, energy harvesting occurs during phase changes; thus, phase-transition materials may serve as energy-saving materials. Designing special inorganic-organic hybrid compounds with molecular dielectrics is an effective method for synthesizing ideal phase-transition materials [16-19].

Phase-transition inorganic-organic hybrid compounds display novel crystal structures and interesting physical properties, including ferroelectric, dielectric, optical, and piezoelectric properties [20-24]. Among them, compounds based on 15-crown-5 or 18-crown-6 are good candidates, owing to their variable conformation, such as [(RNH3)(18-crown-6)][A], where R is an alkyl or aryl group and A is an anion. The driving force of these phase transitions can be ascribed to motional changes in the R-NH3+ guest cation (R = aryl group) or/and in the anionic units. The use of the R group as a molecular rotor or pendulum unit produces desirable properties [25-29]. As a result, the motion of the organic ammonium cations changes their dynamic state. This alteration further leads to dielectric changes and ferroelectricity through phase transitions between the disordered high-temperature phase (HTP) and the ordered low-temperature phase (LTP) [30-33]. On the other hand, the asymmetric unit of the supramolecular adduct also contains counter anions. These anions are primarily tetrahedral ions, such as BF4−, ClO4−, and IO4−. They easily change position with varying temperature and weak interactions, because of their higher symmetry and relatively small volume [34,35]. In fact, inorganic anions such as HSO4− have rarely been explored. Given these findings, in this work we report the syntheses of (4-HNA)(18-crown-6)(HSO4) (1) and (4-HNA)2(18-crown-6)2(PF6)2(CH3OH) (2) to determine whether other suitable anion geometries can regulate a potential phase transition. The structures, phase transitions, and dielectric properties of the two compounds are summarized in Scheme 1.
Crystal Structure of 1

The crystal structure of 1 was characterized at different temperatures by X-ray diffraction to confirm whether the phase transition was associated with structural changes. Compound 1 crystallized in the monoclinic space group P21/c at both temperatures, with a = 10.450(6) Å at 100 K. The crystal structures of 1 are similar in the LTP and HTP forms. The asymmetric unit is composed of one cationic [(4-HNA)(18-crown-6)]+ moiety and one anionic HSO4− (Figure 1a). The 4-HNA cations are connected to the 18-crown-6 ring, forming a supramolecular rotator-stator structure through N-H···O interactions between the -NH3+ group and the six O atoms (O1, O2, O3, O4, O5, and O6) of 18-crown-6. The average hydrogen-bonding N···O distances of 2.874 and 2.888 Å at 100 K and 296 K, respectively, are almost the same as the standard NH3+···O distance for crown ether molecular-based systems (Table S1). The π-plane of the 4-HNA cation is nearly perpendicular to the mean plane of the oxygen atoms. The N1 atom of the 4-HNA cation is located above the best plane of the oxygen atoms of the crown ring, rather than in the nesting position (Figure 1a). The dihedral angles between the aromatic ring and the crown ether ring are 92.48° (100 K) and 93.47° (296 K).

In Figure 1b, the packing diagram of complex 1 is shown along the a+c axis. The packing diagram for the LTP (100 K) indicates that the dimer structure of the two HSO4− anions, linked by O-H···O hydrogen bonds, fills the space formed by four [(4-HNA)(18-crown-6)]+ supramolecular cations. In addition, the O-H···O hydrogen-bonding interaction is stronger than the N-H···O hydrogen bonds, with bond distances of 2.647 Å and 2.657 Å at 100 K and 296 K, respectively. This finding indicates that the movement of the H proton between the donor (O11) and acceptor (O10) atoms is more difficult. The most significant difference between the structures at 296 K and 100 K lies in the distance between the two crown ether rings, which changes from 4.208 Å at 296 K to 4.063 Å at 100 K. C-H···π interactions are also noted in the [(4-HNA)(18-crown-6)]+ complex cations: the aromatic rings form two C-H···π interactions with distances of 3.195 Å at 100 K and 3.267 Å at 296 K (Figure 2).
Crystal Structure of 2

The crystal structure of 2 was analyzed in the HTP (296 K) and LTP (100 K) forms. When the HSO4− anion was replaced with the PF6− anion, compound 2 at both 296 K and 100 K crystallized in the monoclinic system with the same space group P21/c. Although the temperature changed, the space group of crystal 2 remained unchanged; hence, no structural symmetry breaking occurred in this temperature range. Crystallographic data and details of the collection and refinement at 296 K and 100 K are listed in Table 1.

The asymmetric unit of compound 2 consists of two independent [(4-HNA)(18-crown-6)]+ supramolecular cations, two PF6− anions, and one methanol molecule at both 296 K and 100 K (Figure 3a). In contrast to the ordered HSO4− anions in 1, the PF6− anions in 2 are disordered at both 100 K and 296 K. The molecular structure of 2 with atomic labeling is shown in Figure 3a. The disordered PF6− anions and methanol molecules fill the space between neighboring supramolecular cations (Figure 3b).

The most striking structural feature distinguishing the LTP and HTP forms is the distance between the two crown ether rings: the distances between neighboring [(4-HNA)(18-crown-6)]+ supramolecular cations in crystal 2 are 8.877 Å (100 K) and 9.082 Å (296 K) (Figure 4). Interestingly, obvious disorder phenomena occur in the -NO2 groups in the HTP form. The O14, O15, O16, and O17 atoms of the -NO2 groups in the supramolecular cations are distinctly disordered, each occupying two sites (O14A, O14B, O15A, O15B, O16A, O16B, O17A, O17B). The occupation factors of the oxygen atoms of the -NO2 groups are displayed in Table S2, suggesting that a biased -NO2 group orientation is achieved in compound 2
(Figure 4b). The -NH3+ group resides in a perching position and attains a configuration similar to that in crystal 1; the group is linked to the oxygen atoms of the crown ether through six N-H···O hydrogen bonds. Apparent hydrogen-bonding interactions occur between the nitrogen and oxygen atoms, with bond lengths of 2.821-2.956 Å and 2.816-2.966 Å at 100 K and 296 K, respectively (Table S3). In crystal 2, the distance between two adjacent crown ether rings is nearly twice as long as that in crystal 1 at both 100 K and 296 K (Figure 4). In crystal 1, the dimer structure of the two HSO4− anions occupies a larger space volume, resulting in a relatively closer packing pattern of the [(4-HNA)(18-crown-6)]+ supramolecular cations. Furthermore, no C-H···π interactions exist in the supramolecular cations of 2. However, weak π···π interactions are found between the aromatic rings of the [(4-HNA)(18-crown-6)]+ supramolecular cations. These interactions stabilize the crystal packing and form the alternating inorganic-organic hybrid structure in the bc plane (Figure S1).

Differential Scanning Calorimetry

Differential scanning calorimetry (DSC) is commonly used to detect whether a compound displays a phase transition triggered by temperature. This approach is also used to confirm the existence of a heat anomaly during heating and cooling: when a compound undergoes a structural phase transition with a thermal entropy change, reversible heat anomalies are detected in both runs. In the DSC curves obtained from crystal 1, a main endothermic peak and a main exothermic peak were observed at 257 K on heating and 252 K on cooling, respectively, corresponding to a 5 K hysteresis width (Figure 5a). These exothermic and endothermic peaks clearly reveal the occurrence of a reversible phase transition. The entropy change (∆S) of the phase transition was too low to be estimated from the DSC. The wide heat hysteresis and the peak shapes reflect the characteristics of a first-order phase transition. The driving force of the phase transition was confirmed by evaluating the crystal structure at different temperatures. For example, apparent differences were observed in the distance between the two crown ether rings, with values of 4.063 Å at 100 K and 4.208 Å at 296 K. Additionally, the dihedral angle between the aromatic ring and the crown ring changed slightly, from 92.48° in the LTP form to 93.47° in the HTP form.
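As an aside, the peak positions and the hysteresis width quoted above are straightforward to extract programmatically. The following sketch is ours, with synthetic Gaussian stand-ins for the heating and cooling traces (a real analysis would load the instrument data instead):

```python
import numpy as np
from scipy.signal import find_peaks

T = np.linspace(230.0, 290.0, 1201)             # temperature grid, K

def gaussian_peak(T, T0, width=1.0, height=1.0):
    return height * np.exp(-0.5 * ((T - T0) / width) ** 2)

# synthetic stand-ins for the DSC traces of crystal 1
heating = gaussian_peak(T, 257.0, height=0.8)   # endothermic anomaly
cooling = gaussian_peak(T, 252.0, height=0.7)   # exothermic anomaly

T_endo = T[find_peaks(heating, height=0.1)[0][0]]
T_exo = T[find_peaks(cooling, height=0.1)[0][0]]
print(f"endothermic peak: {T_endo:.1f} K, exothermic peak: {T_exo:.1f} K")
print(f"thermal hysteresis: {T_endo - T_exo:.1f} K")   # ~5 K, cf. Figure 5a
```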
Compared with the DSC measurements for crystal 1, those for crystal 2 were conducted within the temperature range 240-320 K and revealed two anomalies, at 266 K (heating) and 261 K (cooling) (Figure 5b). The thermal hysteresis was small, about 5 K, and the entropy change (∆S) of the phase transition was again too low to be estimated from the DSC. The small hysteresis and weak heat anomalies reflect the nearly continuous character of the transition and thus point to features of a second-order phase transition. The distances between neighboring [(4-HNA)(18-crown-6)]+ supramolecular cations were 8.877 Å (100 K) and 9.082 Å (296 K) for crystal 2. Thus, crystals 1 and 2 both revealed reversible phase transitions despite containing different anions. This finding implies that the phase transitions may have originated from the [(4-HNA)(18-crown-6)]+ supramolecular cations, as observed in similar compounds.

Dielectric Property

The variable-temperature dielectric response is another common method for studying phase transitions, especially in relatively high frequency ranges. The temperature-dependent dielectric constants of powder samples of crystals 1 and 2 were measured at four selected frequencies, namely 5 kHz, 10 kHz, 100 kHz, and 1 MHz. Strong and significant dielectric anomalies were observed around Tc. For crystal 1, the dielectric constant increased slowly with rising temperature below 250 K (Figure 6a). Interestingly, when the temperature neared 255 K, the dielectric constant sharply increased to a maximum value of 22.5 at 5 kHz; it then abruptly decreased to a minimum value of 11.91 at about 265 K. This sharp, peak-like dielectric anomaly further confirms the phase transition in 1 and is consistent with the DSC result.
For crystal 2, the dielectric constant increased slowly and then showed an abrupt change of slope at around 265 K during heating (Figure 6b), reaching a maximum value of about 8.14 at 5 kHz, which corresponds to a high dielectric state; the dielectric constant of 2 then rose rapidly again near room temperature. Compound 2 showed only small DSC and dielectric anomalies at about 265 K, a pattern characteristic of a second-order phase transition. The dielectric anomaly of crystal 2 is relatively weak compared with that of 1 over the measured temperature range, probably because the electric polarizations of the ions and molecules in the crystal lattice remain largely unchanged.

The temperature-dependent dielectric responses of compounds 1 and 2 each display one sharp, peak-like anomaly upon heating, at 255 K and 265 K, respectively, whereas no distinct dielectric anomaly is seen for either compound upon cooling. Consistent with this, the DSC curves of the two crystals exhibit only relatively weak exothermic and endothermic peaks on both heating and cooling. The phase transitions in compounds 1 and 2 thus lead to dielectric anomalies as the temperature increases; because each crystal undergoes its temperature-driven structural phase transition only once, no distinct dielectric response appears as the temperature decreases.
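The anomaly temperatures quoted above are read off as the peak positions of the ε(T) sweeps at each frequency. A minimal sketch of that step follows; the file name and column order are assumptions about how the LCR-meter output was exported:

```python
import numpy as np

# Columns assumed: T [K], eps at 5 kHz, 10 kHz, 100 kHz, 1 MHz
data = np.loadtxt("crystal1_eps_vs_T.txt", skiprows=1)
T = data[:, 0]
for col, label in enumerate(("5 kHz", "10 kHz", "100 kHz", "1 MHz"), start=1):
    eps = data[:, col]
    i = np.argmax(eps)  # peak-like anomaly -> transition temperature
    print(f"{label}: eps_max = {eps[i]:.2f} at T = {T[i]:.1f} K")
```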
The variable-temperature crystal structures of 1 and 2 reveal that the phase transitions and dielectric anomalies in the same temperature range may be caused by the structural interactions and the different crystal packing patterns in the LTP and HTP forms. Crystals 1 and 2 differed significantly at 296 K and 100 K: the distances between the two adjacent crown rings in 2 were 8.877 Å (100 K) and 9.082 Å (296 K), larger than the corresponding values of 4.063 Å (100 K) and 4.208 Å (296 K) in 1. This suggests that the inorganic anion tunes the crystal packing and hence affects the phase-transition points and types.

Materials and Instrument

The chemicals and solvents employed in this work were commercially obtained as chemically pure and used without further purification. Infrared (IR) spectra were obtained using an Affinity-1 spectrophotometer and KBr pellets in the 4000-400 cm−1 region. Thermogravimetric analyses (TGA) were performed with a SHIMADZU DTG-60 thermal analyzer in a nitrogen atmosphere from room temperature to 800 K at a heating rate of 10 K/min using aluminum crucibles (Figures S2 and S3). Elemental analyses were conducted using a Vario EL Elementar Analysensysteme GmbH at the TRW Research Collaboration Center, YanZhou, Shandong. DSC measurements were performed by heating and cooling the samples (16 mg) within the temperature range 210-280 K on a TA Q2000 DSC instrument under nitrogen. The crystal dielectric constants were measured with a TH2828 Precision LCR meter within the frequency range 500 Hz-1 MHz, at an applied voltage of 1.0 V and a temperature sweep rate of approximately 2 K/min.

X-ray single-crystal diffraction: X-ray diffraction experiments were conducted on crystals 1 and 2 using a Bruker CCD diffractometer with Mo Kα radiation (λ = 0.71073 Å) at 100 K and 296 K. The structures of 1 and 2 were solved by direct methods and refined by the full-matrix least-squares method based on F2 using the SHELXL-97 software package. All non-hydrogen atoms were refined anisotropically, and the positions of all hydrogen atoms were generated geometrically. CCDC 1529205 (100 K) and CCDC 1529207 (296 K) for 1, and CCDC 1529231 (100 K) and CCDC 1529232 (296 K) for 2, contain the supplementary crystallographic data for this paper. These data can be obtained free of charge from The Cambridge Crystallographic Data Centre via www.ccdc.cam.ac.uk/data_request/cif.
Preparation of Compound (4-HNA)(18-crown-6)(HSO4) (1)

Yellow crystals of 1 were obtained by evaporating an alcohol solution containing 4-nitroaniline (20 mg), H2SO4 (50 mg), and 18-crown-6 (200 mg) at room temperature for 5 days, in 52% yield (based on 4-nitroaniline). The percentage compositions of C, H, and N calculated for C18H32N2O12S were 43.19% C, 6.44% H, and 5.60% N, whereas the measured compositions were 43.11% C, 6.42% H, and 5.76% N. The IR spectrum of single-crystal 1 is given in Figure S4.

Single crystals of 2 were prepared by slowly evaporating a mixture of HPF6 (50 mg), 4-nitroaniline (20 mg), and 18-crown-6 (200 mg) in methanol solution (50 mL). The methanol solution was allowed to stand for approximately 5 days at room temperature. The single crystals of salt 2 were colorless transparent crystals obtained in 58% yield. The percentage compositions of C, H, and N calculated for C37H66F12N4O17P2 were 39.37% C, 5.89% H, and 4.96% N, whereas the measured compositions were 39.35% C, 5.87% H, and 4.89% N. The IR spectrum (KBr) of single-crystal 2 is presented in Figure S5.

Conclusions

Two inorganic-organic hybrid crystals based on the [(4-HNA)(18-crown-6)]+ supramolecular cation were reported in this study. Variable-temperature crystal structure analyses and thermal measurements (DSC) showed that crystals 1 and 2 exhibited similar crystal packings and underwent reversible phase transitions at 255 K and 265 K, respectively. The dielectric anomalies of crystals 1 and 2 in the measured temperature and frequency ranges further confirmed the existence of these phase transitions. These results indicate that the phase transitions may be caused by the [(4-HNA)(18-crown-6)]+ supramolecular cations, while the inorganic anions (PF6− and HSO4−) play an important role in the crystal packing and regulate the phase-transition points and types.

Figure 1. Crystal structure of crystal 1 at 100 K. (a) Asymmetric unit of crystal 1; dashed lines indicate hydrogen bonds. (b) Unit cell of crystal 1 viewed along the a + c axis. Most hydrogen atoms on carbon atoms are omitted for clarity.

Figure 2. Arrangement of the supramolecular cations in 1 at 100 K (a) and 296 K (b) viewed in the a + c plane; the dashed lines show the C-H···π interactions.

Figure 3. Asymmetric unit (a) and packing diagram (b) of crystal 2 viewed along the a-axis at 100 K and 296 K. Most of the hydrogen atoms on the carbon atoms are omitted for clarity.
Figure 4. Arrangement of the supramolecular cations in 2 at 100 K (a) and 296 K (b) viewed along the a-axis; the dashed lines show the distance between the two adjacent crown ether rings.

Figure 5. DSC curves of crystal 1 (a) and crystal 2 (b) in a heating-cooling cycle.

Figure 6. Dielectric properties of crystals 1 (a) and 2 (b) measured as a function of temperature at frequencies from 5 kHz to 1 MHz.

Table 1. Crystallographic data and structural refinement details for 1 and 2.
2017-07-30T13:44:26.231Z
2017-07-15T00:00:00.000
{ "year": 2017, "sha1": "465efe82163e0dc2ad620fb2a216d208d07a308f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4352/7/7/224/pdf?version=1500622578", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "465efe82163e0dc2ad620fb2a216d208d07a308f", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Chemistry" ] }
14552447
pes2o/s2orc
v3-fos-license
Why and How the Old Neuroleptic Thioridazine Cures the XDR-TB Patient

This mini-review provides the entire experimental history of the development of the old neuroleptic thioridazine (TZ) for therapy of antibiotic-resistant pulmonary tuberculosis infections. TZ is effective when used in combination with antibiotics to which the initial Mycobacterium tuberculosis was resistant. Under proper cardiac evaluation procedures, the use of TZ is safe and does not produce known cardiopathy such as prolongation of the QT interval. Because TZ is cheap, it should be considered for therapy of XDR and TDR-Mtb patients in economically disadvantaged countries.

Introduction

Tuberculosis is mainly an intracellular infection of the macrophage that is part of the alveolar unit of the lung. The infection is caused by Mycobacterium tuberculosis (Mtb), a steadfast human pathogen. According to the World Health Organisation (WHO), two billion of the world's inhabitants are infected by Mtb [1]. However, active TB, which is the progression of the intracellular infection to an extracellular status (the organism breaks out of its intracellular prison) and is the infectious phase of the disease, takes place in roughly 5 to 10% of those infected globally. Unfortunately, the vast majority of active TB cases occur in India [2], China [3], and other low-income areas of the globe; with some exceptions (for example Portugal [4] and Latvia [5]), active disease is far less common in Western Europe and the USA [6]. The frequency of progression of a TB infection to active TB status is markedly increased by co-infection with HIV or presentation of AIDS [7], and, in the absence of major pathology, advanced age accompanied by a decrease of immunocompetence accounts for about 5% of all active TB. The mortality that TB exerts world-wide was in excess of 2 million for 2012 and is expected to rise [1]. Nevertheless, given that epidemics of influenza kill as many as 10 million in a given year [8], TB is not a major killer, and it need not be a killer at all if infections caused by antibiotic-susceptible strains of Mtb are treated effectively with the two most effective antibiotics, isoniazid (INH) and rifampicin (Rif) [9]. However, as elegantly pointed out by Zarir Udwadia [10], the development of multi-drug resistance (MDR; resistance to INH and Rif) and its progression to more antibiotic-resistant infections such as extensively drug-resistant TB (XDR-TB) and now totally drug-resistant TB (TDR-TB) is mainly due to incompetent therapy. Not all drug-resistant infections arise from incompetent therapy, however: the very long period of therapy, often exceeding 15 continuous months, allows the normal rates of spontaneous mutation of the targets of INH and Rif to produce an accumulation of mutations [11] that renders the infection immune to INH and Rif, and to other antibiotics as well, as is the case for XDR-TB (resistance to INH, Rif, any fluoroquinolone, and the injectable TB drugs streptomycin, kanamycin and amikacin) [12]. It must also be stated that not all drug resistance is due to mutations: some is due to the over-expression of efflux pumps that extrude drugs before they reach their intended targets [13], or to the down-regulation of porins, which limits the amount of drug that penetrates the cell envelope of the Mtb strain [14].
Therapy of antibiotic-susceptible TB infections is effective when administered correctly, especially when it is associated with Directly Observed Treatment (DOTS) programmes [15]. However, therapy of the MDR-TB patient is problematic and exacts high mortality, especially when the patient is co-infected with HIV or presents with AIDS [7]. Therapy of XDR-TB is very problematic, with major mortality rates [16], and therapy of TDR-TB, as defined by its name, almost always results in death [17]. At the present time, with the exception of adjunct use of the old neuroleptic thioridazine [18,19], there are no safe and effective drugs for therapy of the XDR and TDR-TB patient. It is the purpose of this mini-review to present the whole story of how thioridazine (TZ) was developed for the therapy of MDR-TB infections and why it is an effective drug for adjunct use with antibiotics to which the offending organism was initially resistant. Because of the dual mechanisms of action described in this mini-review, any mutational response by the offending organism is irrelevant, in contrast to the case of drugs that act directly on the survival of the organism itself.

Phenothiazines

Phenothiazines are heterocyclic compounds whose structure is best exemplified by the dye methylene blue (MB, Figure 1). MB was studied intensively by the German physician-chemist Paul Ehrlich in the 1890s and shown to have anti-malarial and antibacterial properties [20]. However, after the demonstration by Bodoni that the dye, when given to humans or other mammals, would calm them [21], interest in the dye as a potential lead compound for the synthesis of a true neuroleptic took precedence over its antimicrobial properties. Nevertheless, it took half a century before the synthesis of the first commercially available colourless neuroleptic, chlorpromazine (CPZ, Figure 2), by the French chemist Charpentier [22], and soon after its release as Largactil by Rhone Poulenc Inc. in the middle 1950s it was used world-wide for the control of psychosis. As a consequence of this extensive use, the activity of CPZ against a wide gamut of microorganisms was studied [23]. Moreover, because of the plethora of negative side effects it produced, the study of the biological effects of CPZ has generated almost 20,000 published studies listed by PubMed, second only to aspirin (over 50,000). Among the important biological properties of CPZ are its in vitro and in vivo inhibition of intracellular pathogens such as leishmania [24,25], trypanosomes [26,27] and amoebae [28][29][30][31]; in vitro inhibition of cancer cells [32,33]; induction of apoptosis of cancer cells [34,35]; inhibition of the efflux pumps of multi-drug resistant bacteria [36,37]; elimination of plasmids that carry antibiotic-resistance genes from important pathogenic bacteria [38][39][40]; inhibition of the enzymes studied [41]; and many other cellular activities too numerous to mention. Nevertheless, as recently reviewed [23], the activities noted for CPZ lie in the side chains of the molecule [23] and have guided the evolution of this phenothiazine as an antimalarial agent [42]. The activities of CPZ relevant to the theme of this mini-review lie in its anti-microbial properties; these are discussed and traced through the development of thioridazine as an anti-tubercular drug in the sections that follow.
Anti-Tubercular Activity of CPZ and Other Phenothiazines

Within a year after the introduction of CPZ for therapy of psychosis, CPZ [43] and other phenothiazine derivatives [44] were observed to improve therapy of pulmonary tuberculosis. These reports were followed by many demonstrations that CPZ could indeed be used as an adjunct for rapidly curing the TB patient [45][46][47][48]. However, these studies took place at the time that INH and rifampicin had been introduced as effective therapeutic agents for the management of tuberculosis. Moreover, the side effects of CPZ were numerous and significant [49]; hence, why use it if other, less noxious compounds were available? Nevertheless, interest in CPZ as an anti-tubercular agent remained, as evident from in vitro studies conducted during the next three decades [50]. However, the concentrations that produced in vitro inhibition of the replication of Mtb were extremely high (15 to 25 mg/L) [51][52][53] and well beyond the maximum safe serum level (0.5 mg/L) achieved with chronic therapy of the psychotic patient. The demonstration that a concentration of CPZ in the medium that was within clinical reach could effectively promote the killing of intracellular Mtb by otherwise non-killing human macrophages [54] sparked interest in the potential that CPZ offered for therapy of tuberculosis, especially multi-drug resistant infections. Moreover, it explained why therapeutic doses of CPZ could effectively cure tuberculosis infections, as noted during the first decade of CPZ use world-wide.

During the 1980s New York City experienced a quadrupling of new cases of active TB infection, with more than half of these presenting with an MDR phenotype [55]. The need for effective anti-tubercular drugs was urgent, but none were in the pipeline due to the limited interest of pharmaceutical companies. Moreover, the problems that CPZ posed were still insurmountable. However, because thioridazine (TZ, Figure 3) is an effective neuroleptic with fewer significant negative side effects, the in vitro activity of TZ against a panel of Mtb strains that were resistant to as many as five antibiotics was examined, and this phenothiazine was shown to be as effective as CPZ in inhibiting the replication of Mtb [56]. Nevertheless, the concentration of TZ needed to inhibit replication was of the order of 30 mg/L and therefore clinically irrelevant. The demonstration that a concentration of TZ in the medium (0.1 mg/L) lower than that used for chronic therapy of the psychotic patient caused the killing of intracellular Mtb by non-killing macrophages [57] was soon followed by studies showing that TZ could cure mice infected with antibiotic-susceptible Mtb [58] as well as with an MDR Mtb strain [59]. The latter study also showed, as had previously been demonstrated in vitro [60], that TZ could enhance the in vivo effectiveness of INH and Rif when used in therapy of the MDR-Mtb-infected mouse [59]. Proof that TZ cures extensively resistant TB infections was recently presented [61]. This retrospective study was performed on 17 non-AIDS adult patients with pulmonary XDR-TB admitted to a referral treatment centre for infectious diseases in Buenos Aires, Argentina, from 2002 through 2008. A combination of linezolid, moxifloxacin and thioridazine was applied in the treatment of 12 patients.
Thioridazine was initially administered at a daily dose of 25 mg for two weeks; thereafter the dose was increased by 25 mg weekly until it reached 200 mg/day, under strict cardiac monitoring in order to survey for eventual cardiac adverse events. Eleven patients met the recovery criteria with more than two years of follow-up after treatment completion. Thioridazine was discontinued in one patient with pancytopenia and in another with allergic dermatitis. Although cardiac adverse effects have been reported previously, no prolongation of the QT interval or any other heart complication was observed. Another recent study showed that employing a similar dose schedule, but limiting the final dose to 75 mg/day, improved the quality of life of XDR-TB patients, i.e., patients regained their appetite, put on weight, night sweats were reduced or obviated, and the anxiety produced by the infection was relieved [62]; these authors have therefore recommended that TZ be considered as a salvage drug for therapy of antibiotic-non-responsive XDR-TB patients whose prognosis is serious [62]. This study, as was the case with the Abbate et al. study [61], also showed that none of the patients presented with any evidence of prolonged QTc intervals or any other cardiopathy [62]. Clearly, TZ merits serious consideration as an adjunct for therapy of pulmonary TB infections that do not respond to available drugs. Thioridazine is a racemic compound with two enantiomers.

The Role of TZ as an Inhibitor of Mtb Efflux Pumps

Multi-drug resistance of Mtb can be due to accumulated mutations of antibiotic targets, to down-regulation of porins [14], and to over-expression of efflux pumps that excrete two or more unrelated classes of antibiotics [13]. Although the degree to which efflux pumps contribute to the multi-drug phenotype of MDR, XDR and TDR Mtb strains is not known, if we can extrapolate from other studies, it is certain that a major fraction of multi-drug resistance is due to the over-expression of efflux pumps [13,14,63,64]. Among the efflux pump genes that respond to INH exposure by over-expressing their total mRNA are mmpL7, p55, efpA, mmr, Rv1258c and Rv2459 [13]. The activity of each of these genes is reduced by exposure to TZ [13]. TZ also inhibits the products of each of these genes, and these inhibitions render the MDR Mtb strain susceptible to antibiotics to which it was initially resistant [13,14,63,64]. Moreover, TZ has been shown to inhibit the expression of many essential genes of Mtb [65,66] and can also kill dormant Mtb [67,68]. Although not yet proven, it is quite certain that, since TZ is concentrated by lysosomes [69][70][71], the in vivo activity of TZ that leads, at least in part, to the cure of the Mtb-infected mouse [58,59] is due to the concentrated effect of the compound within the phagolysosome that has entrapped the bacterium, as supported by prior studies [71]. Moreover, thioridazine has also been shown to enhance the killing of intracellular antibiotic-susceptible and MDR-Mtb [57] and XDR-Mtb [72][73][74][75][76][77] by non-killing human macrophages.

The Mechanism by Which TZ Enhances the Killing of Intracellular Mtb

The pulmonary macrophage, unlike other phagocytes such as the neutrophil, has little killing activity of its own. Consequently, the Mtb entrapped within the phagolysosomal vacuole can remain viable for many decades.
The mechanism by which TZ enhances the killing of intracellular Mtb has been postulated to be TZ's inhibition of the transport of potassium ions from the phagolysosomal vacuole [72][73][74][75][76]. The retention of potassium ions promotes the acidification of the phagolysosomal vacuole, which in turn activates the inert hydrolases, resulting in the degradation of the entrapped Mtb organism [72][73][74][75][76]. This enhanced killing by non-killing macrophages amounts to a totally new concept for the therapy of pulmonary TB, as well as of the other intracellular infections mentioned in this review: rather than targeting the intracellular organism, it targets the inert hydrolytic system of the macrophage. Consequently, this effect by-passes any mutational response of the entrapped Mtb, such as would occur with other drugs that target the organism itself.

Conclusions

TZ has been shown and confirmed to have in vitro, ex vivo and in vivo activity against all encountered strains of Mtb. TZ has been shown to cure the XDR-TB patient when used in combination with antibiotics to which the strain was initially resistant, and to vastly improve the quality of life of the XDR-TB patient. The application of TZ for therapy of the XDR-TB patient, when proper evaluation of cardiac function is undertaken, produces no cardiopathy. TZ is cheap and certainly affordable by low-income countries. Consequently, TZ is recommended for therapy of antibiotic-unresponsive XDR-TB patients. Moreover, because of its dual mechanism of action, TZ is expected to produce similar cures in the TDR-TB patient; for countries such as India, it must be considered now. Nevertheless, because the concentration of thioridazine needed to inhibit extracellular Mycobacterium tuberculosis exceeds the safe limits of its clinical use, it cannot be used to treat a tuberculosis infection that is extracellular or in the process of dissemination to other sites of the body.
2016-05-12T22:15:10.714Z
2012-09-01T00:00:00.000
{ "year": 2012, "sha1": "19fddd5211aff008d30dba10ed9527f4fb2ba6c7", "oa_license": "CCBY", "oa_url": "http://www.mdpi.com/1424-8247/5/9/1021/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "19fddd5211aff008d30dba10ed9527f4fb2ba6c7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15774783
pes2o/s2orc
v3-fos-license
Mean-Field Analysis of Antiferromagnetic Three-State Potts Model with Next Nearest Neighbor Interaction

The three-state Potts model with antiferromagnetic nearest-neighbor (n.n.) and ferromagnetic next-nearest-neighbor (n.n.n.) interaction is investigated within a mean-field theory. We find that the phase diagram contains two kinds of ordered phases, the so-called BSS phase and PSS phase, separated by a discontinuous phase transition line. The order-disorder transition is continuous for weak n.n.n. interaction and becomes discontinuous when the n.n.n. interaction is increased. We show that the multicritical point where the order-disorder transition becomes discontinuous is indeed a tricritical point.

I. INTRODUCTION

The antiferromagnetic three-state Potts model has interesting properties. It is described by the following Hamiltonian:

H = J \sum_{\langle ij \rangle} \delta_{s_i s_j} ,   (1)

where \langle ij \rangle indicates summation over nearest-neighbor pairs, and s_i = 1, 2, 3 denotes the three-state Potts spin on the i'th site. In the antiferromagnetic case (J > 0), neighboring spins prefer to take different values. A typical ground-state configuration of the model on the square lattice is depicted in Fig.(1). One can change the state of certain spins in Fig.(1) from "2" to "3", or vice versa, without any energy cost. One half of all the spins are such "semi-free" spins, and the ground state is therefore infinitely degenerate. It should be noted that if we divide the lattice in Fig.(1) into two interpenetrating sublattices, there are only "1" spins on one sublattice, while a random mixture of "2" spins and "3" spins is present on the other sublattice. For the model on the simple cubic lattice, it is known that long-range order is realized at finite temperature [1,2] in spite of the high ground-state degeneracy, which usually leads to a disordered state at all temperatures [3]. Recently this model (on the simple cubic lattice) has been studied intensively with respect to two questions: the nature of the order-disorder transition, and the nature of the ordered phase. As for the nature of the ordered phase, it is known that the so-called broken-sublattice-symmetry (BSS) phase is realized at sufficiently low temperature [4,10], in which one of the sublattices is dominated by one spin state, while the second sublattice is dominated by a mixture of the remaining two spin states. Several different claims have recently been made about the nature of the ordered phase just below the transition temperature. Lapinskas and Rosengren concluded that the so-called permutationally-symmetric-sublattice (PSS) phase, in which each of the two sublattices is dominated by one spin state, is realized in a very narrow temperature range just below the transition point, based on cluster-variation-method analysis [11,12] and Monte Carlo simulation [13]. On the other hand, Kolesik and Suzuki have performed Monte Carlo simulations and observed a rotationally symmetric phase in a certain range near the transition point, finding no evidence of the PSS phase [14].

Now we consider the effect of a next-nearest-neighbor (n.n.n.) ferromagnetic interaction. Let us consider the following Hamiltonian:

H = J \sum_{\langle ij \rangle} \delta_{s_i s_j} - \gamma J \sum_{\langle\langle ij \rangle\rangle} \delta_{s_i s_j} ,   (2)

where the first and second summations run over all nearest-neighbor and next-nearest-neighbor pairs on the simple cubic lattice, respectively. We assume that J > 0 and γ ≥ 0. The lattice consists of two sublattices, which we refer to as A and B.
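To make the model concrete, the energy of a configuration under Hamiltonian (2) can be evaluated directly. The Python sketch below takes the n.n.n. pairs to be the twelve face-diagonal neighbors of the simple cubic lattice (an assumption consistent with z_2 = 12 quoted later), and also illustrates that flipping a "semi-free" spin between states 2 and 3 costs no energy at γ = 0:

```python
import numpy as np

def potts_energy(s, J=1.0, gamma=0.0):
    """H = J sum_nn delta(s_i,s_j) - gamma*J sum_nnn delta(s_i,s_j) on an
    L^3 periodic simple cubic lattice; s is an integer array in {1, 2, 3}."""
    e = 0.0
    for ax in range(3):                      # 3 positive shifts = 6 n.n. per site
        e += J * np.count_nonzero(s == np.roll(s, 1, axis=ax))
    for ax1 in range(3):                     # 6 face-diagonal shifts = 12 n.n.n.
        for ax2 in range(ax1 + 1, 3):
            for sign in (1, -1):
                nb = np.roll(np.roll(s, 1, axis=ax1), sign, axis=ax2)
                e -= gamma * J * np.count_nonzero(s == nb)
    return e

# A ground-state-like configuration: "1" on sublattice A, random 2/3 mixture on B.
L = 4
x, y, z = np.indices((L, L, L))
rng = np.random.default_rng(1)
s = np.where((x + y + z) % 2 == 0, 1, rng.integers(2, 4, size=(L, L, L)))
e0 = potts_energy(s)                         # gamma = 0
s2 = s.copy()
s2[0, 1, 0] = 5 - s2[0, 1, 0]                # flip one B-site spin: 2 <-> 3
print(e0, potts_energy(s2))                  # equal: the flip costs no energy
```

Note that a face-diagonal step changes the parity of x + y + z by two, so the n.n.n. coupling indeed acts within each sublattice, as the text states.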
The first term in (2) represents the inter-sublattice antiferromagnetic interaction and the second represents the intra-sublattice ferromagnetic interaction, which resolves the high degeneracy of the ground state. Thus the n.n.n. interaction affects the nature of the ordered phase, because the BSS phase costs energy proportional to γJ compared to the PSS phase. This effect for models on the square lattice has been studied by several methods [15][16][17]. In three dimensions, we expect another effect produced by a strong n.n.n. interaction: one can consider the γ → ∞ limit as two independent systems of the ferromagnetic q = 3 Potts model, which undergoes a first-order phase transition in three dimensions [18]. Thus the order-disorder transition is expected to become discontinuous as γ becomes large. Mean-field theory gives qualitatively correct answers for both the γ = 0 (AF) and γ → ∞ (F) cases in three or more dimensions: the BSS phase is realized below a continuous phase transition point in the AF case, and a discontinuous phase transition occurs in the F case. We investigate the intermediate region using mean-field theory in the following sections.

II. MEAN-FIELD ANALYSIS

To perform a mean-field calculation, we use the method of the variational free energy, which is equivalent to the equations of self-consistency [19]. Let us consider a mean-field Hamiltonian

H_0 = -\sum_i h_{i s_i} ,

where h_{is} is a mean field acting on the i'th spin. The Boltzmann probability factor for the system described by H_0 is denoted by P_0({s_i}), which is a product of probability factors of individual spins:

P_0(\{s_i\}) = \prod_i p_i(s_i) .

We minimize the following variational free energy (divided by the total number of spins N) with respect to h_{is}:

F_0 = \frac{1}{N} \left( \langle H \rangle_0 - T S_0 \right) ,   (6)

where \langle \cdots \rangle_0 and S_0 denote respectively the expectation value and the entropy associated with the probability distribution P_0({s_i}). The minimization problem with respect to h_{is} is equivalent to that with respect to p_i(s) under the constraints

p_i(s) \ge 0 , \qquad \sum_s p_i(s) = 1 .

Translational symmetry assures that the free energy (6) is minimized when

p_i(s) = C_{sα}  for every site i on sublattice α.

Obviously, C_{sα} coincides with the expectation value of the concentration of the s'th spin state on sublattice α, calculated with the probability distribution P_0({s_i}). These concentrations obey the constraints

C_{sα} \ge 0 , \qquad \sum_s C_{sα} = 1 .

Then \langle H \rangle_0 and S_0 can be expressed in terms of the C_{sα}, where z_1 and z_2 denote the coordination numbers of n.n. and n.n.n. sites, respectively (for the simple cubic lattice, z_1 = 6 and z_2 = 12). Furthermore, we define a two-component sublattice magnetization (x_α, y_α), similar to that used by Ono [20]. The three quantities C_{1α}, C_{2α}, and C_{3α} can then be expressed by the two quantities x_α and y_α owing to the constraint C_{1α} + C_{2α} + C_{3α} = 1. The two-component sublattice magnetization (x_α, y_α) carries an irreducible, unitary representation of the permutation group of the three spin states {1, 2, 3}. Owing to the constraints C_{sα} ≥ 0, the sublattice magnetization (x_α, y_α) is restricted to take values within a regular triangle in the x_α-y_α plane (Fig. 2). The three vertices of the triangle correspond to completely ordered states (C_{1α} = 1, C_{2α} = 0, C_{3α} = 0, etc.), and the point (0, 0) corresponds to the completely disordered state.

We have minimized the free energy F_0 with respect to x_A, y_A, x_B, and y_B numerically, using the gradient iteration

X_α^{(n+1)} = X_α^{(n)} - ∆ \, \frac{\partial F_0}{\partial x_α} , \qquad Y_α^{(n+1)} = Y_α^{(n)} - ∆ \, \frac{\partial F_0}{\partial y_α} ,   (20)

where ∆ is a small, positive quantity. The iteration (20) is repeated until F_0 converges to some minimum. Several initial values X_α^{(0)} and Y_α^{(0)} were used in order to find the absolute minimum of F_0 among all local minima.
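A minimal numerical sketch of this minimization in Python is given below. It assumes the standard per-site mean-field expressions, energy (z_1 J/2) Σ_s C_{sA} C_{sB} − (z_2 γJ/4) Σ_{s,α} C_{sα}² and entropy −(1/2) Σ_{s,α} C_{sα} ln C_{sα}; with these forms, the quadratic coefficient of the antiferromagnetic mode about the disordered state vanishes at t = T/J = 2 + 4γ, reproducing the critical line quoted below, which supports the assumption. The constraints on C_{sα} are enforced by parametrizing them through fields (a softmax), and the downhill iteration mirrors (20), with several random starts:

```python
import numpy as np

Z1, Z2, J = 6, 12, 1.0   # simple cubic lattice coordination numbers

def C_of(h):
    """Fields h (2 x 3) -> sublattice concentrations C (2 x 3); rows sum to 1."""
    e = np.exp(h - h.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def F0(h, T, gamma):
    """Per-site variational free energy, Eq. (6), under the assumed forms."""
    C = C_of(h)
    e = 0.5 * Z1 * J * np.sum(C[0] * C[1]) - 0.25 * Z2 * gamma * J * np.sum(C**2)
    s = -0.5 * np.sum(C * np.log(np.clip(C, 1e-300, None)))
    return e - T * s

def minimize(T, gamma, h0, delta=0.05, steps=4000, eps=1e-6):
    """Gradient iteration in the spirit of Eq. (20), via finite differences."""
    h = h0.copy()
    for _ in range(steps):
        g = np.zeros_like(h)
        for idx in np.ndindex(*h.shape):
            hp, hm = h.copy(), h.copy()
            hp[idx] += eps
            hm[idx] -= eps
            g[idx] = (F0(hp, T, gamma) - F0(hm, T, gamma)) / (2 * eps)
        h -= delta * g
    return h

rng = np.random.default_rng(0)
for gamma in (0.0, 1.0, 2.0):
    T = 1.5  # below the mean-field critical line t = 2 + 4*gamma
    runs = [minimize(T, gamma, rng.normal(size=(2, 3))) for _ in range(5)]
    best = min(runs, key=lambda h: F0(h, T, gamma))   # absolute minimum
    print(f"gamma={gamma}:  F0 = {F0(best, T, gamma):+.4f}")
    print(np.round(C_of(best), 3))                    # rows: C_sA, C_sB
```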
Finally, we obtained a phase diagram in the two parameters T/J and γ (Fig.(3)). The order-disorder transition is continuous for γ ≤ 3/2 and discontinuous for γ > 3/2. There are two kinds of ordered phases, the BSS phase and the PSS phase, separated by a discontinuous transition line. Banavar and Wu have studied the same model as (2) with q = 3, 4 using mean-field theory [21] and concluded that the PSS phase does not appear. Our result disagrees with theirs.

III. EFFECTIVE FREE ENERGY FORM

Since the numerical method (20) becomes less precise near the critical line, we expand the mean-field free energy F_0 in powers of the order parameter to investigate the critical behavior. First we define ferromagnetic and antiferromagnetic order parameters from the sums and differences of the sublattice magnetizations:

x_F = x_A + x_B , \quad y_F = y_A + y_B , \qquad x_{AF} = x_A - x_B , \quad y_{AF} = y_A - y_B .

The quantities relevant to the order-disorder transition are x_AF and y_AF, which carry an irreducible representation of the group P_3 (permutations of the three spin states) × P_2 (permutation of the two sublattices), isomorphic to the group C_6v (the point group of a regular hexagon). Indeed, the allowed range of the antiferromagnetic order parameter (x_AF, y_AF) is a regular hexagon in the x_AF-y_AF plane. We then introduce polar coordinates for the ferromagnetic and antiferromagnetic order parameters:

(x_F, y_F) = R_F (\cos θ_F, \sin θ_F) , \qquad (x_{AF}, y_{AF}) = R_{AF} (\cos θ_{AF}, \sin θ_{AF}) .

The PSS phase and the BSS phase can be distinguished by the direction θ_AF of the antiferromagnetic order parameter: the values of θ_AF expected in the PSS and BSS phases are kπ/3 and (k + 1/2)π/3 (k = 0, 1, 2, 3, 4, 5), respectively. Thus the quantity cos 6θ_AF is relevant to the PSS-BSS phase transition [14]: cos 6θ_AF = −1 in the BSS phase, while cos 6θ_AF = 1 in the PSS phase.

Now we use R_AF, θ_AF, R_F, and θ_F as independent variables of the free energy F_0 and trace out R_F and θ_F in order to obtain an effective free energy expressed by R_AF and θ_AF only:

F_{AF}(R_{AF}, θ_{AF}) = F_0(R_{AF}, θ_{AF}, \tilde{R}_F, \tilde{θ}_F) ,   (23)

where \tilde{R}_F and \tilde{θ}_F give the minimum value of F_0 for fixed R_AF and θ_AF. The location of the minima \tilde{R}_F and \tilde{θ}_F is determined by solving the equations

\partial F_0 / \partial R_F = 0 ,   (24) \qquad \partial F_0 / \partial θ_F = 0 .   (25)

Since we cannot solve (24) and (25) explicitly, we expand \tilde{R}_F and \tilde{θ}_F in powers of R_AF, which is small around the critical line. First we expand (24) and (25) in powers of both R_F and R_AF; the lowest-order terms, Eqs. (26) and (27), indicate that \tilde{R}_F ∼ O(R_{AF}^2) and cos(2θ_AF + \tilde{θ}_F) ∼ O(R_{AF}^2), so we assume that \tilde{R}_F and \tilde{θ}_F can be expanded in powers of R_AF as in Eqs. (28) and (29). The coefficients are determined by substituting (28) and (29) into equations (24) and (25). Finally, the effective free energy form is obtained by substituting (28) and (29) into (23). Note that only terms invariant under the transformations of C_6v are present in F_AF, and they can be written as R_{AF}^{6n+2m} \cos(6nθ_{AF}), (n, m ≥ 0). We have calculated the effective free energy F_AF up to the sixth-order term:

F_{AF} = A_2 R_{AF}^2 + A_4 R_{AF}^4 + (A_6 + B_6 \cos 6θ_{AF}) R_{AF}^6 + \cdots ,

where the coefficients A_2, A_4, A_6, and B_6 are functions of the reduced temperature t = T/J and γ. The critical line is the region where A_2 = 0 and A_4 ≥ 0, which corresponds to t = 2 + 4γ, γ ≤ 3/2. The order-disorder transition is discontinuous for γ > 3/2, and (γ, t) = (3/2, 8) is a tricritical point where A_2 and A_4 vanish simultaneously [22]. The six-fold anisotropy term B_6 R_{AF}^6 \cos 6θ_{AF} is the origin of the BSS and PSS phases: positive B_6 corresponds to the BSS phase and negative B_6 to the PSS phase. The presence of higher-order anisotropic terms such as R_{AF}^{12} \cos 12θ_{AF} would allow minima of F_AF to occur at θ_AF ≠ kπ/6, corresponding to neither the PSS nor the BSS phase. However, the positions of the minima remain at θ_AF = kπ/6 if the coefficients of such higher-order anisotropic terms are sufficiently small.
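The tricritical structure of this expansion can be seen directly by minimizing the isotropic Landau form numerically. In the sketch below (Python), the order parameter R grows continuously from zero when A_4 > 0, but jumps discontinuously to a finite value while A_2 is still positive when A_4 < 0, with the tricritical point at A_2 = A_4 = 0:

```python
import numpy as np

def R_min(A2, A4, A6=1.0):
    """Global minimizer of F = A2 R^2 + A4 R^4 + A6 R^6 on a grid."""
    R = np.linspace(0.0, 2.0, 4001)
    F = A2 * R**2 + A4 * R**4 + A6 * R**6
    return R[np.argmin(F)]

for A4 in (+1.0, -1.0):
    Rs = [R_min(A2, A4) for A2 in np.linspace(0.3, -0.3, 7)]
    print(f"A4 = {A4:+}:", np.round(Rs, 3))
# A4 > 0: R rises continuously from 0 as A2 changes sign (continuous transition);
# A4 < 0: R jumps to a finite value already at A2 = A4^2/4 > 0 (first order).
```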
The following example helps in understanding the above point: the positions of the minima and maxima of the function f(θ) = cos θ + a cos 2θ remain at θ = kπ as long as |a| ≤ 1/4, since f′(θ) = −sin θ (1 + 4a cos θ) can only vanish away from θ = kπ if cos θ = −1/(4a), which requires |a| ≥ 1/4. On the critical line t = 2 + 4γ, B_6 is expressed as a function of γ alone, and its sign determines which of the BSS and PSS phases is realized just below the line. As for the nature of the ordered phase, we have shown that the PSS phase is realized in the strong n.n.n. coupling region, as a result of the competition between the two kinds of interaction, while the BSS phase is realized in the weak n.n.n. coupling region. However, it should be noted that a mean-field type analysis like the present work may not give correct information, as pointed out in Ref. [14], about the six-fold anisotropy term, which becomes relatively small compared to the order-parameter fluctuations near the critical line. Further study, such as Monte Carlo simulation, may therefore be needed to clarify the nature of the ordered phase just below the critical line.

Acknowledgment

The author is indebted to S. Hikami for reading the manuscript. This work was supported by a Grant-in-Aid for Scientific Research by the Ministry of Education, Science and Culture.
2014-10-01T00:00:00.000Z
1996-05-23T00:00:00.000
{ "year": 1996, "sha1": "78465d491fd26c6d5657ae17395ec30712c52453", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/9605148", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "78465d491fd26c6d5657ae17395ec30712c52453", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
258148022
pes2o/s2orc
v3-fos-license
Antenatal care utilization on low birth weight children among women with high-risk births

Background: Low birth weight (LBW) is a major public health problem in Indonesia and a leading cause of neonatal mortality. Adequate antenatal care (ANC) utilization would help to prevent the incidence of LBW babies. This study aims to examine the association between ANC utilization and LBW children among women with high-risk birth criteria. The high-risk birth criteria consisted of the 4T: too young (mother's age <20 years), too old (mother's age >35 years), too close (age gap between children <2 years), and too many (number of children >2).

Methods: This study utilized calendar data from the women's module of the 2017 Indonesia Demographic and Health Survey (IDHS), with the unit of analysis restricted to the last birth of women of childbearing age (15-49), numbering 16,627 women. From this number, the analysis was carried out separately for each high-risk birth criterion. Multivariate logistic regression analyses were employed to assess the impact of ANC and socio-demographic factors on LBW among women with high-risk birth criteria.

Results: This study revealed that only among women meeting the too-many-children criterion (>2 children) was adequate ANC utilization significantly associated with LBW, even after controlling for a range of socio-demographic factors (p < 0.05). Under all four criteria, infants born preterm were more likely to be LBW than infants born at term (p < 0.001).

Conclusions: The WHO standards for qualified ANC have not been fully implemented, including the recommendation of at least eight ANC visits, and it is hoped that ANC with health workers at health facilities can be increased. There is also a need for increased monitoring of pregnant women at high (4T) risk to keep up ANC visits in order to reduce LBW births.

Introduction

One of the focuses of the National Mid-Term Development Plan (RPJMN) for 2020-2024 was reducing the maternal mortality rate (MMR) and infant mortality rate (IMR) [1]. The neonatal mortality rate (NMR) and IMR are indicators of child mortality, and these figures have shown an improvement since 1990: the NMR decreased from 20 per 1,000 live births in 2002 to 15 per 1,000 live births in 2017, and the IMR from 35 per 1,000 live births in 2002 to 24 per 1,000 live births in 2017 [2]. However, these figures still have not reached the 2024 targets, whereby the NMR is expected to decrease to 10 per 1,000 live births and the IMR to 16 per 1,000 live births [1].

The main causes of neonatal death in developing countries include low birth weight (LBW) and premature birth; data showed that LBW and premature births accounted for 19% in 2016 [3]. Babies with LBW are defined by WHO as babies born weighing less than 2,500 grams regardless of gestational age [4]. In Indonesia, the percentage of LBW has decreased slowly, from 11.2% in 2000 to 10.2% in 2012 and then to 10.0% in 2015 [4]. Babies with LBW have a higher risk of stunting, low intelligence (IQ), and death in the first 28 days of life [4,5]. In addition, their risk of death before the age of 1 year is 17 times greater than that of infants with normal birth weight [6]. In adulthood, infants born with LBW are at risk of obesity, heart disease, and diabetes [4]. LBW can be caused by premature birth (<37 weeks), small gestational age (SGA), or a combination of both [4,7].
The lower the gestational age, the lower the baby's birth weight, because physiologically and anatomically the fetal organs have not yet grown and developed completely, and the risks of illness and death increase accordingly [8]. Premature birth and fetuses that fail to thrive in the womb are influenced by four maternal factors, namely maternal malnutrition, maternal health problems during pregnancy, maternal characteristics, and other factors [4]. In addition, obstetric factors such as maternal age, both too young and too old, significantly affect LBW [9,10,12-15]. This risk can be prevented or minimized by performing qualified antenatal care (ANC) [16-18]. Since 2016, WHO has recommended that pregnant women have a minimum of eight pregnancy check-ups [19]. WHO provides guidance for pregnant women to have a healthy pregnancy (positive pregnancy) through five interventions and 19 recommendations, as well as several recommendations for specific cases. Since 2020, it has been agreed in Indonesia that pregnant women should make ANC visits at least six times, with at least two contacts with doctors: one in the first trimester to screen for risk factors/pregnancy complications, and one in the third trimester for delivery risk factor screening. Based on the 2017 IDHS, the coverage of ANC visits (>4 times) in Indonesia was 90.6%, and 75% of women had pregnancy checks carried out by health workers [2]. The difference between this study and other similar studies is that it examines the effect of ANC among women with 4T on the incidence of LBW. Therefore, this study aimed to determine the effect of ANC among women with 4T on the incidence of LBW. The hypothesis is that qualified ANC in women of childbearing age with 4T reduces the risk of LBW.

Study design

This study used the 2017 Indonesian Demographic and Health Survey (IDHS) calendar data, from the module for women of childbearing age. The data are mostly retrospective, with each respondent reporting her experience of ANC during pregnancy and her birth history. This study analyzed 49,627 women of childbearing age (15-49 years) with a total of 16,627 last births, because the LBW information available in the IDHS concerns only the last birth.

The independent variables analyzed were ANC quality, area of residence, education level, wealth level, work status, place of ANC examination, ANC examiner staff, and access to information media. The qualified ANC indicator in the WHO guidelines is positive pregnancy [19].
There were only five WHO recommendations for which the data allowed analysis, namely receiving iron, having urine tested for bacteria, receiving tetanus toxoid (TT) injections during pregnancy, making at least eight ANC visits, and being screened for smoking history. The dependent variable was the incidence of LBW among women with 4T.

Data analysis

The data analysis for this study used IBM SPSS version 21. The analysis was carried out descriptively and inferentially. Descriptive analysis, through univariate and bivariate analysis, was conducted to determine the frequency distributions of the variables studied. Inferential analysis was carried out through multivariate analysis with binary logistic regression models (crude OR and adjusted OR) to determine the effect of the independent variables on the dependent variable; a sketch of the equivalent computation is given after the descriptive results below.

Ethics statement

According to the DHS Program, "the procedures and questionnaires for standard DHS surveys are reviewed and approved by The Institutional Review Board (IRB) of ICF International while country-specific DHS protocols are reviewed by the IRB of ICF International and typically by an IRB in the host country". The IRB of ICF International ensures that the protection of human subjects in the survey complies with the U.S. Department of Health and Human Services regulations, while the host country IRB ensures that the survey complies with the laws and norms of the nation. In the downloadable data, the names and addresses of respondents are de-identified. The data were obtained by registering and submitting a request on the Demographic and Health Surveys (DHS) website (https://dhsprogram.com).

Results

The results of the univariate analysis present a description of social, economic, and demographic characteristics, as shown in Table 1. Descriptively, women in this study were most often middle-educated in each category (69% "too young", 52% "too close", 49% "too many", and 45% "too old"). Based on area of residence, the majority of women in the "too young" and "too many" categories were rural dwellers (66% and 53%), almost equal proportions of women in the "too close" category were urban and rural dwellers, and the majority of women in the "too old" category were urban dwellers.

Based on the wealth index, most of the women were in the low wealth index category: women who were "too young" (60%), "too many" (44%), and "too close" (47%). Based on employment status, more than half of the women were not working among those who were "too young" (69%) and "too close" (52%). In addition, more than half of the women underwent pregnancy checks at health facilities: 79% in the "too old" category, and 77% each in the "too young", "too many" and "too close" categories. The descriptive analysis also showed that about four out of five women in each 4T category had their ANC check-ups with health workers. Table 1 also shows that fewer than 15% of women in each 4T category performed qualified antenatal care.
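As flagged in the Data analysis subsection, the crude and adjusted ORs reported in the following paragraphs come from binary logistic regressions. An equivalent computation can be sketched in Python with statsmodels; all column names here are hypothetical placeholders, not the actual IDHS recode variables:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def odds_ratios(fit):
    """Exponentiate logit coefficients into ORs with 95% CIs."""
    ci = fit.conf_int()
    return pd.DataFrame({"OR": np.exp(fit.params),
                         "2.5%": np.exp(ci[0]),
                         "97.5%": np.exp(ci[1])}).drop(index="Intercept")

df = pd.read_csv("idhs2017_last_births.csv")   # assumed prepared extract
sub = df[df["n_children"] > 2]                 # e.g. the "too many" group

crude = smf.logit("lbw ~ anc_qualified", data=sub).fit(disp=0)
adjusted = smf.logit("lbw ~ anc_qualified + preterm + C(education) + "
                     "C(wealth) + C(residence) + media_exposed",
                     data=sub).fit(disp=0)

print(odds_ratios(crude))      # crude OR for ANC quality
print(odds_ratios(adjusted))   # adjusted OR, controlling for covariates
```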
Based on the birth status of the children, almost all were born at term (normal), with percentages above 90% in each 4T category. Likewise, in all 4T categories, most children were born with non-LBW status (above 90%).

Qualified ANC among women of childbearing age with 4T

Table 2 shows the percentage distribution of ANC quality among women with 4T in each category according to background characteristics. Most women with a 4T criterion had non-qualified antenatal care; just under 20% of women with a 4T criterion performed qualified ANC. Women who were "too old" performed qualified ANC (17%) more than women in the other 4T categories. Among all women in the 4T categories, 82% had non-qualified ANC. The higher the education of the women, the more women performed qualified ANC in each 4T category. Most of the women with qualified ANC were found among the highly educated in the "too old" (23%) and "too close" (12%) categories, whereas in the "too many" and "too young" categories, most of the women with qualified ANC were found among the middle-educated, at 16% and 13% respectively.

Based on place of residence, more women living in urban areas performed qualified ANC in each of the 4T categories than women living in rural areas. Furthermore, based on wealth status, the higher the wealth index, the more women performed qualified ANC in each 4T category. Based on employment status, women who were not employed were more likely to perform qualified ANC in the "too young" and "too many" categories (14% each), whereas among women who were "too old" (17%) and "too close" (9%), most of the women with qualified ANC were working. Less than a fifth of 4T women performed qualified ANC at health facilities in each 4T category, and all of them were handled by health professionals.

Women who performed qualified ANC were relatively more numerous among those exposed to information through the media: among women "too old" (21%), "too many" (15%), and "too close" (10%). Meanwhile, in terms of birth status, women who gave birth at term had relatively more qualified ANC in each 4T category than women whose last child was born prematurely. Based on the LBW category, relatively more women with non-LBW babies performed qualified ANC in each 4T category than women with LBW babies.

The incidence of LBW according to ANC and the characteristics of childbearing-age women with 4T

Table 3 shows the results of logistic regression model testing of the characteristics and ANC quality variables against the incidence of LBW among women with 4T. The effects of the variables on the incidence of LBW in each 4T risk model show mixed results. The quality of ANC affected only women with "too many" children, whether tested bivariately or together with other variables. Preterm birth status had a significant influence on the incidence of LBW in all groups of women with 4T, stronger than ANC quality and the other variables. Among the 4T categories, babies born prematurely to women in the "too close" group had the greatest odds of LBW relative to babies born at term; exposure to information through the media also mattered most in this group.
For women with "too young" status, the birth status of the last child, the woman's exposure to media, and the wealth index showed significant effects on the incidence of LBW, whether tested per variable or simultaneously. Women whose last child was born prematurely had 10.48 times greater odds of giving birth to an LBW baby than women whose last child was born at term (AOR: 10.48; 99% CI: 4.74-23.16). In addition, women who were not exposed to the media had 2.72 times greater odds of giving birth to an LBW baby than women who were exposed to the media (AOR: 2.72; 90% CI: 0.87-8.48). Regarding social characteristics, women with a middle wealth index were less likely (0.19 times) to give birth to an LBW baby than women with a high wealth index (AOR: 0.19; 95% CI: 0.05-0.71).

In the second model, "too old", the birth status of the last child, education level, wealth index, and media exposure had significant effects on the incidence of LBW, both in the per-variable and in the simultaneous tests. Qualified ANC in the "too old" group did not show a significant effect on the incidence of LBW, nor did area of residence, place of ANC examination, ANC examiner staff, or employment status. As in the previous model, women in the "too old" group whose last baby was born prematurely had 16.63 times greater odds of giving birth to an LBW baby than women whose last child was born at term (AOR: 16.63; 99% CI: 10.42-26.53). In addition, women with low levels of education had 2.76 times greater odds of giving birth to an LBW baby than women with higher education (AOR: 2.76; 99% CI: 1.30-5.88). Women with a low wealth index were more likely (1.61 times) to give birth to an LBW baby than women with a high wealth index (AOR: 1.61; 90% CI: 0.97-2.65). Beyond education level and wealth index, media exposure in this group also had a significant effect on the incidence of LBW. Interestingly, women who were not exposed to the media were less likely (0.56 times) to give birth to an LBW baby than women who were exposed to the media (AOR: 0.56; 95% CI: 0.87-8.48); in fact, the effect was even larger when tested simultaneously with the other variables.

In the third model, for women with "too many" children, birth status, birth rate, ANC quality, wealth index, and area of residence had significant influences in both the per-variable and the simultaneous tests. Women with a preterm birth had 15.03 times greater odds of giving birth to an LBW baby than women who gave birth at term (AOR: 15.03).
It was quite different in the fourth model, "too close", where the quality of ANC did not show a significant effect on the incidence of LBW. In addition to education level, media exposure and preterm birth status were variables that consistently affected the incidence of LBW. Interestingly, preterm birth status carried a nearly double chance of LBW in this risk group compared with the other risk groups. Women with "too close", where the interval between the last two children was less than two years, who gave birth prematurely had a 21.72 times greater chance of giving birth to LBW babies than those with term births in the simultaneous test. Likewise, women who were not exposed to media in the "too close" group had a greater chance of giving birth to LBW babies than those in the other risk groups.

Discussion

Indonesia has tried to reduce infant mortality. One of the strategies is to prevent the incidence of babies with LBW. The results of this study showed that the incidence of LBW births was almost the same in each 4T category, around 6 to 7%. This figure is lower than in other Asian countries, such as India.18 Likewise, compared with African countries, the LBW rate in Indonesia is lower.17 The results indicate that ANC quality only affects LBW births in the "too many" children category. Even so, previous studies also showed a significant relationship between ANC utilization and mothers who were too old (>35 years), with ANC utilization higher among mothers who were too old.20 However, mothers who were too young had higher knowledge than mothers who were too old.21 A previous study showed that most adolescent births were to mothers with a low education level.22 Women with too many children and non-qualified ANC had a 1.47 times higher chance of giving birth to LBW babies than women with too many children and qualified ANC. This was in accordance with research conducted in Padang, where mothers with fewer than four ANC visits were more likely to give birth to LBW babies than mothers with four ANC visits.23 Similarly, studies conducted in India18 and China8 also stated that a comprehensive antenatal examination was associated with a reduced risk of LBW in infants. Studies conducted in Rwanda,24 Ethiopia,12,16,17,25 and Sri Lanka11 found that a lack of ANC visits was associated with low infant weight. In comprehensive ANC, including a complete number of visits, pregnant women undergo regular checkups, practice healthy living habits, and obtain iron intake during pregnancy.25

The results of this study showed that gestational age at birth has a significant effect on infants with LBW. Premature birth had the most significant impact on infants with low birth weight in all four 4T categories, namely "too young", "too old", "too many", and "too close". The World Health Organization (WHO) stated that premature birth is the cause of about one-third of LBW babies. Studies conducted in Yemen15 and Ethiopia26 showed the same. Likewise, in Abu Dhabi, babies born prematurely had an 18 times higher risk of becoming LBW.7 This probably happens because fetal growth and weight gain mainly occur in the late period of pregnancy, so premature babies receive less nutrition, which causes low birth weight.
Several socioeconomic and demographic characteristics, such as education level, wealth index, and area of residence, were significantly associated with the incidence of infants with LBW. The education variable has a significant relationship with the incidence of LBW in all four categories except for women giving birth too young, where it has no relationship with LBW incidence. In line with research conducted by Nuryani and Rahmawati (2017) in Gorontalo Regency, there was a significant relationship between education level and the incidence of LBW (p=0.017).27 This finding agrees with several studies conducted in other developing countries such as India,18 Ethiopia,25 and Ghana.28 Generally, women with higher education were better informed about the risks of not receiving health care during pregnancy and paid more attention to nutritional intake during pregnancy.18,28,29 On the other hand, women with low education generally had less access to health facilities, especially economically.25 However, research conducted by Sharma et al. (2015) in Nepal and by Rahim FK and Muharry A (2018) in Kuningan showed something different: maternal education was not associated with the incidence of LBW.30,31

Regarding the wealth index in this study, among women with "too old" and "too many", those with a low wealth index were more at risk of giving birth to babies with LBW than those with a high wealth index. Studies in India18 and Sri Lanka11 showed similar results, with the incidence of LBW decreasing as the wealth index increases.

The residence variable is significant only in the "too many" children category, where it has a significant effect on LBW births. Women with too many children who live in urban areas are 1.37 times more likely to give birth to LBW babies than women in rural areas. This is in line with research conducted by Mohammed S et al. (2019), in which the probability of giving birth to an LBW baby was significantly higher among urban residents.32 It differs from the results of the study by Kaur et al. (2019), which found that LBW was more common in rural areas than in urban areas (9.8% vs. 2.0%, p=0.03),33 and from some studies conducted in Ethiopia.16,17 This may be related to the education level of the women, who generally have low and middle education in this study.

Conclusion

Based on bivariate testing or testing together with other variables, qualified ANC only has a significant effect on the incidence of LBW in women with the "too many" criterion. The most influential variable on LBW in women with 4T is premature birth. Besides that, women with a low level of education who give birth too close together have the highest chance of giving birth to LBW babies compared with the other "4 Too" criteria. Likewise, women who are "too old" or have "too many" children with a low wealth index, and women with "too many" children who live in urban areas, have the highest chances of giving birth to LBW babies.
The findings show that the recommendations for qualified ANC according to WHO standards have not been fully implemented. Given that qualified ANC includes at least eight ANC visits, it is hoped that ANC with health workers at health facilities can be increased. It is also necessary to increase the monitoring of pregnant women at risk of 4T so that they continue making ANC visits, to reduce the risk of preterm birth and reduce LBW births. Moreover, education and counseling related to delaying the age of marriage, reproductive health, family planning (birth spacing), and the dangers of 4T should be increased across various information media to reduce the risk of LBW in women with 4T.

Reviewer comments:

1. WHO recommends 4 or more ANC visits, including the 1st ANC visit in the first trimester. However, the authors focused on the effect of ANC visits on LBW and sociodemographic factors among women with high-risk births, regardless of the number of ANC visits. Some women visit a health facility for ANC just before delivery. If the authors could compare the effect on LBW between women with no ANC visits and women with 4 or more ANC visits, the results might be more plausible (NA).

2. For the analysis, you provided the definition of LBW, including the literature. You also need to provide the definition of ANC visits for the inclusion criteria. If one or more ANC visits were your inclusion criteria, your conclusion may mislead the readers. You need to explain in the limitations section why you selected literature mentioning ANC visits only, not the number of ANC visits.

3. Education level: these are levels such as preschool, primary school, lower secondary school, upper secondary, and higher education (diploma, certificate and above). It would be better to write women's educational level in a scientific way, unless you have evidence for such a classification from Table 1. Make the wording uniform across the document (the table uses Low, Middle, High, while the prose says secondary; it lacks consistency).

4. Your outcome of interest is to see the effect of ANC and sociodemographic factors on LBW; however, in Table 1 you presented the birth weight status of some women as NA (what is the importance of presenting this result, if the status is already unknown?).

5. Operationalize the terms Qualified ANC, Non-Qualified ANC, and Non-health worker as ANC provider for women of childbearing age.

Discussion: In the first paragraph you wrote: "This figure is lower than other Asian countries, such as India.18 While comparing it to African countries, the LBW in Indonesia is lower". Correct this paragraph as regards "Asian country" and "African country" - you had only one piece of literature for each claim. This applies to the whole document. Do you think the two sentences differ? (The figure is "lower than both in Asian countries and in Indonesia"...) Write the possible reasons for the discrepancy between your study and other studies, including the relevant factors, in the discussion.

Regarding the sentence "...16,17 This may be related to the education level of women who generally have low and middle education in this study": you had only two pieces of evidence. Generally speaking, 'several' refers to quantities above two or so, but not so many that it's a lot. Perhaps the most common interpretation of 'several' is around three to five, though this can vary greatly depending on the context. Change it to 'some studies'.
Conclusion: Regarding "In the case of ANC visits of at least eight times, it is hoped that ANC with health workers at health facilities can be increased": it would be better to conclude your findings based on your discussion (the number of ANC visits is not related to your study/objective).

1. "It is necessary to review the coverage of ANC in the National Health Insurance (NHI) mechanism, which is only four times, especially for the poor and those with low education." This is not related to your objective.

2. How is reviewing the National Health Insurance (NHI) related to LBW and to ANC? It may show only the ANC attendance, the number of ANC visits, and the services provided during each ANC visit - not the low birth weight risks of the mothers on ANC follow-up (remove the above two sentences).

Are sufficient details of methods and analysis provided to allow replication by others? Yes

If applicable, is the statistical analysis and its interpretation appropriate?

Gouranga Dasvarma, College of Humanities, Arts and Social Sciences, Flinders University, Adelaide, SA, Australia

General comments: This is a useful study of the determinants of low birth weight babies in Indonesia based on data collected at a recent national survey, namely the Indonesia Demographic and Health Survey 2017.

1. The authors have approached the problem of low birth weight babies (LBW) by sensibly selecting the groups of women who are at the highest risk of giving birth to babies with less than the recommended weight of 2,500 grams. Such groups comprise women with the following characteristics at the birth of their children, namely women (i) who are too young (less than 20 years of age), (ii) too old (more than 35 years of age), (iii) who have too many children (3 or more), and (iv) too close (birth interval less than 2 years).

2. The aim of the study is to examine the effects of antenatal care (ANC) on the prevalence of LBW in each of the high-risk groups of women mentioned above, with the hypothesis that qualified ANC reduces the risk of LBW babies in high-risk women.

3. The justification of the study appears to be that, by assumption, the prevalence of LBW is high in the high-risk groups of women and that it can be reduced by good ANC. However, the prevalence of LBW was 7.1% as at the 2017 Indonesia Demographic and Health Survey, and it seems that Indonesia may be on track to achieving a 30% reduction in LBW between 2015 and 2030 as one of the Sustainable Development Goals of the United Nations. Therefore, further justification is needed for the present study.

4. Moreover, a similar study based on data from the 2017 Indonesia Demographic and Health Survey exists (see Safitri et al., 2022 1), which identifies ANC as a determinant of LBW in Indonesia, although the present manuscript focuses on low birthweight among high-risk groups of women. But reference should be made to the Safitri et al. study.

5. The manuscript needs a major revision, particularly a revision of Table 2 and a rewriting of the discussion of the Table 2 findings.

6. The manuscript also requires thorough editing for English.

7. Several other, specific comments are made in the body of the text, which is returned for revision - please also find two attachments of the manuscript with my comments linked here (Attachment 1 and Attachment 2). What new information does your study provide to the field of knowledge? Is it the analysis of low birthweight among high-risk groups of women?

1. Abstract. Background, Line 2. Rewrite as "and is a leading cause of neonatal mortality".

2. Abstract. Methods, Line 3.
Pre-term birth is not included in the four criteria (4Ts) mentioned above.

3. Abstract. Conclusions, Lines 1-2. The sentence reads as if WHO has found that qualified ANC standards have not been fully implemented (in Indonesia). But it is a finding of your analysis, is it not?

4. Methods. Lines 11-12. How was the quality of ANC (qualified ANC) determined? There are no data in the 2017 IDHS about the quality of ANC.

Methods. Line 12. Do you mean to say, "Data were collected at IDHS 2017 only for five WHO recommendations"?

7. Results. Line 3. The "too many" and "too old" categories of women (as well as the "too close" category) also have high proportions in the High education category. This is notable.

9. Results. Lines 3-4. This is not correct. Please rewrite this part as: "the majority of women in the 'too young' and 'too many' categories are rural dwellers, almost equal proportions of women in the 'too close' category are urban and rural dwellers, and the majority of women in the 'too old' category are urban dwellers."

10. Results. Line 6. This is true only for the "too young" women. Please re-write correctly.

11. Results. Lines 10-11. Where is it shown that "four out of five women had relatively more ANC check-ups"?

12. Results. Lines 11-12. How is this true? The table shows that 85% or more of the women in each category had a health worker as their ANC provider. Do you mean to say that most of the health workers are not qualified?

13. Results. Lines 13-14. This result contradicts your hypothesis that women in the 4T categories run the risk of giving birth to babies with pre-maturity and low birth weight.

14. Table 1. Usually, the dependent variable (in this case women in each criterion group) should be shown on the horizontal axis and the independent variables on the vertical axis. This can be done by formatting the layout of the table as landscape. Also, try to put the entire table on one page (i.e., do not split a table between pages; reduce the font size if needed).

Results. First paragraph after Table 1. Table 2, as presented here, shows the distribution of each socio-economic characteristic (independent variable) according to qualified ANC and unqualified ANC for each of the 4T categories of women. But in actual fact, you should show the distribution of qualified ANC and unqualified ANC according to each socio-economic characteristic. In other words, show the column percentages instead of row percentages. Therefore, please re-do Table 2 and re-write its description. Please also show the association (chi-square) between each of the socioeconomic characteristics and unqualified and qualified ANC for each of the 4T categories.

17. Table 3. It appears that you have not used any information derived from Table 2 in performing your analysis in Table 3. Please re-calculate Table 2 as suggested and use the relevant information from that (revised Table 2) to select the pertinent variables for the logistic regression (Table 3).
18. Results. Table 3, Lines 2-3. The "mixed" results may be due to the effects of confounding factors. For example, take any one category, such as "Too old": while this group excludes women who are "Too young", the "Too old" women may have children who are too closely spaced or may have too many children. Similarly, the women who have "Too many" children may be "too old" themselves or may have children that are "too closely" spaced, and the women in the "Too close" category may have too many children or may be too old themselves. Only the "Too young" women would not be subject to confounding factors like too many or too old, but they may still have too closely spaced children. It is for these reasons that you should also analyse the group of women who are Too Old AND Too Close AND Too Many. The "Too young" group may be analysed separately, because "too young" women would have very little chance of having too many children or too closely spaced children.

20. Results. Table 3, Line 15 and Line 28. Why do you refer to the groups as "Model"? Just call them what they are, i.e., "Too old" or "Too many". A model may have the connotation of a separate logistic regression.

21. Discussion. Line 1. LBW. Is LBW a major cause of infant death in Indonesia?

22. Discussion. Line 1. Indonesia has already reduced its IMR by much, but it is pursuing further declines in IMR.

23. Discussion. Lines 4-5. If the prevalence of LBW is already so low in Indonesia (6-7%), then why study it? You should cite the target of LBW in Indonesia or cite the prevalence of LBW in countries where it is lower than in Indonesia, and then justify your study.

24. Discussion. Line 5. So, the results show that your hypothesis (that ANC quality affects the prevalence of LBW) is not true in three out of four categories of women. How do you explain this?

Conclusion. Lines 1-2. Assuming that the headings of your Table 1 are correct, according to the numbers of women in each category, women with too many children number 5,300. Thus, qualified ANC affects 55.5% of the women at risk (the number of women with all the Ts is equal to 9,546).

27. Conclusion. Line 8. Re ANC visits of at least eight times: the recommendation of eight ANC visits from the WHO came out in 2016 and was probably implemented in Indonesia after the 2017 IDHS was conducted. Therefore, in most cases, at the time of the 2017 IDHS the recommendation was to have at least 4 ANC visits. Table 9.2 of the 2017 IDHS final report shows that 90.6% of the women giving birth in the last five years had 4+ ANC visits.

Recommendation: The authors should address the comments, provide further justification of this study and submit a revised manuscript.

Are sufficient details of methods and analysis provided to allow replication by others? Partly

If applicable, is the statistical analysis and its interpretation appropriate? Partly

Are all the source data underlying the results available to ensure full reproducibility? Yes

Are the conclusions drawn adequately supported by the results? Partly

Competing Interests: A few of the authors (Resti Pujihasvuty, Sari Kistiana and Irma Ardiana) are my former students, but I have had no input whatsoever in the preparation of the manuscript. I confirm that this potential conflict of interest did not affect my ability to write an objective and unbiased review of the article.
Reviewer Expertise: Demography, including infant and child mortality, maternal mortality, fertility, population and development, population and environment

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

Are all the source data underlying the results available to ensure full reproducibility? Yes

Are the conclusions drawn adequately supported by the results? Partly

Competing Interests: No competing interests were disclosed.

Reviewer Expertise: Public health and Epidemiology

I confirm that I have read this submission and believe that I have an appropriate level of expertise to confirm that it is of an acceptable scientific standard, however I have significant reservations, as outlined above.

Reviewer Report, 25 May 2023, https://doi.org/10.5256/f1000research.139258.r170911, © 2023 Dasvarma G.

8. Specific comments: Title. A similar study based on data from the 2017 Indonesia Demographic and Health Survey exists (see Safitri et al., 2022 1), which identifies ANC as a determinant of LBW in Indonesia. Acknowledgement and appropriate references should be made to the Safitri et al. study.

Table 1. Sociodemographic characteristics of women with 4T.

Table 2. Sociodemographic characteristics among women with 4T according to the quality of ANC.

Table 3. Relationship of ANC in women of childbearing age with 4T to the incidence of LBW.
Physical Performance Tests Correlate With Patient-reported Outcomes After Periacetabular Osteotomy: A Prospective Study

Introduction: Individuals with hip dysplasia report significant functional disability that improves with periacetabular osteotomy (PAO). Four physical performance measures (PPMs) have recently been validated for use with nonarthritic hip conditions; however, their ability to detect functional improvement and correlate with improvements in popular hip-specific patient-reported outcome (PRO) instruments after PAO is unknown. The purpose of this study was to evaluate the responsiveness of four PPMs up to 1 year after PAO, compare PPMs with established PRO measures at these time points, and report the acceptability and utility of PPMs for assessing outcomes after PAO.

Methods: Twenty-two participants aged 15 to 39 years completed the timed stair ascent (TSA), sit-to-stand five times (STS5), self-selected walking speed, four-square-step test, and seven hip-specific PRO measures before surgery and at approximately 6 months and 1 year after PAO. They completed questions regarding the acceptability and utility of both types of testing. Wilcoxon rank sum tests and unpaired Student t-tests were used to assess differences between time points; Spearman correlation and generalized linear modeling were used to determine the relationship between PPMs and PRO measures.

Results: Six months after PAO, participants showed significant improvements on all seven PRO instruments (P < 0.001) and on the STS5 (P = 0.01). At one year, these improvements were maintained and the TSA also improved (P = 0.03). Improvement in the other PPMs did not reach significance (P = 0.07 and 0.08). The STS5 test demonstrated moderate to strong correlation (|r| = 0.43 to 0.76, P < 0.05) with all PRO measures, and the TSA test demonstrated moderate to strong correlation with almost all measures (|r| = 0.43 to 0.58, P < 0.05). Correlations strengthened on subanalysis of participants with unilateral disease (n = 11) (|r| = 0.56 to 0.94, P < 0.05). All participants (100%) found PPM testing acceptable despite disability; 25% preferred PPMs to PRO measures, whereas 75% of participants found them equal in usefulness.

Discussion: The STS5 and TSA tests demonstrated moderate to very strong correlation with PRO measures at six and 12 months after PAO for dysplasia. These tests could be used as a functional outcome to supplement PRO instruments after PAO.

Periacetabular osteotomy (PAO) is a well-established surgical procedure to treat acetabular dysplasia in the skeletally mature, nonarthritic hip.[1-4] The typical patient is young and active, with the expectation of a return to a high level of function after treatment.
Measuring functional deficit is typically done with hip-specific patient-reported outcome (PRO) instruments such as the hip disability and osteoarthritis outcome score (HOOS),8 International Hip Outcome Tool (iHOT),5 modified Harris hip score (mHHS), Western Ontario and McMaster Universities Osteoarthritis Index,6 or Patient-Reported Outcome Measurement Information System Physical Function (PROMIS PF).7 Although these tools are validated for use in hip preservation surgery and correlate well with one another after PAO,[8-11] PRO instruments can impose substantial test burden and are limited by their reliance on patient recall and self-perception.12,13 Physical performance measures (PPMs) allow objective assessment of impairment and recovery and provide information complementary to PROs.[14-16] Performance-based outcome measures are gaining widespread use to assess recovery after athletic injury and to evaluate the effects of hip and knee osteoarthritis.[17-20] The use of physical performance measures after surgical treatment of nonarthritic hip conditions is not widely reported.21

Four PPMs have recently been explored for use with both hip impingement and dysplasia to correlate with common PRO measures:22,23 the sit-to-stand five times (STS5) test, four-square-step test (FSST), self-selected walking speed (SSWS), and timed stair ascent (TSA).8,19 Participants with symptomatic hip dysplasia demonstrate disability, with slower time to completion or walking speed on all four tests compared with healthy peer subjects.22 The utility of these tests in the postoperative setting has not been explored.

The purpose of this study was to (1) evaluate the responsiveness of these four PPMs at 6 months and one year after PAO, (2) compare these PPMs with established hip-specific PRO measures, and (3) report the acceptability and perceived benefit by patients in assessing postoperative outcomes. We hypothesized that (1) participants would show and maintain significant improvement on all four PPMs after PAO at 6 months and 1 year, (2) PPMs would correlate highly with function-based PRO measures, and (3) participants would find PPM testing acceptable to perform and more useful than PRO instruments.

Methods

This prospective study was approved by our institutional review board. All participants were enrolled at a single institution. Patients aged 15 to 39 years who were indicated for PAO surgery during the 8-month enrollment period (May 2018 to January 2019) were eligible for inclusion. Exclusion criteria included previous ipsilateral femoral or pelvic osteotomy, neuromuscular condition, history of Perthes disease, or slipped capital femoral epiphysis. Participants were compensated up to $100.00 each over the course of the study.

Preoperative Workup

Standing AP radiographs were used to assess the lateral center-edge angle (LCEA) of Wiberg, Tönnis angle, extrusion index, and Tönnis grade. The alpha angle was measured on Dunn lateral and frog-leg lateral views. All measurements were done by a fellowship-trained surgeon (M.C.W.). PAO was indicated for patients who presented to clinic with hip pain, LCEA less than 20° or LCEA 20°-25° with hypermobility, Tönnis grade 0 or 1, and failure of nonoperative treatments including physical therapy, activity modification, and intra-articular steroid injections. Hip arthroscopy in addition to PAO was indicated when there was labral injury or cartilaginous pathology on hip MRI or when there was a history of previous hip arthroscopy.
Outcomes Assessment

PROs and PPMs were collected at four separate study visits: two preoperative visits staged at least 24 hours apart and postoperative visits at 6 months and 1 year. This study used data from the first preoperative visit only; the second preoperative visit was used in a previous study for interrater (intraclass correlation coefficient [ICC] 0.97 to 0.99) and intrarater (ICC 0.83 to 0.93) reliability testing.22 At each assessment, participants completed seven PRO instruments: a visual analog scale (VAS) for pain, the International Hip Outcome Tool short version (iHOT-12),5 the hip disability and osteoarthritis outcome score short version (HOOS PS)24 and pain subscale (HOOS Pain),10 the PROMIS physical function and pain interference adaptive tests (PROMIS PF and PROMIS PI),25,26 and the modified Harris hip score (mHHS).27 PRO questionnaires were administered in a randomized order using a handheld tablet computer. Participants were also asked to report the frequency of opioid use in the past 30 days. After administration of the PRO instruments, the participants proceeded to functional testing with a trained examiner (J.D.) (Figure 1). The standardized PPM protocol has been previously described.22 After performance testing, participants completed an electronic survey assessing (1) perceived difficulty and acceptability of the PPMs, (2) perceived performance compared with previous visits, and (3) how the PPM testing compared in utility and difficulty with PRO testing.

Statistical Analysis

All variables were evaluated for normality, and nonparametric methods were used when indicated. For all numeric variables, the mean, median, minimum and maximum, standard deviation, and range were calculated. The Wilcoxon rank sum test was used to compare PPMs and PRO measures between each data collection; to account for variation in follow-up time points between participants, linear mixed models were used to assess for changes in scores over time, with P values adjusted for multiple comparisons. Unpaired Student t-tests (alpha = 0.05) or the Wilcoxon rank sum test, where appropriate, were used to compare body mass index (BMI), age, and radiographic data. The Fisher exact test was used for comparison of categorical variables, including opioid use and sex. Spearman rank correlations were used to determine the relationship between the PPMs and PRO measures at each time point. Correlations were defined as very strong (r > 0.7), strong (r = 0.61 to 0.69), moderate (r = 0.4 to 0.6), moderately weak (r = 0.31 to 0.39), and weak (≤ 0.3). Statistical analysis was done by a trained statistician using SAS software (SAS version 9.4; SAS Institute).28 Statistical significance was considered P < 0.05, and the Bonferroni-Holm correction was used to correct for multiple comparisons.

Results

Demographics

Thirty-two individuals were enrolled, and 27 of the 32 participants underwent PAO surgery. Of these 27 patients, 22 completed both preoperative and postoperative PRO and PPM data collection and were included in the full statistical analysis (Figure 2). Most participants were female (20/22), and half (11/22) had bilateral hip pain. The 6-month follow-up occurred at an average of 6.3 ± 0.9 months after surgery with a 70% completion rate, and the 1-year follow-up at an average of 12.9 ± 1.9 months after surgery with an 81% completion rate. Subject demographic and radiographic data are detailed in Table 1. One participant had undergone previous hip arthroscopy.
Most participants (18/22) had a concomitant arthroscopy at the time of PAO, which included femoral offset correction (n = 18), labral repair (n = 15), subspine decompression (n = 3), and labral reconstruction (n = 1). Complications after surgery included one superior ramus nonunion with persistent pain, which was treated with open reduction and internal fixation 18 months postoperatively. All but two participants (n = 20/22) underwent removal of implant between the 6-month and 1-year follow-ups. Seven patients with bilateral hip pain also underwent arthroscopic or open surgery on the contralateral hip during the follow-up, including arthroscopic labral repair with capsular plication (n = 1), capsular débridement (n = 1), PAO ± arthroscopy (n = 6), and implant removal (n = 1).

Patient-Reported Outcomes

Scores for all PRO measures improved significantly at 6 months (all P ≤ 0.0002), in some cases reaching the level of healthy control subjects of similar age and sex22 (Table 2). Scores at 6 months and 1 year were not significantly different (all comparisons P > 0.05). For PRO measures with an available minimal clinically important difference (MCID) (iHOT-12,29 HOOS Pain,8 HOOS PS,30 and mHHS8), 86.3 to 94.7% of participants improved by at least the MCID, and the mean change in score for all participants was more than three times the MCID at both follow-ups (Tables 2 and 3).

Physical Performance Measures

At 6 months post-PAO, the mean times for STS5 improved significantly (P = 0.020, Wilcoxon rank sum; Table 4). At 12 months, improvements in STS5 were maintained (P = 0.01), and TSA additionally demonstrated significant improvement (P = 0.03). Changes in FSST and SSWS did not reach significance (P = 0.07 and 0.08 at 6 months and 1 year, respectively). With the generalized linear modeling approach accounting for variation in the time to follow-up, the effect of PAO on STS5 was significant at both six months and one year, and on TSA at 1 year (Supplemental Table 2). When participants who underwent a contralateral hip surgical procedure during the study period were removed and those with unilateral dysplasia (n = 11) were evaluated in isolation, correlations were noted to be substantially stronger at the 1-year time point (Supplemental Table 3, http://links.lww.com/JG9/A140), with |r| > 0.90, P < 0.001, for STS5 and TSA with multiple PROs, and |r| = 0.64 to 0.71, P < 0.03, for FSST with several PRO measures as well.

Patient Surveys

All participants (100%) found PPM testing acceptable to perform. During the follow-up, participants selected TSA as the most helpful test for gauging improvement (n = 17/19 and 18/22 at 6 and 12 months, respectively), followed by STS5 (n = 10/19 and 14/22), FSST (n = 8/19 and 11/22), and SSWS (n = 7/19 and 10/22). Four participants felt that PPM testing was more useful to them than PRO instruments, and 13 participants found PPMs and PRO instruments equally useful. No participants preferred traditional PRO testing to PPMs. Optional written feedback was uniformly positive; one subject at six months stated, "I feel like my performance has gotten better. It makes me feel like I made the right decision about surgery."

Discussion

This study evaluated the responsiveness of four PPMs (STS5, TSA, FSST, and SSWS) and their correlation with hip-specific PRO measures up to 1 year after PAO. Of these four tests, only TSA and STS5 ultimately demonstrated responsiveness in our cohort.
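As an aside for readers who want to replicate the correlation grid reported above, the sketch below computes Spearman correlations between each PPM and each PRO score at one follow-up and labels |r| using the thresholds defined in the Statistical Analysis section. It is a hypothetical illustration, not the study's SAS code: the file and column names are invented, and pandas/SciPy are assumed to be available.

```python
import pandas as pd
from scipy.stats import spearmanr

df = pd.read_csv("pao_followup_12mo.csv")     # hypothetical data extract
ppms = ["sts5_sec", "tsa_sec", "fsst_sec", "ssws_m_per_s"]
pros = ["ihot12", "hoos_pain", "hoos_ps", "mhhs", "promis_pf", "vas_pain"]

def label(r):
    """Bin |r| by the interpretation thresholds given in the Methods."""
    a = abs(r)
    if a > 0.7:
        return "very strong"
    if a > 0.6:
        return "strong"
    if a >= 0.4:
        return "moderate"
    return "moderately weak" if a >= 0.31 else "weak"

for ppm in ppms:
    for pro in pros:
        r, p = spearmanr(df[ppm], df[pro], nan_policy="omit")
        print(f"{ppm:>12s} vs {pro:<10s} r = {r:+.2f} ({label(r)}), p = {p:.3f}")
```

Because PPM times decrease with improvement while most PRO scores increase, negative correlations are expected, which is why the magnitude |r| is what gets interpreted.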
In our predominantly female cohort, mean walking speed before surgery (1.2 m/s) was slower than that of healthy control subjects in the FAI study by Sheehan et al23 (mean 1.31 m/s) and healthy control subjects of similar age and sex (mean 1.5 m/s). SSWS did not improve for our participants post-PAO and failed to correlate with any PRO measures, suggesting that a walking speed test does not sufficiently target the deficits associated with dysplasia. Similarly, FSST also failed to improve post-PAO, with mean test times remaining approximately 6.0 to 6.5 ± 1.4 to 1.6 seconds throughout the study duration. Although requiring some single-leg balance, the hip is relatively extended during this test, which may explain the lack of responsiveness in our cohort.

The two physical performance tests that performed well in our study, STS5 and TSA, were also the most physically demanding. These tests evaluate coordinated lower extremity strength and require rapid and repetitive hip flexion. On the subjective survey, participants correctly perceived these two tests as being both challenging to perform and a useful gauge of their functional abilities even after surgery. Considering that STS5 can be done in virtually any examination space (without need for a staircase) and correlated moderately to very highly with PRO measures preoperatively and postoperatively, it should be of value to the hip surgeon interested in tracking functional improvements after PAO.

Baseline deficits and improvements in both PROs and PPMs varied considerably on an individual level. PROs at all time points were in line with values previously published for PAO.9,11,31 The ANCHOR cohort reported a mean HOOS Pain improvement of 28.3 (95% confidence interval, 25.3-30.1) at an average of 3 to 5 years of follow-up in their 391 patients, compared with our mean increase of 34.9 ± 22.2 points at one year. Older age, female sex, elevated BMI, and concomitant ipsilateral procedures were found in that study to be independent predictors of patient-reported outcomes. Our cohort at one year had a similar mean age (25.5 ± 9.1 years, compared with 25.4 ± 9.5 years in the ANCHOR cohort) and similar BMI (24.6 vs. 24.9 kg/m²); however, our study had a greater proportion of female subjects (91% versus 79%). Most patients (81%) in our cohort also had concomitant arthroscopy (the percentage was not reported in the ANCHOR study); these differences may explain the greater mean improvement we observed in PROs post-PAO.

There were three participants who did not achieve the MCID in PRO measures; interestingly, all three had bilateral hip dysplasia, with pain also in the contralateral hip. At one year, one participant was continuing to experience dysfunction related to their second PAO surgery. The other two participants were among the oldest in our cohort (aged 37 and 39 years), with Tönnis grade 1 hips on preoperative evaluation; these hips were examined arthroscopically at the time of PAO, with evidence of labral damage and cartilage fissuring at the chondrolabral junction, likely indicating, overall, a more advanced level of hip degeneration. Regarding the effect of bilateral disease, correlations between PRO measures and PPMs strengthened when evaluating only those with unilateral dysplasia (n = 11, Supplemental Table 3, http://links.lww.com/JG9/A140).
Half (n = 11) of our cohort had bilateral dysplasia at the time of enrollment, and seven of these 11 participants underwent contralateral PAO and/or arthroscopy between six months and one year after their first PAO. We hypothesized that one might expect a greater functional deficit at baseline in participants with bilateral disease than in those with a single affected hip, and either a larger or smaller functional improvement depending on whether the contralateral hip was also treated. The proximity of surgery on the contralateral hip must also be taken into consideration when evaluating hip function in this cohort. Future studies with larger sample sizes of both unilateral and bilateral hips may identify significant functional differences between unilateral and bilateral disease and even the ideal timing for treatment of the second hip. Our small sample size, loss to follow-up, and dropout after study initiation likely affected our ability to fully evaluate the correlation between PPMs and PRO measures.

A primary limitation of this study was the small sample size. PPMs require in-person data collection, which limited our ability to enroll participants who would not follow up in person for up to one year because of the long travel distance to our clinic. We also lost three participants to follow-up, although 81% returned for PPM testing one year after surgery. The reasons for loss to follow-up included cancellation of visits due to COVID-19 (n = 1), prolonged medical illness (n = 1), and relocation for school (n = 1). Another limitation is the homogeneous nature of the patient cohort we evaluated; although reflective of the local population in our area, it may limit the generalizability of our results to other, more diverse populations.

In conclusion, we recommend use of the STS5 and TSA physical performance tests for both preoperative evaluation and monitoring of functional improvement after PAO. At 6 months and 1 year after surgery, these tests correlated moderately to very strongly with common hip-specific PRO measures and provided an objective means of assessing disability that was both appealing to patients and easily performed without specialized equipment.
Energy barriers for diffusion on stepped Rh(111) surfaces

Energy barriers for different moves of a single Rh adatom in the vicinity of steps on the Rh(111) surface are studied with molecular statics. Interatomic interactions are modeled by the semi-empirical many-body Rosato-Guillope-Legrand potential. We systematically calculate barriers for the descent at straight steps, steps with a kink, and small islands, as well as barriers for diffusion along the step edges. The descent is more probable at steps with a {111} microfacet and near kinks. Diffusion along a step with a {100} microfacet is faster than along a step with a {111} microfacet. We also calculate barriers for diffusion on several surfaces vicinal to Rh(111).

Introduction

Surface diffusion is a very important process in many phenomena, in particular in crystal growth. That is why the diffusion of single adatoms on stepped metal surfaces has recently been widely investigated both experimentally and theoretically. Energy barriers for the moves of an adatom on a surface with steps or islands are not easily accessible by experiment, but for many elementary processes they can be calculated on the microscopic level by molecular dynamics. The knowledge of the barriers can then be utilized in the construction of kinetic Monte Carlo models to study growth processes.

The diffusion energy barriers have already been calculated for various metals and different surface orientations. For example, in the case of the fcc (111) surface there are calculations for Al [1,2], Ag [3], Au [3], Cu [4], Ni [5], Pt [6,7], and Ir [8]. In most of these studies semi-empirical potentials were used due to their simplicity, allowing a systematic study of numerous possible processes. Comparable ab initio calculations demand much more computer power, so the number of investigated processes must be reduced considerably. Although recent first-principles calculations [6] indicate that in the case of Pt(111) the semi-empirical potentials may be insufficient, in many other studies they lead to reasonable results, and their application has helped to at least qualitatively understand diffusion energetics and reveal new processes (see e.g. [9]).

In this paper we study diffusion on the stepped Rh(111) surface. The research was motivated by a recent STM experiment on unstable growth of Rh(111) [10], where coarsening due to a step-edge barrier was observed over three orders of magnitude of deposited amount. A more recent observation on Pt(111) [11] indicates almost no coarsening over a similar interval of deposited material. Whereas the step-edge barriers on Pt(111) surfaces have been extensively studied (see references in [6,7]), results for Rh(111) are not available. We present here a systematic study of energy barriers for inter-layer transport as well as for diffusion along the step edges.

Method

Our simulations were done for finite atomic slabs with a free surface on the top, two atomic layers fixed on the bottom, and periodic boundary conditions in the two directions parallel to the surface. The slab representing the substrate of the (111) surface was 11 layers thick with 448 atoms per layer. We used systems of approximately 5000 atoms consisting of 19 to 44 layers, with 110 to 240 atoms per layer, for diffusion along channels on the vicinal surfaces (311), (211), (331), (221) and (322). The semi-empirical many-body Rosato-Guillope-Legrand (RGL) potential [12], including interactions up to the fifth nearest neighbors [13], was used. For computational details see [7,8].
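As a rough illustration of this slab geometry (not the code used in the paper), the sketch below builds a smaller Rh(111) slab with the two bottom layers fixed, using the ASE package. ASE does not ship an RGL/tight-binding potential, so a generic Lennard-Jones calculator with loosely chosen parameters stands in for it here; the lateral size is also much smaller than the paper's 448 atoms per layer.

```python
from ase.build import fcc111, add_adsorbate
from ase.calculators.lj import LennardJones
from ase.constraints import FixAtoms
from ase.optimize import BFGS

# 11-layer Rh(111) slab, periodic in-plane, vacuum above the free surface.
slab = fcc111("Rh", size=(4, 4, 11), vacuum=10.0)
add_adsorbate(slab, "Rh", height=2.2, position="fcc")   # single adatom (tag 0)

# In ASE, layer tags grow downward from the surface, so tags 10 and 11
# are the two bottom layers, which are frozen as in the paper.
slab.set_constraint(FixAtoms(mask=(slab.get_tags() >= 10)))

# Stand-in potential: sigma roughly matched to the Rh nearest-neighbor
# distance; epsilon and rc are illustrative placeholders, NOT the RGL model.
slab.calc = LennardJones(sigma=2.4, epsilon=0.5, rc=6.0)
BFGS(slab, logfile=None).run(fmax=0.02)                 # relax the free atoms
print("relaxed slab energy:", slab.get_potential_energy(), "eV")
```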
The energy barrier for a particular diffusion process was obtained by testing systematically various possible paths of an adatom. The path with the lowest diffusion barrier was chosen as the optimum one, and the diffusion barrier E_d was calculated as E_d = E_sad − E_min, where E_sad and E_min are the total energies of the system with the adatom at the saddle point and at the equilibrium adsorption site, respectively. We considered both jump and exchange processes. The minimum energy path for jump diffusion was determined by moving an adatom in small steps between two equilibrium positions and by allowing the adatom to relax in the plane perpendicular to the line connecting the two equilibrium positions. The rest of the atoms in the system were allowed to relax in all directions. The energy barrier for an exchange process was determined by moving the edge atom that should be replaced in small steps toward its final position. This final position was one of the neighboring equilibrium sites. The moving atom was allowed to relax in the plane perpendicular to the exchange direction at each step, whereas the other atoms, including the adatom, relaxed freely in all directions.

Flat surface

In our simulation we obtained an energy barrier of 0.15 eV for self-diffusion on the flat Rh(111) surface, which is in good agreement with experiments. In the field ion microscope (FIM) experiment [14] the barrier 0.15 ± 0.02 eV was found, and recently the value 0.18 ± 0.06 eV was obtained in the STM experiment [10] from the temperature dependence of the island density. The results of molecular statics calculations and experimental values are summarized in Table 1. We also calculated the binding energy of the supported dimer. The value E_B = 0.57 eV is in good agreement with 0.6 ± 0.4 eV obtained in the STM experiment [10].

Descent to the lower terrace

We studied the descent of an adatom to the lower terrace from both types of steps on the (111) surface, i.e., step A with a {100} microfacet and step B with a {111} microfacet (see Fig. 1). We performed calculations for several geometries: straight steps, steps with a kink, and also for a small island of 3 × 3 atoms. For all considered geometries we systematically investigated all possible adatom jumps and pair exchange processes. Our results for straight steps and steps with a kink are summarized in Table 2. The energy barrier for a direct jump from the upper to the lower terrace is 0.73 eV for straight step A and 0.74 eV for straight step B. The presence of a kink decreases the barrier for the jump to 0.57 eV on both steps. We can see that the energy barriers for the jumps are always larger than for the exchange processes, which are 0.47 eV and 0.39 eV for the A and B step, respectively. In more complex geometries the number of competing processes to be compared energetically increases; e.g., in the case of step B with a kink we consider four types of processes according to which step-edge atom (denoted by r1, r2, r3, or r4) is pushed out (see Fig. 2). We call them exchange next to corner, exchange over kink I, exchange over kink II, and exchange next to kink, respectively. We consider all possible combinations of initial and final positions. For example, in the case of the exchange next to the corner there are three possible processes: 1 → r1, 2 → r1, 3 → r1. In the process 3 → r1, e.g., the adatom starts in the fcc site labeled 3 and pushes out the edge atom r1. Two possible directions of motion for the pushed atom r1 are shown schematically in Fig. 2.
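Before turning to the remaining descent results, the constrained drag procedure described above for jump diffusion can be prototyped as follows, continuing the ASE sketch from the Method section. Again, this is only an illustration with a stand-in potential, not the RGL calculation of the paper: the adatom is stepped along the line between two adsorption sites and allowed to relax only in the plane perpendicular to the drag direction, while all unfixed substrate atoms relax freely.

```python
import numpy as np
from ase.calculators.lj import LennardJones
from ase.constraints import FixAtoms, FixedPlane
from ase.optimize import BFGS

def drag_barrier(slab, adatom_index, start, end, nsteps=11):
    """Drag one adatom from `start` to `end` (Cartesian, Angstrom) and return
    E_d = E_sad - E_min along the constrained-relaxation energy profile."""
    drag_dir = (end - start) / np.linalg.norm(end - start)
    energies = []
    for lam in np.linspace(0.0, 1.0, nsteps):
        image = slab.copy()
        image.calc = LennardJones(sigma=2.4, epsilon=0.5, rc=6.0)  # stand-in
        image.positions[adatom_index] = start + lam * (end - start)
        image.set_constraint([
            FixAtoms(mask=(image.get_tags() >= 10)),  # two bottom layers fixed
            FixedPlane(adatom_index, drag_dir),       # relax only perpendicular
        ])
        BFGS(image, logfile=None).run(fmax=0.02)
        energies.append(image.get_potential_energy())
    energies = np.array(energies)
    return energies.max() - energies.min(), energies

# Usage sketch: the adsorbate added last is the final atom of the slab.
# barrier, profile = drag_barrier(slab, len(slab) - 1, start_xyz, end_xyz)
```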
The lowest barrier for inter-layer transport is the barrier for two exchange processes near the kink on step B (0.24 eV); i.e., the Ehrlich-Schwoebel barrier is only 90 meV. The barriers for exchange processes on step A are significantly higher. For a 3 × 3 island, the minimal values were obtained for the exchange of the atom in the middle of the edge (0.43 eV for the A-type edge and 0.24 eV for the B-type edge). We found that for Rh(111), similarly to Pt(111) [7], the barriers for descent at a small island are significantly lower than for descent at straight long steps.

Fig. 3 shows the energy profile for diffusion along two edges of a large island. The structure in the middle corresponds to diffusion around the corner formed by the two edges. The angle contained by the edges is 120°. There is a small minimum just at the corner positions. The transport between the two edges is asymmetric.

Diffusion along the step edges

We found that diffusion along the straight step of type A is faster (the barrier is 0.40 eV) than along the step of type B (the barrier is 0.81 eV). This could be attributed to a purely geometrical effect due to the different local geometries along the steps. The adatom diffusing along step B has to pass closer to the topmost atoms of the lower terrace than when it is diffusing along step A (see Fig. 1).

There are no available experimental data for diffusion along the steps on the Rh(111) surface. Only one measurement, on the (311) and (331) surfaces, has been published [14]. In order to have some comparison, we calculated the energy barriers for diffusion along steps on vicinal surfaces with terraces: (211) and (311) with step edges of type A, and (332), (221) and (331) with step edges of type B. The results are summarized in Table 3. The vicinal surfaces are ordered according to the distance between terraces. We can see a clear tendency as the distance between steps decreases: for the A step the barrier along the step increases with increasing step distance, whereas for the B step it decreases. We obtained barriers of 0.45 eV and 0.78 eV for diffusion along steps on the (311) and (331) surfaces, respectively. Experimental results of FIM measurements are the energy barriers E_311 = 0.52 eV and E_331 = 0.62 eV [14]. There is qualitative agreement between the experimental and calculated data: E_311 < E_331.

Conclusion

Using the RGL potential we calculated the energy barrier for self-diffusion on the flat Rh(111) surface and the binding energy of the supported dimer, which are in good agreement with the experimental data. With the same potential, we systematically studied energy barriers for descent at straight as well as rough steps on Rh(111). We found that the lowest energy barrier for descent to the lower terrace is for the exchange process near a kink on step B. We also calculated barriers for diffusion along step edges on the Rh(111) surface and along step edges on several vicinal surfaces. We found that diffusion along step A is faster than along step B, which is in qualitative agreement with the FIM experiment. We observed that these barriers are slightly affected by the step-step interaction. We expect that, due to the rather large barriers for diffusion along steps, both steps will be rough during growth at lower temperatures and the inter-layer transport will prefer step B. At a higher temperature, diffusion along step A starts to be active and descent on both steps will be possible.
However, step B will remain rough, and descent at this step will be easier. In island growth this would imply that the B edges of an island grow faster than the A edges; therefore, the B edges become shorter. However, the number of kinks available for easy descent on a shorter B edge will be lower. Hence we expect that, for a certain interval of temperatures, the shape of a growing island will be asymmetric, with longer A steps. This picture seems to be in agreement with the morphologies presented in [10].

[Figure captions and tables omitted in this extraction; the figures contrast the step A ({100} microfacet) and step B ({111} microfacet) geometries.]
Entangling Power in the Deterministic Quantum Computation with One Qubit

The deterministic quantum computing with one qubit (DQC1) is a mixed-state quantum computation algorithm that evaluates the normalized trace of a unitary matrix and is more powerful than its classical counterpart. We find that the normalized trace of the unitary matrix can be directly described by the entangling power of the quantum circuit of the DQC1, so the nontrivial DQC1 is always accompanied by non-vanishing entangling power. In addition, it is shown that the entangling power also determines the intrinsic complexity of this quantum computation algorithm, i.e., larger entangling power corresponds to higher complexity. Besides, it is shown that non-vanishing entangling power is always present in other similar tasks of the DQC1.

I. INTRODUCTION

Quantum entanglement is employed in most quantum information processing tasks (QIPTs), including quantum algorithms and quantum communications [1]. There is no doubt that quantum entanglement is an important physical resource in quantum information processing. However, quantum entanglement cannot be competent for all QIPTs [2-5]. Strong evidence has shown that some QIPTs display a quantum advantage even though no entanglement exists in the tasks [6,7], which has also been verified in experiment [8]. One such remarkable piece of evidence is the scheme of deterministic quantum computing with one qubit (DQC1), which evaluates the normalized trace of a unitary matrix by only measuring a control qubit, irrespective of the complexity of the unitary matrix of interest [9]. But DQC1 cannot be performed efficiently using only classical computation [9,10]. So what quantum property leads to the quantum advantage of the DQC1?

It has been suggested that quantum discord could be the quantum nature of the DQC1 [10]. However, there exist some general unitary matrices (for example, Hermitian unitary matrices) that lead to output states without any quantum discord. In this sense, it seems that quantum discord should not be the source of the quantum advantage of the DQC1 either, as was first suspected in Ref. [31]. Thus, what the quantum nature of the DQC1 actually is remains an open question.

In this paper, we find that the trace evaluation and the complexity of the DQC1 can be directly related to the entangling power, defined as the maximal average ability to entangle a qubit with a pure state drawn from a given ensemble. We find that the entangling power of the DQC1 circuit can be written directly in terms of the normalized trace of the unitary transformation to be measured. Based on this result, we find not only that the nontrivial DQC1 possesses entangling power, but also that the intrinsic complexity (which will be given at the end) of the evaluation of the normalized trace is determined by the entangling power. In this sense, we say that the entangling power could be used to signal the quantum nature of the DQC1. As a supplement, we also consider other QIPTs using the similar DQC1 circuit. One will find that the entangling power is always present if the information of the system can be extracted by the control qubit.

II. ENTANGLING POWER AND THE NORMALIZED TRACE

We begin with a brief introduction of the generalized DQC1 described by the quantum circuit given in Fig. 1 [9].
The initial state can be written as $\rho_0=\rho_c^\alpha\otimes\frac{\mathbb{1}_n}{2^n}$, where $\rho_c^\alpha=\frac{1}{2}(\mathbb{1}_1+\alpha\sigma_z)$, $\sigma_{x,y,z}$ are Pauli matrices, the subscript 'c' denotes the control qubit, the superscript $\alpha$ means that the density matrix depends on the parameter $\alpha$, and $\mathbb{1}_n$ denotes the identity on $n$ qubits. It is noted that in the standard DQC1 [7], the initial control qubit is given by $|0\rangle$. Through the quantum circuit, the state given in Eq. (1) is transformed into the final state $\rho_{n+1}=\tilde{U}_n\left(H\rho_c^\alpha H^\dagger\otimes\frac{\mathbb{1}_n}{2^n}\right)\tilde{U}_n^\dagger=\frac{1}{2^{n+1}}\left(\mathbb{1}_{n+1}+\alpha|0\rangle\langle1|\otimes U_n^\dagger+\alpha|1\rangle\langle0|\otimes U_n\right)$. Thus if we measure the control qubit in the bases of $\sigma_x$ and $\sigma_y$, respectively, one obtains the corresponding expectations $\frac{\alpha}{2^n}\mathrm{Re}(\mathrm{Tr}U_n)$ and $-\frac{\alpha}{2^n}\mathrm{Im}(\mathrm{Tr}U_n)$. In this way, the normalized trace of the unitary matrix $U_n$ is obtained solely by measurements on a single qubit, irrespective of the complexity of the unitary matrix. This shows the quantum advantage of the DQC1 in reducing computational complexity. In this task, the initial state $\rho_0$ is obviously mixed, so in a practical scenario it has to be prepared through one of its pure-state realizations $\{p_i,|\varrho_i\rangle\}$ such that $\rho_0=\sum_i p_i|\varrho_i\rangle\langle\varrho_i|$, $\sum_i p_i=1$, where $|\varrho_i\rangle$ is normalized but not necessarily orthogonal. An intuitive observation shows that the DQC1 circuit produces an entangled final state from $|\varrho_i\rangle$ whenever $U_n$ is nontrivial. Therefore, it is natural to relate this kind of entanglement to the mechanism behind the quantum advantage of the DQC1. To do so, we introduce a variant of the concept of entangling power. It was initially defined for a unitary transformation by averaging the entanglement produced by this unitary transformation on separable states subject to some kind of distribution [32]. However, some special QIPTs do not cover all separable states, so it is necessary to give an explicit definition well suited to the given QIPT. In the DQC1, we therefore consider the entangling power of the controlled unitary transformation $\mathbb{1}_n\oplus U_n$, with the assistance of the Hadamard gate $H$, subject to the initial ensemble $\mathbb{1}_n/2^n$. In other words, we quantify the ability of the whole DQC1 circuit to entangle the control qubit with the $n$-qubit pure state selected from the initial ensemble $\mathbb{1}_n/2^n$. With this aim, we give the following rigorous definition. Definition.-The entangling power of the DQC1 circuit is given by $E_p^\alpha\left(\tilde{U}_n\right)=\max_{\{q_i,|\varphi_i\rangle\}}\sum_i q_i E\left[\tilde{U}_n\left(H\rho_c^\alpha H^\dagger\otimes|\varphi_i\rangle\langle\varphi_i|\right)\tilde{U}_n^\dagger\right]$, where $E[\cdot]$ represents any good entanglement measure [1]. Here we let $E[\cdot]=2(1-\mathrm{Tr}\rho_r^2)$, with $\rho_r$ the reduced density matrix of the state under consideration [1]. The maximum is taken because of the non-uniqueness of the realization of $\mathbb{1}_n/2^n$. In addition, $\rho_c$ is not limited to a pure state, which, besides the restricted ensemble, distinguishes this definition from the original entangling power. Next, we give our main results on $E_p(\tilde{U}_n)$ in two theorems. Theorem 1.-The entangling power defined in Eq. (3) for the standard DQC1 circuit corresponding to $\rho_c^1=|0\rangle\langle0|$, i.e., $\alpha=1$, is given by $E_p(\tilde{U}_n)=1-\frac{|\mathrm{Tr}U_n|^2}{4^n}$. Proof. Substituting $\rho_c^1=|0\rangle\langle0|$ and any $n$-partite pure state $|\varphi_i\rangle$ chosen from the ensemble $\mathbb{1}_n/2^n$ into the circuit of Fig. 1, the final state after these substitutions can be written as $|\Psi_i\rangle=\frac{1}{\sqrt{2}}\left(|0\rangle|\varphi_i\rangle+|1\rangle U_n|\varphi_i\rangle\right)$. The reduced density matrix obtained by tracing out the control qubit satisfies $\mathrm{Tr}\rho_r^2=\frac{1}{2}\left(1+|\langle\varphi_i|U_n|\varphi_i\rangle|^2\right)$. So the entangling power can be expressed as $E_p(\tilde{U}_n)=\max_{\{q_i,|\varphi_i\rangle\}}\sum_i q_i\left(1-|\langle\varphi_i|U_n|\varphi_i\rangle|^2\right)\leq 1-\frac{|\mathrm{Tr}U_n|^2}{4^n}$. The inequality comes from the concavity of the entanglement measure $E[\cdot]$, and the maximum is attained by the realization $\mathbb{1}_n/2^n=\sum_i\tilde{q}_i|\phi_i\rangle\langle\phi_i|$, where $\tilde{q}_i=\frac{1}{2^n}$ and $|\phi_j\rangle=\frac{1}{\sqrt{2^n}}\sum_k e^{i\frac{2jk\pi}{2^n}}|\upsilon_k\rangle$, with $|\upsilon_k\rangle$ the eigenvectors of $U_n$. The proof is completed.
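To make the trace-estimation step and Theorem 1 concrete, the following minimal NumPy sketch (our own illustration; the function name, the Haar-style random construction and all variable names are ours, not from the paper) propagates the initial state through the Hadamard and controlled-$U_n$, reads the normalized trace off the control-qubit expectations, and numerically checks that the averaged linear entropy over the Fourier-type realization reproduces $1-|\mathrm{Tr}U_n|^2/4^n$.

import numpy as np

def dqc1_check(n=3, alpha=1.0, seed=7):
    """Illustrative DQC1 sketch: estimate Tr(U)/2^n from the control qubit
    and check Theorem 1 numerically. Names are ours, not from the paper."""
    rng = np.random.default_rng(seed)
    d = 2 ** n
    # Haar-like random unitary U_n from the QR decomposition of a Gaussian matrix.
    q, r = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    U = q * (np.diag(r) / np.abs(np.diag(r)))

    # Control qubit rho_c^alpha = (1 + alpha*sigma_z)/2, then the Hadamard gate.
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
    rho_c = np.diag([(1 + alpha) / 2.0, (1 - alpha) / 2.0])
    rho0 = np.kron(H @ rho_c @ H, np.eye(d) / d)

    # Controlled-U_n on control (x) target, as the block matrix 1_n (+) U_n.
    CU = np.block([[np.eye(d), np.zeros((d, d))], [np.zeros((d, d)), U]])
    rho_f = CU @ rho0 @ CU.conj().T

    # Control-qubit expectations; the sign pairing of <sigma_y> with Im(Tr U)
    # depends on the controlled-gate convention.
    sx = np.kron(np.array([[0, 1], [1, 0]]), np.eye(d))
    sy = np.kron(np.array([[0, -1j], [1j, 0]]), np.eye(d))
    ex = np.trace(sx @ rho_f).real
    ey = np.trace(sy @ rho_f).real
    print("Tr U          :", np.trace(U))
    print("DQC1 estimate :", (ex + 1j * ey) * d / alpha)

    # Theorem 1 check (alpha = 1): each Fourier state |phi_j> built from the
    # eigenvectors of U_n has <phi_j|U|phi_j> = Tr(U)/2^n, and the pure output
    # carries E = 2(1 - Tr rho_r^2) = 1 - |<phi_j|U|phi_j>|^2.
    vecs = np.linalg.eig(U)[1]
    E_avg = 0.0
    for j in range(d):
        phases = np.exp(2j * np.pi * j * np.arange(d) / d) / np.sqrt(d)
        phi = vecs @ phases
        E_avg += (1.0 - abs(np.vdot(phi, U @ phi)) ** 2) / d
    print("E_p (numeric)   :", E_avg)
    print("1 - |TrU|^2/4^n :", 1.0 - abs(np.trace(U)) ** 2 / d ** 2)

dqc1_check()

Both printed pairs agree to numerical precision, which is the content of the trace-estimation step and of Theorem 1 in this special case.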
It is quite interesting that the entangling power for the standard DQC1 is directly described by the normalized trace of the measured $U_n$. As long as $\mathrm{Tr}U_n\neq 2^ne^{i\theta}$, i.e., $U_n\neq\mathbb{1}_ne^{i\theta}$, the entangling power does not vanish. This means that the DQC1 will demonstrate the quantum advantage. Otherwise, the entangling power vanishes for $U_n=\mathbb{1}_ne^{i\theta}$, but the unitary matrix $U_n$ in this case can easily be evaluated by classical computation, so it is a trivial case. Theorem 2.-The entangling power for the generalized DQC1 circuit corresponding to $\rho_c^\alpha=\frac{1}{2}(\mathbb{1}_1+\alpha\sigma_z)$, i.e., $0<\alpha<1$, is given by $E_p^\alpha(\tilde{U}_n)=\alpha^2\left(1-\frac{|\mathrm{Tr}U_n|^2}{4^n}\right)$. Proof. If $0<\alpha<1$, the control qubit is obviously in a mixed state. Based on the definition of the entangling power given in Eq. (3), the entanglement of the (mixed) output state $\varrho_i$ corresponding to each input $|\varphi_i\rangle$ has to be evaluated on its pure-state realizations $\varrho_i=\sum_j r_j|\gamma_j\rangle\langle\gamma_j|$, with the subscript $i$ corresponding to $\varrho_i$. Thus the entangling power can be rewritten as a max-min optimization, where the exchange of the maximum and the minimum is attributed to the independence of the realizations $\{r_j,|\gamma_j\rangle\}$ and $\{q_i,|\varphi_i\rangle\}$. Consider the eigendecomposition of $\varrho_i$ given in Eqs. (14) and (15); any decomposition of $\varrho_i$ can be given in terms of this eigendecomposition. Let $\varrho_i=\sum_j r_j|\gamma_j\rangle\langle\gamma_j|$ be one decomposition, with matrix form $\varrho_i=\Psi W\Psi^\dagger$, where the columns of $\Psi$ correspond to $|\gamma_j\rangle$ and the diagonal entries of the diagonal matrix $W$ correspond to $r_j$. Therefore we have $\Psi\sqrt{W}=\Phi\sqrt{M}T$, where $T$ denotes a right unitary matrix with $TT^\dagger=\mathbb{1}$. So $|\gamma_j\rangle$ can be given in terms of the eigenvectors as in Eq. (16), with the coefficients defined in Eqs. (17) and (18). Substituting Eq. (16) into Eq. (13), we arrive at Eq. (19). Based on Eq. (7) (or Theorem 1), we have Eq. (20). Inserting Eqs. (17) and (18) into Eq. (20), one finds that the minimum is achieved when $T_{1j}$ and $T_{2j}$ are all real or all imaginary for every $j$, which yields the stated result. The proof is completed. From the above two theorems, one can see that the result given in Theorem 2 reduces to Theorem 1 for $\alpha=1$. That is, the expression for the entangling power $E_p^\alpha(\tilde{U}_n)$ pertains to all values of $\alpha$. In addition, the entangling power of the generalized DQC1, including the standard case, is directly described by the normalized trace of the measured $U_n$. As in the analysis of the standard DQC1, if $U_n\neq\mathbb{1}_ne^{i\theta}$, which is the nontrivial case, the DQC1 demonstrates the quantum advantage of quantum computation and the entangling power does not vanish. In addition, it can easily be seen that the entangling power increases with increasing $\alpha$. When $\alpha=0$, the control qubit is maximally mixed (proportional to the identity) and extracts nothing about the measured $U_n$; in this case the entangling power vanishes, which is consistent with our expectation. III. THE COMPLEXITY WITH THE ENTANGLING POWER In fact, our entangling power is also closely related to the complexity of the DQC1. The complexity of the DQC1 is characterized by two contributions: the measurement complexity $L(\varepsilon)$, which describes how many rounds of measurement must be performed on the control qubit for a given standard deviation $\varepsilon$, and the input complexity $n$, which denotes the number of qubits that must be input into the DQC1 circuit. In the usual analysis of the complexity of DQC1, only the measurement complexity $L(\varepsilon)$ is considered. It is thus stated that the complexity $L(\varepsilon)$ depends only on the accuracy $\varepsilon$ that we demand, rather than on the scale of the measured $U_n$, because $L(\varepsilon)=\ln(1/P_e)/\varepsilon^2$, where $P_e$ is the probability that the estimate is farther from the true value than $\varepsilon$ [10]. For example, demanding $P_e=0.05$ at $\varepsilon=0.01$ requires $L\approx\ln(20)/10^{-4}\approx 3\times10^4$ measurement rounds, independently of $n$. When we consider a practical experiment, however, the standard deviation should not exceed the true value of the measured quantity.
In this sense, the complexity will also depend on the true value of the measured observables to different extents. Thus, instead of the standard deviation $\varepsilon$, it is more reasonable to describe the accuracy in terms of the relative error, defined by $\epsilon=\varepsilon/|X|$, with $X$ the true value of the measured quantity. In the DQC1, $\sigma_x$ and $\sigma_y$ are measured on the control qubit, corresponding to the normalized trace of $U_n$ (actually to its real and imaginary parts, respectively). Let the final target relative error be $\epsilon\geq\max\{\epsilon(\sigma_x),\epsilon(\sigma_y)\}$, with $\epsilon(\sigma_x)$ and $\epsilon(\sigma_y)$ denoting the target relative errors for the measurements of $\sigma_x$ and $\sigma_y$, and let the corresponding standard deviations be fixed by these relative errors as in Eqs. (24)-(26). Substituting Eqs. (24)-(26) into Eq. (9), the entangling power can be written as $E_p^\alpha(\tilde{U}_n)=\alpha^2-\frac{M}{L}$, with $M=\frac{\ln[1/P_e(\sigma_x)]}{|\epsilon(\sigma_x)|^2}+\frac{\ln[1/P_e(\sigma_y)]}{|\epsilon(\sigma_y)|^2}$. Thus, for a given $M$, the measurement complexity $L$ is directly determined by the entangling power $E_p^\alpha(\tilde{U}_n)$. Since $M$ is determined by the target errors and is independent of the scale of the measured $U_n$, we regard the complexity $L$ at fixed $M$ as the intrinsic complexity. Larger entangling power means larger intrinsic complexity $L$. Thus one finds that the intrinsic complexity is directly determined by the entangling power. IV. DQC1-LIKE CIRCUITS IN OTHER TASKS We now generalize the DQC1 circuit to general QIPTs, from which one will find that nontrivial tasks with a DQC1 circuit are indeed accompanied by non-vanishing entangling power. Suppose the circuit is used to extract information on some state $\rho_n$ instead of $\mathbb{1}_n/2^n$, and the control qubit is in the general quantum state $\rho_c=\frac{1}{2}(\mathbb{1}_1+\mathbf{P}\cdot\boldsymbol{\sigma})$, with $\mathbf{P}$ the polarization vector and $\boldsymbol{\sigma}$ the corresponding vector of Pauli matrices. The final state $\rho_f$ of $\rho_c$ after the circuit can be written as $\rho_f=\mathrm{Tr}_n\left[\tilde{U}_n\left(H\rho_cH^\dagger\otimes\rho_n\right)\tilde{U}_n^\dagger\right]$. The linear entropy of $\rho_f$ is given by $L(\rho_f)=2\left(1-\mathrm{Tr}\rho_f^2\right)$. (29) In order to accomplish this extraction of information, $L(\rho_f)$ should include at least some information on the state $\rho_n$ or on the unitary operation $U_n$, depending on the aim, since one hopes to extract the information through the control qubit. That is, $L(\rho_f)$ should not vanish. In this case, we can obtain theorems similar to Theorem 1 and Theorem 2. Theorem 3. Let the entangling power of the DQC1 circuit subject to $\rho_n$ be $E_p^{P}(\tilde{U}_n,\rho_n)$. Then $E_p^{P_3}(\tilde{U}_n,\rho_n)$ is bounded from below as in Eq. (30) and does not vanish for $L(\rho_f)>0$; and $E_p^{P}(\tilde{U}_n,\rho_n)=(\lambda_1-\lambda_2)^2E_p^{P_3}(\tilde{U}_n,\rho_n)$, (31) where the superscript $P_3$ means $\mathbf{P}=(0,0,1)^T$, the superscript $P$ denotes the general case of $\rho_c$, and $\lambda_i$ are the square roots of the eigenvalues of the matrix $\rho_c\sigma_z\rho_c^*\sigma_z$ in decreasing order. Proof. The proof is quite similar to that of Theorem 2. The details are given in the Appendix. One can easily check that Theorem 1 and Theorem 2 are covered by this theorem. V. CONCLUSIONS In summary, we have shown that the normalized trace of the unitary transformation $U_n$ can be directly described by the entangling power of the DQC1 circuit. In addition, the entangling power also determines the intrinsic complexity of the evaluation of the normalized trace of the measured unitary matrix. In this sense, we think that the entangling power can serve as a signature of the quantum advantage of the DQC1. Furthermore, we have presented a generalization of the DQC1 circuit to other QIPTs. We find that the entangling power does not vanish if the information can be extracted by the control qubit. Appendix. In the derivation of the lower bound (30), the first inequality holds because the purity of $\rho_r^i$ is not more than 1, and we consider the relation between the eigendecomposition $\rho_n=\Phi M\Phi^\dagger$ and the other decompositions, similar to the relation between Eqs.
(15) and (16). One can find that the lower bound of $E_p^{P_3}(\tilde{U}_n,\rho_n)$ given in Eq. (30) vanishes if $[U_n,\rho_n]=0$. However, in this case one can easily prove that $E_p^{P_3}(\tilde{U}_n,\rho_n)$ does not vanish unless $U_n=e^{i\theta}\mathbb{1}_n$, which leads to zero $L(\rho_f)$. Thus we show that any nontrivial QIPT with a DQC1 circuit is accompanied by entangling power. In order to show that Eq. (31) holds, we have to rewrite the initial control qubit $\rho_c$ given in Eq. (28). Similar to Eqs. (14) and (15), any decomposition of $\rho_c$ can be related to its eigendecomposition, with $\cos\theta=P_3/\Gamma$ and $\Gamma=\sqrt{\sum_{k=1}^{3}P_k^2}$. Thus any decomposition $\rho_c=\Psi'W'\Psi'^\dagger$ can be written via $\Psi'\sqrt{W'}=\Phi'\sqrt{M'}T'$, with $T'$ the right unitary matrix. Substituting any one possible pure state $[\Psi'_{11},\Psi'_{12},\cdots]^T$ together with $\rho_n$ into the DQC1 circuit, one arrives at the final state, whose coefficients satisfy $r'_j=\sqrt{x_j'^2+y_j'^2}$. Similar to Eq. (20), the entangling power can then be given by Eq. (31), where $\lambda_i$ are the square roots of the eigenvalues of the matrix $\rho_c\sigma_z\rho_c^*\sigma_z$ in decreasing order. The proof is completed.
2013-07-04T03:31:45.000Z
2013-02-19T00:00:00.000
{ "year": 2013, "sha1": "3dd6a994300c47c4aa0e54294d9ec50801d6a46b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1307.1196", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "3dd6a994300c47c4aa0e54294d9ec50801d6a46b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
18243823
pes2o/s2orc
v3-fos-license
Neck Strength Imbalance Correlates With Increased Head Acceleration in Soccer Heading Background: Soccer heading is the use of the head to directly contact the ball, often to advance the ball down the field or score. It is a skill fundamental to the game, yet it has come under scrutiny. Repeated subclinical effects of heading may compound over time, resulting in neurologic deficits. Greater head accelerations are linked to brain injury. Developing an understanding of how the neck muscles help stabilize and reduce head acceleration during impact may help prevent brain injury. Hypothesis: Neck strength imbalance correlates with increased head acceleration during impact while heading a soccer ball. Study Design: Observational laboratory investigation. Methods: Sixteen Division I and II collegiate soccer players headed a ball in a controlled indoor laboratory setting while player motions were recorded by a 14-camera Vicon MX motion capture system. Neck flexor and extensor strength of each player was measured using a spring-type clinical dynamometer. Results: Players were served soccer balls by hand at a mean velocity of 4.29 m/s (±0.74 m/s). Players returned the ball to the server using a heading maneuver at a mean velocity of 5.48 m/s (±1.18 m/s). Mean neck strength difference was positively correlated with angular head acceleration (rho = 0.497; P = 0.05), with a trend toward significance for linear head acceleration (rho = 0.485; P = 0.057). Conclusion: This study suggests that symmetrical strength in neck flexors and extensors reduces head acceleration experienced during low-velocity heading in experienced collegiate players. Clinical Relevance: Balanced neck strength may reduce head acceleration and cumulative subclinical injury. Since neck strength is measurable and amenable to strength training intervention, it may represent a modifiable intrinsic risk factor for injury. Minimizing head acceleration has been a focus of recent research in many sports, including American football, 9 rugby, 31,32 lacrosse, 25 and soccer. 33,34 Protective headgear does not appear to be a valid solution for soccer players, 4,34 hence other strategies have been explored. Heading technique, neck muscle strength, and head size are potential intrinsic risk factors that have been investigated to date. 1,7,26,46 Modeling of head and neck motion has identified the stiffening effect that neck muscles adopt when experienced players anticipate head-to-ball contact. 4,24,41 This stiffening is likely achieved by volitional eccentric contraction of the sternocleidomastoid muscles in the anterior cervical region as the ball makes contact with the head and the head moves backward. Concentric sternocleidomastoid muscle activity follows as the head moves forward and the ball rebounds from the head. 1,4 The upper trapezius muscles are also involved in the heading motion. This muscle stiffening may allow the kinetic energy of the ball to be absorbed by the head and torso rather than just the head, thus minimizing the acceleration that the head experiences by increasing the relative mass of the player and extending the time of contact with the ball. 4,42,43 Neck musculature may also play a role in damping the flexion/extension oscillations that the head experiences during and after the act of heading. 19,41 Damping is any effect that tends to reduce the amplitude of oscillations in an oscillatory system. In this case, the neck muscle acts as a viscoelastic mechanism producing an effective shock absorber.
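The damping role attributed to the neck musculature can be pictured with a one-degree-of-freedom viscoelastic (spring and dashpot) model of the head-neck complex. The short sketch below is purely illustrative, with invented parameter values, and is not a model used in this study; it simply shows that a larger damping coefficient c yields a smaller rebound amplitude for the same impulsive impact.

import numpy as np

def peak_rebound(c, m=4.5, k=2000.0, v0=2.0, dt=1e-4, t_end=1.0):
    """Peak displacement of a damped head-neck oscillator m*x'' + c*x' + k*x = 0
    after an impulsive impact (initial velocity v0). Values are illustrative
    only: m ~ head mass (kg), k ~ neck stiffness (N/m), c ~ viscous damping
    (N*s/m) supplied by the musculature."""
    x, v = 0.0, v0
    peak = 0.0
    for _ in range(int(t_end / dt)):          # semi-implicit Euler integration
        v += (-c * v - k * x) / m * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# Higher damping -> smaller oscillation amplitude after the same impact.
for c in (5.0, 50.0, 150.0):
    print(f"c = {c:6.1f} N*s/m -> peak displacement {peak_rebound(c) * 100:.2f} cm")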
Greater muscle stiffness may be a principal factor in lower head angular acceleration in head-neck stabilization. 46 Further evidence exists in motor vehicle accident research 23 : awareness of impending impact may reduce the risk of injury through stiffening/damping. Investigations into vestibular reflexes 17 during a sudden head drop have also highlighted the neck muscles' ability to stabilize the head. In soccer, this neck muscle stabilizing effect may be particularly important during low-ball-velocity practice scenarios, where the neck muscles are attempting to purposefully direct the ball rather than redirect the ball without effort. Such scenarios are commonplace in practices for novice players. In this investigation, the authors sought to correlate head acceleration during heading with neck strength. While neck strength has not been correlated directly with head acceleration, 26 it was hypothesized that the difference between neck flexion and neck extension strength may be a predictive factor of head acceleration. Achieving a balance between agonist and antagonist muscle activation has become an established means of preventing injury in other body parts. 5 Previous comparison of novice and experienced soccer players 19 suggests that differences in neck muscle coordination may be responsible for higher acceleration during heading by novices. Optimal damping of the ball's kinetic energy would be expected in the setting of coordinated and balanced neck flexor (sternocleidomastoid) and neck extensor (trapezius) contraction, maximizing the viscoelastic dashpot effect and diminishing the head's acceleration. While heading coordination is a learned skill that all experienced players are expected to achieve, neck strength varies between populations. 26,46 Greater muscle strength is expected to correlate with greater viscoelastic resistance, as both rely on a greater number of muscle filament cross-bridges. 8 As with other body areas, the head-neck complex is expected to be stable, with symmetry in neck strength between flexors and extensors. This may be most important in practice situations at low incoming ball velocity, because such situations necessitate greater voluntary effort and hence neck muscle activation. Both male and female athletes were included in the study, in the expectation that there would be differences in mean neck strength between sexes. Thus, the study was designed to test the main hypothesis that neck strength imbalance correlates with head acceleration. EXPERIMENTAL DESIGN AND METHODS In this cross-sectional study, college-level soccer players were recruited from Division I and II programs. Prior to enrollment, all subjects were informed of the risks and benefits of participating, and written informed consent was obtained. The Institutional Review Boards at both Albany Medical College and Rensselaer Polytechnic Institute approved this protocol. Furthermore, the variables used in this study were not available at the time of data collection; thus both the tester and the subjects were blinded to the values generated at the time of testing. MEASURING MOVEMENT OF SUBJECT HEAD, SUBJECT BODY, AND BALL DURING HEADING After subjects provided demographic information (age, weight, height, and years of soccer experience), markers were attached with Velcro™ tape (VELCRO USA, Manchester, New Hampshire) to the subject in preparation for the motion capture experiment. The acceleration of the subject's head during heading was measured using a 14-camera Vicon MX3 Motion Capture System (Vicon Motion Systems, Los Angeles, California).
During preliminary testing, the motion capture system was able to record the position and movement of the subject marker set and the ball simultaneously at a rate of 450 Hz. Given that the impact between the ball and the player lasts approximately 20 milliseconds, 19 this capture rate is sufficient to detect the peak acceleration of impact. 47 A stationary marker set was used to estimate the sampling error at 450 Hz. To record motion, 11 retro-reflective markers were attached to anatomic landmarks: the suprasternal notch (Clav) and the xyphoid (Strn), as well as the seventh cervical (C7) and tenth thoracic (T10) vertebrae (Figure 1). Torso markers were placed principally to facilitate kinematic analysis of heading technique. Markers at both temples (left and right front of head; LFH, RFH) and along the parieto-occipital suture (left and right back of head; LBH, RBH) provided data for kinetic analysis of head acceleration. A minimum of 3 markers is required to determine an object's rotational motion; however, redundant markers are often used in case 1 marker becomes obscured. Hip position was recorded by markers located at the first sacral vertebra (S1) and the anterior superior iliac spines (L/R ASIS) and was used by the Vicon software to identify the subject and differentiate subject markers from the ball markers. The soccer ball had 6 low-profile soft markers attached, enabling simultaneous recording of ball speed and subject movement. To record a single heading trial, soccer balls (450 g college-regulation Adidas, inflated to 82.7 kPa) were served to the subjects by an investigator from 3 meters away, mimicking a soccer practice scenario of low ball velocity. Subjects were asked to return the ball to the experimenter's hands by executing a header. To be included, recordings had to contain all of the markers for the entirety of the heading maneuver. If a marker was not visible or the recording was incomplete, the next suitable trial was used. Pilot testing showed that 1 in 4 headers was of sufficient quality for further analysis, so approximately 20 headers were recorded for each subject to obtain 5 usable recordings. Five was chosen to minimize the effects of fatigue while obtaining a sample representative of the normal variation from header to header. CALCULATION OF TRANSLATIONAL HEAD ACCELERATION The translational acceleration of the subject's head during the task was calculated by taking the numerical first derivative of each head marker's position, providing velocity vectors in each direction (X', Y', and Z'). The numerical derivative of these velocity vectors was used to calculate component acceleration vectors for each marker. Because the skull is a rigid body, the component acceleration vectors were averaged to find a single head acceleration vector. 19 Commercial motion capture studios use similar techniques built around rigid body segments that combine to form full-body skeletons. 48 Using multiple markers to calculate acceleration reduces intratrial error. Within each frame of data, the acceleration of the 4 head markers (RFH, LFH, RBH, LBH) was averaged to obtain the subject's mean head acceleration. The peak head acceleration for an entire trial was identified by assuming that the greatest absolute head acceleration during the time of impact (from when the ball first touches the subject's head until the ball leaves) was due to impact with the ball. The average of these values across each subject's 5 trials was reported in the final analysis.
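The derivative chain described above (and the angular computation of the next subsection) is straightforward to reproduce. In the sketch below, only the 450 Hz sampling rate comes from the study; the array layout, function names and synthetic data are our own illustrative scaffolding.

import numpy as np

FS = 450.0  # motion-capture sampling rate (Hz)

def head_acceleration(markers):
    """markers: array of shape (4, n_frames, 3) holding the RFH, LFH, RBH, LBH
    trajectories in metres. Returns the frame-wise head acceleration magnitude
    (m/s^2), averaging the four markers since the skull moves as a rigid body."""
    vel = np.gradient(markers, 1.0 / FS, axis=1)   # first derivative: velocity
    acc = np.gradient(vel, 1.0 / FS, axis=1)       # second derivative: acceleration
    head_acc = acc.mean(axis=0)                    # rigid-body average over markers
    return np.linalg.norm(head_acc, axis=1)

def angular_acceleration(markers):
    """Second derivative of the sagittal head angle, using the normal of the
    plane through the four head markers (rad/s^2). Here the angle is taken to
    the vertical; the study referenced the horizontal serve axis instead, but a
    constant-offset reference does not affect the second derivative."""
    v1 = markers[1] - markers[0]                   # LFH - RFH
    v2 = markers[2] - markers[0]                   # RBH - RFH
    normal = np.cross(v1, v2)
    normal /= np.linalg.norm(normal, axis=1, keepdims=True)
    angle = np.arccos(np.clip(normal[:, 2], -1.0, 1.0))
    return np.gradient(np.gradient(angle, 1.0 / FS), 1.0 / FS)

def peak_during_impact(signal, impact_slice):
    """Greatest absolute value during the ball-contact window (~20 ms)."""
    return np.abs(signal[impact_slice]).max()

# Illustrative usage on synthetic data: 1 s of four rigidly co-moving markers.
t = np.arange(int(FS)) / FS
base = np.stack([0.1 * np.sin(2 * np.pi * 2 * t), np.zeros_like(t), 1.7 + 0.02 * t], axis=1)
offsets = ([0.07, 0, 0], [-0.07, 0, 0], [0.05, -0.1, 0], [-0.05, -0.1, 0])
markers = np.stack([base + off for off in offsets])
impact = slice(200, 209)                           # ~20 ms at 450 Hz (made-up window)
print("peak linear acc :", peak_during_impact(head_acceleration(markers), impact))
print("peak angular acc:", peak_during_impact(angular_acceleration(markers), impact))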
CALCULATION OF ANGULAR HEAD ACCELERATION Head angular motion was evaluated in the sagittal plane using the second numerical derivative of the angle defined between the head normal vector and the floor vector (the horizontal axis was used as a reference, as the ball was served parallel to this axis). The head normal was defined as a vector perpendicular to the plane of the 4 head markers (Figure 2). Combinations of the head markers were used to create vectors, the cross-products of which were used to calculate an average head normal vector. The head-floor angle was defined as the angle between the head normal and the floor and was plotted as a function of time throughout the trial. The numerical second derivative of this angle was calculated to find the subject's angular head acceleration. As with peak translational acceleration, the peak angular acceleration was identified as the highest value during the time of impact. The resulting peak angular acceleration values were averaged across each subject's 5 trials and reported in radians per second squared. MEASURING NECK STRENGTH Neck strength measurements were taken using a spring-type clinical dynamometer (Baseline Push-Pull Dynamometer, Fabrication Enterprises Inc, White Plains, New York). The subject was seated facing the sensor and was restrained at the level of the shoulders to help isolate the neck muscles and reduce accessory movement (Figure 3). A special adapter allowed the sensor to be fitted to the forehead. The sensor was horizontal and secured at the level of the subject's forehead when the subject was seated upright. The dynamometer automatically recorded the highest force generated during any one push. The subject attempted 5 isometric neck flexion movements at maximum voluntary exertion. The maximum force (in Newtons) generated by each attempt was recorded, with as much rest time as the subject desired between exertions. Subjects rested an average of 10 to 20 seconds between attempts. After completion of flexion, subjects then rotated 180° (facing away from the sensor) with the back of the head cradled by the sensor. Subjects then provided 5 isometric neck extensions, and the maximum force generated by each attempt was recorded by the dynamometer. Neck strength imbalance was defined as mean flexion strength minus mean extension strength. STATISTICAL ANALYSIS Independent samples t tests were used to compare differences in mean neck flexion strength, mean neck extension strength, and mean strength imbalance between sexes. Differences were considered statistically significant for P ≤ 0.05. When no sex-based differences were identified, data from all subjects (men and women) were pooled, and Spearman rank correlations between the neck strength measures and the peak linear and angular head accelerations were computed. RESULTS Sixteen subjects, 8 men and 8 women, consisting of 6 forwards, 4 midfielders, 5 defenders, and 1 goalkeeper, were tested. The mean age was 20.5 years (±1.9 years). There were no significant differences between sexes in mean neck flexion strength (P = 0.201), mean extension strength (P = 0.130), or mean imbalance (P = 0.631), justifying the pooling of data (Table 1). The ball was pitched to subjects at a mean velocity of 4.29 m/s (±0.74 m/s), and subjects returned it at a mean velocity of 5.48 m/s (±1.18 m/s). Figures 4, 5, and 6 depict a typical heading event. The dip in the ball velocity is due to the drop that occurs when the ball hits the player's forehead (Figure 4). First, the head accelerates into the ball and then rapidly slows with impact (Figure 5).
The head then speeds up as it launches the ball forward, going through a few flexion/extension oscillations. The remainder of the acceleration is negative as the player begins to decelerate his or her head and neck after accomplishing the heading task. Data indicate a maximum error in linear acceleration of the system of 2.19 m/s 2 . Angular acceleration is slightly negative prior to impact, as the head angularly accelerates into the ball (Figure 6). DISCUSSION This study shows that symmetrical strength in neck flexors and extensors may reduce the head acceleration of experienced collegiate players at low ball velocity. In contrast, previous investigations that did not explore the neck flexor/extensor strength difference were unable to correlate lower head acceleration with enhanced neck strengthening. 26 It is not known whether these results apply to a younger population, which may be at highest risk of head injury from repetitive heading. Studies in movement science have used accelerometers to measure head acceleration because they afford a high sample rate. 13,33 However, the sensors, headgear, and wires often associated with accelerometers may impede natural motion. A 450 Hz capture rate satisfies the Nyquist criterion, 47 and motion capture measures head acceleration naturally, without the interference of wires and extra sensors. Realistic movement was captured at low ball velocities. These acceleration values are consistent with previous investigations. 12,13,33,34,44,45 Achieving and maintaining a balance in neck strength may be a key preventive technique for limiting acceleration, hence limiting the potential risks of repetitive heading in soccer. 6,11,21,22,[27][28][29][30]38,39 Balancing agonist and antagonist muscle activation is an established injury prevention strategy for other areas such as the knee. 5 Balanced neck musculature may be particularly beneficial for younger players learning the game and would perhaps offer a more objective, quantitative parameter for deciding when to introduce heading than the abstract age limits currently in existence. 36 Increasing ball velocities might magnify the relationship between head acceleration and neck strength. Neck strength assessment could be used for pre-participation screening of soccer players to identify those most at risk, and strengthening regimens could be employed during preseason conditioning to lower the risk of injury during soccer participation. Such strength and conditioning interventions have been successful in reducing the incidence of anterior cruciate ligament injury in soccer. [14][15][16] Because of the small cohort of subjects in this study and the lack of a power analysis, sex differences and player position or experience were inadequately investigated as predictive factors in head acceleration. There was a high standard deviation in mean flexion and mean extension strength in each sex. Therefore, no definitive conclusions can be drawn about the strength imbalance between sexes. This study is further limited because the measurement order of neck flexion and extension strength was not randomized. This may have biased the results toward greater flexion strength, because flexion was always tested first. CONCLUSION This cross-sectional study of collegiate soccer players suggests a correlation between neck strength imbalance and angular head acceleration during heading.
While the exact mechanism underlying this correlation is not clear, a balance of neck flexor/extensor strength may reduce head acceleration by increasing the relative mass of the head and neck and damping the flexion/extension oscillations that the head experiences during and after the act of heading.
2018-04-03T06:03:40.322Z
2013-07-01T00:00:00.000
{ "year": 2013, "sha1": "1416f909d7a4d3647e0db4c3e174477c0a223fbc", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc3899908?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "93c27ca3eda3115328d6c4298ff84134e218728c", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
229298540
pes2o/s2orc
v3-fos-license
Heterotopic gastric mucosa in the gallbladder—a rare find Abstract It is universally known and accepted that the development of a certain type of tissue outside its usual location, as in the gastrointestinal tract, can occur. This is a relatively common situation in the upper region of the gastrointestinal tract. However, the development of gastric mucosa in the gallbladder is a rare find. The following is the case of a 22-year-old male with an 18 mm gallbladder polyp, who electively underwent a laparoscopic cholecystectomy and was diagnosed at the histopathological level with heterotopic gastric mucosa in the gallbladder. This brief article also aims to provide a reflection on the possible evolution of neoplasms from this histological change, based on the doubts raised in the literature. INTRODUCTION Heterotopic or ectopic tissue is defined as the presence of tissue outside its normal location, devoid of neural, vascular or anatomic connection to the main body of the organ in which it normally exists [1]. Heterotopic gastric mucosa is most often located in the upper gastrointestinal tract [2]. However, it can appear throughout the whole gastrointestinal tract, as is known, for example, in Meckel's diverticulum [3]. The development of gastric mucosa in the gallbladder is a rare find, with ∼34 cases reported in the existing literature. Other tissues such as liver, adrenal and thyroid have already been described in the gallbladder [1]. As heterotopic tissue may promote carcinogenesis of the gallbladder, close attention should be paid to any occurrence of such lesions in this anatomical region. CASE REPORT A 22-year-old male with smoking habits was periodically followed in internal medicine consultations for epigastric abdominal pain; endoscopic examinations identified a positive Helicobacter pylori test. Once the eradication of H. pylori was completed and confirmed, the patient started experiencing symptoms of recurrent pain in the right hypochondrium. His blood tests were normal. An abdominal ultrasound was performed, which revealed an 18 mm nodular image adhering to the internal wall (not moving with positional changes) in the infundibular region. It was echogenic with an anechoic center. These aspects were compatible with a sessile polyp with a necrotic center, associated with surrounding parietal thickening. In order to obtain further clarification, endoscopic ultrasonography was carried out, which confirmed a homogeneous echogenic lesion with central hypoechogenicity, with a largest transverse diameter of 18 mm, adhering to the wall (Fig. 1). In this context, the patient was proposed for elective laparoscopic cholecystectomy, which was performed without complications. After surgery, microscopy revealed a cavitated polyp consisting largely of gastric body mucosa, although pyloric-type mucosa was also found, with no intestinal metaplasia or epithelial dysplasia identified (Fig. 2, macroscopic examination; Fig. 3, microscopy). DISCUSSION In a systematic review on this subject, it is described that this change is found equally in males and females, with an average age of 36.4 years [3]. Clinically, heterotopic gastric mucosa manifests symptoms such as colicky pain in the epigastrium or right hypochondrium, associated with nausea and vomiting [1], although sometimes it can also be asymptomatic or even an accidental finding [4].
About 50% of patients had normal blood examinations; however, some had elevated transaminases and gamma-glutamyltransferase or leukocytosis, caused by disturbed bile flow leading to inflammatory reactions. On ultrasound, a polypoid mass with a broad or sessile base, usually hyperechoic, was found. Regarding the localization, the cystic duct and the gallbladder neck were the most frequent sites [3]. Heterotopic gastric mucosa in the gallbladder is a rare condition that raises many doubts regarding the causes of its development and the consequences thereof. There are some proposed causes for the appearance of heterotopic gastric mucosa, such as the entrapment of primitive gastric tissue [6], abnormal development, heterotopic differentiation or metaplastic differentiation [1]. However, due to the absence of a clear embryological explanation and the extreme rarity of the condition, the etiology remains unknown. Regarding the consequences, Ishii et al. [7] suggested that heterotopic gastric mucosa may have the potential for carcinogenesis. Although no malignant transformation has yet been reported, dysplasia in heterotopic gastric mucosa in the gallbladder has already been reported [7]. Thus, it is considered that carcinoma must be ruled out in polypoidal lesions of the gallbladder >1.0 cm, due to the high incidence of gallbladder carcinoma in sessile polypoidal lesions [8]. Another possible consequence of the existence of gastric tissue in the gallbladder is ulceration [1]. Thus, considering that preoperative diagnosis is impossible, it is necessary to use imaging characteristics to establish a differential diagnosis between benign and malignant polyps. Benign polyps are usually <10 mm in size, whereas carcinoma or heterotopic gastric mucosa may have larger dimensions [9]. If computed tomography is performed, one difference is the hypovascularization of carcinoma in contrast to the hypervascularization of heterotopic gastric mucosa [10]. Another way to differentiate is by location, as heterotopic gastric mucosa is usually located in the neck of the gallbladder, while carcinoma can infiltrate the gallbladder fossa. In conclusion, since the final diagnosis is made by histopathology, it is the surgeon's role to consider possible heterotopic gastric mucosa in young patients with symptoms of cholecystitis or cholelithiasis and gallbladder polyps, while not neglecting the differential diagnosis of a possible carcinoma. It is worth remembering that, whatever the cause underlying the symptoms, cholecystectomy is always the indicated intervention for large polyps and symptomatic patients. DECLARATION OF PATIENT CONSENT The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient has given his consent for his images and other clinical information to be reported in the journal. The patient understands that his name and initials will not be published, and due efforts will be made to conceal his identity.
2020-11-12T09:04:22.357Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "15881029fa692af13da2f36b38d19b98db8ee810", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.1093/jscr/rjaa490", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "afb325a3114eb3faf7d1f07a86294f882836e0a7", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
247355317
pes2o/s2orc
v3-fos-license
The structure and magnetic parameters of the Fe-Cr-Co additive alloy The structure of an Fe-Cr-Co magnetic material manufactured by selective laser melting (SLM) on a RussianSLM FACTORY unit from spherical powder with particle size below 80 μm was studied. The powder was produced by melt atomization. Samples were manufactured at different scanning speeds and laser powers to study the structure and the magnetic and mechanical parameters. From the measured hysteresis loops, data were obtained indicating an improvement of the magnetic characteristics (Br, Hcb and (BH)max) of the SLM samples in comparison with similar samples obtained by foundry technologies. Introduction In mechanical engineering, laser additive technologies are increasingly used. One technology that has spread recently is selective laser melting of powder (SLM) [1][2][3]. In this technology, a monolithic material is built from metal powder by successively depositing small volumes, which melt and crystallize where the focused laser beam is directed. The movement of the beam is controlled by a computer according to a 3D model. A characteristic feature of the technology is the rapid alternation of acts of melting and crystallization of the powder mixture, as a result of which a composite structure in the form of a mixture of crystallites of different morphology is formed in the track exposed to the laser. Local melting acts and structural-phase transformations realized under non-equilibrium thermodynamic conditions are not sufficiently studied, although they are widely encountered in various technological applications with transient modes. The interest in using SLM for magnetic materials arises because, with the transition to fine metal powders, the density of electrons in the valence band and the conduction band of materials changes dramatically [4][5]. This is reflected in the properties governed by the behavior of electrons, primarily the magnetic ones. The aim of the work was to test SLM for the manufacture of permanent magnets from the Fe - 25 wt. % Cr - 15 wt. % Co hard magnetic alloy. It is necessary to obtain powders and to manufacture samples with parameters close in magnitude to those of magnets made by the traditional manufacturing technology. Materials and experimental methods The Fe - 25 wt. % Cr - 15 wt. % Co alloy belongs to a group of precision hard magnetic materials. It is used for the manufacture of permanent magnets. The basis of the alloy is iron (Fe), whose content can vary in the range from 45 % to 64 %; cobalt, chromium and impurities are also present. The chemical composition of the powder obtained by atomization on the Hermiga 75/3VI unit with induction heating of the crucible (figure 1) is presented in Table 1. The initial raw material for atomization was ingots provided by JSC "S-magnet", Moscow, Russia. The atomization was performed at a temperature of ~1650 °C in an argon atmosphere, followed by cooling at rates from 10^5 to 10^8 °C/s. After spraying, the powders were sieved to the desired fraction of less than 80 μm, satisfying the requirements of SLM. The manufacture of the necessary samples from the atomized powders was carried out at the laboratory facilities of the Nanocentre of NRC "Kurchatov Institute" - CRISM "Prometey" using the RussianSLM FACTORY unit with a solid-state ytterbium laser (figure 2). To select the best melting parameters, the laser power and the scanning speed were varied from 150 to 195 W and from 800 to 1000 mm/s, respectively, so that the heat input was maintained approximately constant.
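The statement that these power and speed ranges keep the heat input roughly constant can be checked with the linear energy density E = P/v, a common SLM scaling. The pairings below are illustrative, since the paper reports only the ranges, not the exact (P, v) combinations; hatch spacing and layer thickness are not given, so the fuller volumetric energy density cannot be evaluated here.

# Linear energy density E = P / v for the reported SLM parameter ranges.
# P in W, v in mm/s; E in J/mm. The (P, v) pairings are illustrative only.
for P, v in [(150, 800), (165, 850), (180, 925), (195, 1000)]:
    print(f"P = {P:3d} W, v = {v:4d} mm/s -> E = {P / v:.3f} J/mm")

Sweeping the endpoints together keeps E between roughly 0.19 and 0.20 J/mm, i.e., nearly constant, which is the stated intent of the parameter selection.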
Structural studies were performed on thin sections by metallography using an Axiovert light microscope and a Tescan Lyra 3 scanning electron microscope equipped with an Oxford Instruments Symmetry electron backscatter diffraction system with quantitative image processing. The magnetic parameters Br, Hcb and (BH)max were measured with hysteresisgraphs and a milliteslameter. Experimental results and discussion It can be considered established that when laser additive technologies are used, a fine-grained composite material with micro- and nanostructured objects is obtained from metal powders [6]. According to the ternary isotherms, complex structural and phase transitions initiated by concentration inhomogeneities are possible in Fe-Cr-Co composites (figure 3). The heterogeneity of the chromium distribution is noteworthy, chromium being one of the strong carbide-forming elements that determine the strength and plastic properties through the formation of hardening particles. Sequential quasi-periodic processes of melting and solidification may be accompanied by the dissolution of carbide particles, forming dendritic cells of different sizes with repeated precipitation of nanoparticles of variable stoichiometric composition. At each level of the structural hierarchy, nanoparticles that are crystallographically incompatible with one another locally combine into ensembles at sites of epitaxial growth owing to coherence effects. The stability of atomic clusters is determined by the type and strength of the interatomic bonds, the temperature and the nearest environment: the smaller the particle and the lower the temperature, the higher its stability. The reaction-diffusion kinetics can be written as $\frac{\partial c}{\partial\tau}=\frac{\partial^2 c}{\partial x^2}-k\,f(c)$ (1). Here x = X/a (X is a coordinate measured from the surface of the powder particle), a is a characteristic length scale, and f(c) is the kinetics function. Depending on the type of kinetics function, the distribution of reagent concentrations within the reaction volume may vary. In [7] it was assumed that in the zone of the laser beam, during the acts of melting and crystallization, the temperature of the powder ranged from 0.24 to 0.34 of the melting temperature T/T_melt. The identified structures indicate micron-sized objects that do not correspond to the nanoscale range. Consequently, the question of their formation should be attributed to the initial stages of the crystallization of the melt. Therefore, it is necessary to characterize the observed objects on the micrometre scale, considering not the entire set of crystal atoms but its small parts: aggregates, clusters and imperfect crystals up to 5 μm. The structural inhomogeneity in the form of objects of different morphology detected in the thin sections is a fundamental property of the nanoscale state, the reason for which lies in the quantum properties of the system and the corresponding structure. Therefore, the control of nanoscale processes should be based on the spin nature of the electronic subsystem interaction, in which the motion of the magnetization vector has the form (equation 2): $\frac{d\mathbf{M}}{dt}=\gamma\,\mathbf{M}\times\mathbf{H}$ (2). Here M is the magnetic moment vector per unit volume, H is the static magnetic field vector, and γ is the gyromagnetic ratio, i.e., the ratio of the magnetic moment to the angular momentum. The size of the domains is not a constant of the substance but is determined by the properties of the sample.
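Equation (2) describes Larmor-type precession: the component of M along H and the magnitude |M| are conserved, so the field reorients but does not relax the moment. A minimal numerical sketch (our own, with arbitrary unit values) makes this visible.

import numpy as np

# Integrate dM/dt = gamma * (M x H), equation (2), with arbitrary unit values.
gamma, H = 1.0, np.array([0.0, 0.0, 1.0])   # static field along z
M = np.array([1.0, 0.0, 0.5])
dt, steps = 1e-3, 10000
m0 = np.linalg.norm(M)
for _ in range(steps):
    # Midpoint (RK2) step keeps |M| nearly constant, as the exact flow does.
    k1 = gamma * np.cross(M, H)
    k2 = gamma * np.cross(M + 0.5 * dt * k1, H)
    M = M + dt * k2
print("Mz (conserved):", M[2])                    # component along H is invariant
print("|M| drift     :", np.linalg.norm(M) - m0)  # ~0: precession, not relaxation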
In terms of optimizing the structure, a possible model limiting the domain growth in the form of a logistic function corresponds to the Malthus-type evolutionary equation (3): $\frac{dX}{dt}=X-X^2-kY$ (3). Here X is the magnetization due to the distribution of spins, Y is the size of the single-domain particles, and k < 1. Under the action of the external field, the increase in the resulting magnetic moment is due to an increase in the volume of the domains with rotation of the magnetization vectors toward the direction of the external field [8]. But in materials consisting of small grains (< 1 μm), the formation of a domain structure is energetically unfavourable. Therefore, it is assumed that in the SLM manufacture of magnets, during the short-term acts of melting and crystallization, the formation of particles proceeds through their substitutes, clusters. The formation of clusters is an unstable physical process in the thermodynamic sense. According to preliminary estimates for the method used, the heating rate of the powder up to melting is about 10^4 °C/s in the laser focus zone, and the cooling rate, set by the recurring heating cycles, is about 10^2 °C/s, regardless of the variation of the operating modes of the RussianSLM FACTORY unit in injected laser power and scanning speed. In fast acts of melting and crystallization, mass transfer can proceed not only by the diffusion of atoms as a whole but also with the participation of electronic shells (s, p and others) with suitable phases of the wave functions (figure 5), involving collectivized electrons in the periodic lattice field and valence effects due to spin splitting: for example, the overlap of the outer s, p or d orbitals responsible for the formation of the magnetic properties. Therefore, judging by indirect features, it can be assumed that the onset of crystallization is concentrated in the vapor cloud during desublimation at the surface of the laser beam focus [9].
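Returning to equation (3), a quick numerical integration (our own sketch; Y is held fixed because the paper gives no evolution law for it, and all values are illustrative) shows the logistic saturation that motivates the model: X grows toward a plateau whose height decreases as the coupling k increases.

import numpy as np

def integrate_eq3(k=0.5, Y=0.2, X0=0.3, dt=1e-3, steps=20000):
    """Forward-Euler integration of dX/dt = X - X^2 - k*Y, equation (3).
    Y (single-domain particle size) is treated as a fixed parameter; values
    are illustrative only."""
    X = X0
    for _ in range(steps):
        X += (X - X * X - k * Y) * dt
    return X

# Plateau = larger root of X - X^2 - k*Y = 0, i.e. (1 + sqrt(1 - 4*k*Y)) / 2.
for k in (0.1, 0.5, 0.9):
    print(f"k = {k}: X_final = {integrate_eq3(k=k):.4f}, "
          f"fixed point = {(1 + np.sqrt(1 - 4 * k * 0.2)) / 2:.4f}")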
2022-03-09T18:51:36.520Z
2022-03-01T00:00:00.000
{ "year": 2022, "sha1": "d6497d39b113df3b8878d6d452f7d8c3d1a3792d", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1088/1742-6596/2182/1/012084", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "ac2c6f87e0b2fe1a52108ff91e02470064782261", "s2fieldsofstudy": [], "extfieldsofstudy": [] }
55321706
pes2o/s2orc
v3-fos-license
College-School Dialogue and Mentoring in Teacher Training Programmes in Zimbabwe Globally, mentoring has been recognized as an effective approach in the professional training and development of teachers. Most importantly, in Zimbabwe, mentoring has been widely adopted as one of the teaching practice strategies by teacher training colleges and schools. Good quality mentoring in schools makes an important contribution to developing professional skills, especially for student teachers (mentees), and ultimately ensures good quality learning experiences for learners. The purpose of this study was to establish the extent to which effective dialogue between colleges and primary schools can enhance the effectiveness of mentoring strategies in training teachers on teaching practice. Interviews, questionnaires and focus group discussions were adopted in this study to establish the importance of dialogue between colleges and schools in strengthening the effectiveness of mentoring as a training strategy. The study concluded that the lack of mutual sharing of ideas, skills, knowledge and information on mentoring between training colleges and schools has a negative impact on the quality of the graduate teacher. The study recommends the development of communication strategies that break the barriers between colleges and schools so as to enhance the effectiveness of the mentoring approach. Introduction and Background Globally, mentoring is not considered a new phenomenon, but it has recently been acknowledged as one of the most effective ways of developing people's knowledge in real contexts. What might be new could be the mode of implementation adopted by a particular country during a particular era. Whichever model is used, the universal goal is that mentors (experienced, knowledgeable persons) assist mentees (novice, inexperienced practitioners) in developing into acceptable, relevant practitioners in any field. Various models of mentoring include school-based mentoring, peer mentoring, group mentoring and one-to-one mentoring. The Zimbabwean teacher education system adopted school-based mentoring in training both primary and secondary school teachers, replacing the long-established Zimbabwe Integrated National Teacher Education Course (ZINTEC) and the conventional four-year programme, which stipulated that student teachers are allocated a class under their sole responsibility, with the head or deputy head playing a mentoring role. The 2-5-2 mode of training primary student teachers stipulates that students spend the first two residential terms in college, the next five terms in schools on teaching practice, and the last two terms in college as the final residential phase. During the teaching practice phase, the school-based mentors are accountable for the student teachers' learning. Mentors are expected to develop student teachers practically and professionally in collaboration with both the training college and the University of Zimbabwe, particularly the Department of Teacher Education (DTE), which is responsible for teacher training (Tomlison, 1995; Kasowe, 2013). Chiromo (1999) states that in Zimbabwe, school-based mentoring, where a student is attached to a qualified teacher, started in 1995. The mode of training specifies that during the teaching practice phase, student teachers are no longer in charge of a class as before, but are attached to an experienced teacher as a mentor in schools (Policy No.
1 of 2002). Student teachers are exposed to real teaching to avoid theorizing ideas (Kasowe, 2013). The objective of attaching students is to allow them to observe the mentor teaching, while mentors advise student teachers on matters relating to their professional development. In this training process, mentors are expected to work as partners with college tutors or lecturers (Hagger, Burn, & McIntyre, 1994). Supporting the idea, Chakanyuka (1998) goes beyond immediate circles of partnership and suggests that stakeholders need to work together closely. Thus, the model is expected to take on a collaborative character. Different lenses are used in determining the meaning of "mentoring". Mentoring is viewed as a developmental partnership through which one person shares knowledge, skills, information and perspectives to foster the personal growth of someone else (Sandura & Williams, 2002; Chakanyuka, 2006; Wright-Harp & Cole, n.d.). Definitions that stem from different contexts, whether African or Western, bear common aspects: primarily nurturing the potential of the whole person, widening the skills base and competencies, and building wisdom and the ability to apply skills, knowledge and experience to new situations. Zimbabwean teacher training school-based mentoring sought to achieve a variety of objectives, which include: -Exposing student teachers to the art of teaching through linking theory learnt in colleges to practice in primary schools. -Developing teachers with relevant pedagogical skills and preparing student teachers professionally, academically and socially. In pursuit of the foregoing objectives, mentors are to take the responsibility of ensuring the continued high levels of professionalism that teaching demands, developing an understanding of how students learn to teach, and entering the debate about the forms of professionalism that effective teaching processes demand (Furlong, Whitty, Barret, Barton, & Miles, 1994). In developed countries, school-based mentoring provides colleges an opportunity to participate in selecting suitable schools, as attachment is not done to every or any school (Smith & West, 1993). In the Zimbabwean context, colleges only choose the provinces where student teachers will be deployed, while the students themselves have to identify the schools. In most cases, students hunt for schools that are convenient for them in terms of social life at the expense of academic life. This situation could be attributed to the meagre government stipend students receive during the practicum phase. Often, students are forced to settle in areas where the cost of living is low. In most cases, such schools are academically impoverished. There is stiff competition between colleges for good urban schools. Students compete for placement at schools that are known to be good. The challenge is that such schools do not have the capacity to absorb the large numbers of students on teaching practice.
In urban or peri-urban areas, a single school may at times host students from more than one college. Each college has its own demands and expectations and its own teaching practice supervision styles; as a result, schools experience challenges, since there is no universal way of mentoring students. In some situations, high demand for placement in a school forces school heads to attach students to teachers who are neither experienced nor very competent. Such practice makes communication between colleges and schools an inevitable and indispensable component of the implementation of meaningful and effective school-based mentoring. Statement of the Problem The school-based mentoring model that has been adopted in the training of teachers in Zimbabwe has been acknowledged as an effective strategy in most contexts elsewhere in the world, but what is yet to be established is the degree of interface between colleges and schools. Despite schools' efforts in assisting the professional nurturing of student teachers, it appears that a knowledge gap exists between the training colleges and the mentoring schools. Processes of preparing and sustaining effective learning of mentees are not being communicated fully to schools. Colleges are not communicating effectively with the schools, specifically with the mentors, on mentoring roles, supervisory skills, and the expectations of the college and the qualification-awarding board. Existing literature on mentoring in the Zimbabwean context reveals a lack of systematic inquiry into the role of such communication in effective mentoring. This study attempts to fill that gap. Purpose and Importance of Study This study sought to establish the extent to which communication between teachers' colleges and mentoring schools can enhance the effectiveness of school-based mentoring as a training strategy for student teachers in primary schools. The study sought to achieve the following objectives: -Identify the communication strategies employed by training colleges and mentoring schools in implementing school-based mentoring programmes. -Evaluate the impact of communication on the performance of school-based mentoring. -Describe the challenges confronting colleges and schools involved in the programmes. It is hoped that results from this study will inform mentoring schools and teacher training colleges on how both can utilize communication as a tool to improve school-based mentoring. The government would also be made aware of strategies that can be implemented to ease the existing training challenges between mentoring schools and training colleges. The findings would also contribute to the existing literature, as it appears that most studies have focused mainly on challenges, perceptions, mentoring roles and the preparedness of mentoring personnel. Dialoguing Models According to Watkins and Whalley (1993), communication is needed at many levels and at various times during mentoring, as it is the process of imparting or interchanging ideas, thoughts, opinions or information by speech, writing or signs. Those involved in the communication need to clarify what is communicated, to whom and when, and then plan a method and means for various occasions, as people communicate different needs on different occasions. As a result, various communication models are employed when people communicate in different contexts for different purposes. The study that was carried out by Mennecke (n.d.)
showed that the linear model, where communication has a top-down structure, has little benefit for viable organisations, whilst the interactive model, which could take the form of conferences, seminars, debates or workshops, greatly impacts people's knowledge expansion. Stakeholders have room to share ideas, experiences, challenges and knowledge, and can work together in finding solutions to the challenges confronting them. Purpose of Dialoguing Mentoring is considered a learning process that benefits the mentee, the mentor, the practicum school and the training college. The identified stakeholders benefit in one way or the other. It is believed that the training colleges are a rich resource of theory, whilst the school offers much on practice (Mutemeri & Tirivanhu, 2004). Merging these loose ends would strengthen the skills and knowledge base of the beneficiaries of the programme. The merging of knowledge justifies the need for continual communication between these two different knowledge bases, the school and the training college. McIntyre, Hagger and Wilkin (1993) argue that there are no experts on how to do the job of mentoring students in initial teacher education. In order to minimize confusion and disparities, the beneficiaries have to share ideas, knowledge and skills so that the education system produces competent, efficient and knowledgeable products. The general trend acknowledges mentoring as an effective approach, but with reduced levels of success because of mentors not knowing their roles (Kasowe, 2013; Maphosa & Ndamba, 2012; Makura & Zireva, 2013; Mudzielwana & Maphosa, 2014). Various studies have been carried out in Zimbabwe, especially to assess the preparedness of mentors, the understanding of mentoring and the challenges associated with mentoring. In a way, such studies paved the way for meaningful communication between colleges and schools, because colleges need to know the quality of the mentors who offer a hand in training their students. For this reason, the studies suggested that colleges should improve their partnership with schools. Where a strong partnership exists, colleges' expectations and standards would be effectively communicated to the schools, particularly to the mentors. Sutherland, Scanlon and Sperring (2005) observed that improved partnership improves the quality of training. Mutemeri and Tirivanhu (2014), on the same note, concluded that where there is no collaboration between schools, colleges and awarding universities, mentoring is not helpful in assisting student teachers. Dialoguing Strategies Colleges need to communicate with schools by offering training workshops to enhance the quality of mentoring. Studies conducted by Ngara and Ngwarai (2012), Allen (2011), Mudzielwana and Maphosa (2014), Maphosa and Ndamba (2012), and Mutemeri and Ngwarai (2014), although conducted in different contexts, concur that through workshopping, mentors would come to know their roles in the mentoring programme. The studies suggest that workshops can be used as a communication strategy to convey to the mentors the information that enlightens them about their expected roles, in order for them to offer quality training through mentoring. Justifying the need for workshops, Furlong and John (1993) and McIntyre et al. (1993) had earlier argued that mentors in schools did not know how to do the job, because it was not only a demanding role but also quite different from what mentors had experienced before. They further stressed that there were no experts on how to do the job, hence the need for workshopping one another.
Mentoring is a process that involves supervisory roles; as such, mentors are expected to possess supervisory skills. McIntyre and Hagger (1993) argue that mentors need supervisory skills so that they are in a position to analyze, examine, and reflect on practice, situations, problems, mistakes, and successes, turning them into learning opportunities and identifying gaps, which implies that they need to be well read (Maynard, 1996). Orland-Barak and Hasin (2001) and Edwards and Collison (1996) also observed that mentors need to have up-to-date theoretical knowledge. As Bvukuvhani, Zezekwa and Sanzuma (2011) argue, theory informs practice and practice modifies theory. For mentors to possess the most needed knowledge and skills, they need to be involved in continuous improvement programmes (Hollingsworth in Fullan, 1991). When colleges offer such programmes to school teachers, this can be considered a mode of communication of the kind called for in effective, relevant mentoring. Mutemeri and Tirivanhu (2014) made the same observation in their study on the preparedness of mentors: training workshops would create a platform for dialogue in which lecturers and schools share expectations. Colleges would train the mentors, pitching their content at real college standards to minimize disparities in advising or guiding students on training. Improvement programmes or workshops would ensure greater uniformity of experience for students (Mutemeri & Tirivanhu, 2014). It has been observed that mentors' competencies would be measured through the performance levels of their products, the student teachers under their mentorship.

Challenges Encountered

Although teachers take the active role in the development of teaching skills, operations are guided by the requirements and specifics of the training college. Ngara and Ngwarai (2012) discovered that colleges are not communicating fully with the schools and mentors. A study carried out by Chiromo (2007) buttresses these findings, as he also realized that students were not fully benefiting from school-based mentoring because it was badly done, with colleges and schools focusing on different issues. As a result, mentors and colleges do not speak the same language, leaving both students and mentors confused (Makura & Zireva, 2013). Such findings highlight the need for communication between colleges and schools.
In a study conducted at Midlands State University in Zimbabwe (Mutemeri & Tirivanhu, 2014), it was noted that although training mentors was a noble move, it was equally expensive for colleges. If colleges are financially constrained, they can resort to the use of circulars, student handbooks, newsletters, and portfolios, or make use of modern technology such as videos and CDs, to keep the schools updated and informed. Mountford (1993) cautions that, even if colleges pass on circulars, there is a need to follow up on mentoring circulars, as some are never read or referred to, resulting in fragmented communication. Mutemeri and Tirivanhu (2014) suggest the use of a school-based cluster model as a control measure and an assurance of interactional communication. In this model, colleges provide training to school mentors in clusters. Justifying the advantages of the model, the same study highlighted that the method is cost-effective, as lecturers can reach large numbers of mentors on limited transport and accommodation costs. The most striking advantage is the dialogue it creates, which strengthens school-based mentoring, as policies, standards, practices, and challenges are discussed in real contexts. Furlong et al. (1988) distinguished four different levels of school-based training. Level one, which seems to be critical, is direct practice, which cannot be left to chance. They argue that only the teachers have access to that level of knowledge: it is they who know about particular children working on a particular curriculum in a particular school. When lecturers visit, they give generalized advice. That justifies the need for partnership and regular, continuous communication between training colleges and schools, to minimize reliance on generalized advice. Both systems need to really know each other and practice good communication, which is believed to be the issue of greatest concern for student benefit.

Methodology

Research objectives usually determine the research design to be adopted in a study. For this particular study, a qualitative research design was adopted, as the researcher wanted to acquire the views of participants on whether dialogue could enhance school-based mentoring in the training of student teachers in Zimbabwe. Furthermore, a detailed explanation of the dialoguing practice currently existing in those systems needed to be given and justified, hence the need for a qualitative design and an interpretive paradigm.

The study had a sample of forty participants from three schools and one teacher training college, who were purposively selected on the basis of their roles. Ten participants were selected from each of the categories of mentees, mentors, school heads, and college lecturers. The schools and the college were conveniently selected, as those in accessible environments were chosen to minimize costs. Gender was taken into consideration, but the study failed to strike a balance, resulting in more females than males participating. It was also discovered in this research that most mentors were female teachers, while school heads were mostly male. Data were collected using a combination of questionnaires, semi-structured interviews, lived experiences, and document analysis.
The study was also ethically informed. Participation was on a voluntary basis, and consent forms were completed before participation in the study. Permission was granted before entering the schools. The information presented does not bear the names of participants or institutions, in order to maintain confidentiality and anonymity.

The findings presented were drawn from the completed questionnaires, individual interviews, and focus group discussions, which were carried out as planned. Both the questionnaires and the interview guide had different questions crafted according to participants' roles, all, however, focusing on the issue of dialogue. The findings in this study suggest that dialogue does exist between the university, college, and schools, though with variations.

College and School Practices and Cultures

Participants agreed that colleges and schools operate as separate entities in terms of knowing their own cultures and practices. The systems are dominated by an individualistic approach, such that neither the schools nor the colleges can claim to know the culture of the other. All the lecturers confirmed that they prepared students for mentoring before they left the college for teaching practice in different schools with different, or even contrasting, cultures. For the preparation to be relevant, McIntyre, Hagger and Wilkin (1993) posit that colleges have to know the cultures and practices of the schools so that they can prepare student teachers to fit in effectively. If colleges simply generalise their preparatory process, banking on bookish knowledge, it will not benefit the students much. Students may receive knowledge, skills, and values that are at odds with the lived cultural experiences of the schools where they are attached, and they will waste time trying to adapt to the new environments. In some cases, the study discovered that host or mentoring schools had been reduced to battlegrounds for conflicting practices and cultures, especially where the schools hosted student teachers from different colleges. The challenge is deepened by the fact that the schools and colleges are guided and governed by policies from different ministries: the colleges fall under Higher and Tertiary Education, while the schools fall under the Ministry of Primary Education. Mountford (1993) and Allen (2002) also argue that not all schools are capable of mentoring students, and as a result they need to be carefully chosen with the help of education officials. In practice, careful selection would suffice in situations where the colleges dialogue with the schools on a regular basis. In Zimbabwe, even if the selection of schools is done by the students, the colleges approve the placement. Knowledge of schools requires having an idea of a school's performance track-record and of both staff qualifications and competency. Such information would be used as a selection tool to distinguish schools that qualify to mentor students from those to be excluded. If the colleges really knew the schools, it would be easier for a college to advise students not to attach themselves to schools with a bad reputation, hence making mentoring more effective and helpful. Students would be directed into schools that are supportive and that exhibit a reasonable learning culture aimed at achieving high learning standards.
Considering mentoring to be part of the training process, no college would like students to be attached to schools with a history of failure; communication is therefore critical, as schools play the much larger part in initial training (McIntyre et al., 1993). The ideal situation has to be reciprocal, as schools need to know fully the performance of the feeding college as well. In the Zimbabwean education system, as no school is allowed to refuse to take part in training students, that knowledge would guide the schools on where to pitch the mentoring, given the diversified nature of the students from different colleges.

College and School Policies

Responses from mentors, school heads, and lecturers confirm that for effective mentoring to be implemented, both systems need to know what really happens within each context, which currently only partially exists. Policies need to be standardized within school and college partnerships and communicated openly. The suggestion from the participants buttresses Sampson's (1994) idea that colleges and schools should have shared understanding and expectations. Mutemeri and Tirivanhu's (2014) study results on the preparedness of mentors were also confirmed, as they stated that policies need to be shared to help mentors know their roles and avoid confusing students. Mountford (1994), in support of the idea, claims that knowing policies is only possible where the partners' relationship is an open and honest one. Chakanyuka (2006) observed that the policies that colleges and schools used were interpreted differently, resulting in students receiving conflicting advice. There is a need for regular dialogue between the parties on matters such as their own policies, standards, and even assessment criteria, as differences here adversely affected effective mentoring. If the sharing of policies is done carefully, Frost (1994) claimed, it would relieve both institutions of producing para-professionals who are able to reproduce little more than what they have observed in their mentors.

College and Mentor Dialoguing

All interviewed mentors acknowledged their role as critical implementers of school-based mentoring. This supports earlier studies in Zimbabwe emphasising the need for mentors to know their roles, to have the skills to develop efficient teaching practitioners, and to be competent teachers from whom students can observe and learn. School heads, mentees, and lecturers also indicated that mentors at times fail to help students not because they are incompetent but because of intergenerational incongruence: mentoring is a different approach from what most mentors received during their own training. Although participants acknowledged the work done by the college in rolling out workshops and seminars in different districts, they still expressed dissatisfaction, as all the workshop items and activities were decided upon by the "owners of the programme". School heads and mentors observed that this approach had a "top-down" flair and adversely affected their participation. The mentoring programme, as a form of exchange or interface, was not based on either mutual dependency or reciprocal causation (Burgleman, 2002).
Mentors and school heads even pointed out that at times they were involved in activities that did not address the challenges confronting them in mentoring. Moswela (2006) cautions that for any improvement or development programme to be relevant, it should address the problems teachers are actually facing in schools rather than anticipated challenges. Real challenges can only be known to the colleges or schools if purposeful dialogue exists between these systems. This has recently been reinforced by Mutemeri and Tirivanhu (2014), who noted that to enhance preparedness, colleges should embark on school-based training models in order to reach the mentors in their real workplaces.

Lecturers and mentors described communication between the college and schools as fragmented and, at the same time, inconsistent. The college only receives school supervision reports during the last phase of teaching practice, when it is compiling the teaching practice marks for all students. What happens on a daily basis during the other four terms is not known at college level. Mentors argued that such a practice results in colleges getting shocking results at critical teaching practice examination moments. They expressed the need for regular dialogue, especially on students' performance, citing that this would let colleges really know their students, rather than relying only on college supervision, which takes place at most once per term and is at times done in a fly-past manner because of time constraints (Yeomans & Sampson, 1994). Lecturers explained that when they get to schools for supervision purposes, the mentors automatically become co-supervisors. Consistent with the expectations and practice of that particular college, most mentors feel challenged to take up the supervision roles. When it comes to the conferencing done after lesson observation between the mentee, the mentor, and the lecturer, potentially the most beneficial form of dialogue, mentors in most cases take the very passive role of just listening to the lecturer's comments instead of also giving feedback. When asked to justify such a stance, mentors expressed the fear of passing on conflicting comments, as some had never been exposed to the standardization process or the supervision criteria. The foregoing discussion shows that dialogue is inevitable, as it has been found to be an effective way of empowering the stakeholders involved in the mentoring programme. As illustrated in Figure 1, there are various patterns and formations of dialogue experienced between the university, the college, and the mentoring school. The diagram shows the school as the weakest link in the matrix.

[Figure 1: Patterns of dialogue between the school, the university, and the college.]

The study cited a number of challenges associated with the communication between colleges and schools performing mentoring roles. Both systems stated that mentoring requires a widened financial base, which concurs with Mutemeri and Tirivanhu's (2014) study findings. The college made reference to some of the workshops it rolled out in different districts and indicated that both the facilitators and the participants needed financial assistance for travel and food. For that reason, the workshops have been reduced to irregular interactions, as financial commitment is a critical piece of the mentoring puzzle that determines the quality and effectiveness of the mentoring programme.
On the other end, the timing of the workshops is a hurdle: during the term, some teachers do not want to leave their classes, or the workshop dates may collide with crucial events on the school timetable. Holidays attracted many excuses, as some teachers would be engaged in various continuous improvement programmes. That also affected decisions on the length of the programmes and the roles of participants. Some schools would end up sending a representative in the hope that he or she would cascade the content to other staff members. Experience has shown that in most cases schools would not spare time for that, thus reducing the effectiveness of communication.

The attitudes of some school heads were cited as one of the impediments to effective mentoring. Some school heads are non-responsive to communication from the college. As a result, such schools are not in a position to meet or implement mentoring according to the college's standards or expectations, in turn disadvantaging the student teachers under their mentorship.

Conclusions and Recommendations

Regular and well-programmed knowledge-intensive dialogue can take mentoring to greater heights. The absence of knowledge-intensive interactions or dialogue between the two institutions reduces their capacity to unlock real value from the mentoring programme. Lack of communication has weakened the effectiveness of mentoring as an integrative or collaborative programme. The study findings support an earlier observation made elsewhere (Sherwood & Govin, 2008) that effective knowledge-transfer communication was stifled by divergence in institutional missions and cultural preparedness: schools are results-focused, while colleges seek more the creation and development of a future professional school teacher.

According to Furlong and Maynard (1993), mentoring must be built on a clear understanding of the learning process it is intended to support. They further claim that if mentoring is not properly done, the training will be of the "sink or swim" variety. One of the best ways to mitigate this is to adopt an "all-stakeholder approach" to making specific roles, regulations, practices, standards, and expectations known. The dialogue should not only exist but should be open and honest (Mountford, 1994), so that students, as the main clients of the process, benefit at all levels. Colleges are the institutions responsible for training students, and schools are guided by their standards, policies, and expectations. Colleges are urged to vary their communication models and employ strategies such as student handbooks, mentoring modules, CDs (for example, on lesson delivery or the roles of mentors), videos, newsletters, and video conferencing where the environment allows, without discarding the usual workshops, circulars, and handouts. Variation would capture users' interests and abilities and meet various learning styles.
The study also concluded that dialogue between colleges and primary schools should create room for mentors to know their roles, to acquire mentoring skills, and to gain the much-needed theoretical knowledge, which they currently seem to possess only partially. Mentors and schools need that knowledge to blend with the pedagogical knowledge they already possess, so that schools produce teachers who are developed in total. Thus, colleges can take the lead in training teachers in the mentoring process. To make it more viable and effective, mentoring could be included among the development programmes offered in various universities or colleges (Watkins & Whalley, 1993). The ministry, however, should also come up with a utilisation plan so that all those who go through the programme are given the chance to use the learnt skills and are remunerated. A "triple helix model", in which the three increasingly interfacing spaces, colleges, schools, and universities, engage in knowledge-intensive interactions, is recommended. Lastly, the findings suggest revisiting the whole mentoring package (Michael, 2008), thus paving the way for future research to evaluate the communication styles and arrive at the most effective method of keeping all mentoring stakeholders informed, so as to enhance effective school-based mentoring in the Zimbabwean context.
A Novel Loop: Mutual Regulation Between Epigenetic Modification and the Circadian Clock

In response to periodic environmental fluctuations generated by the rotation of the earth, nearly all organisms have evolved an intrinsic timekeeper, the circadian clock, which can maintain approximately 24-h rhythmic oscillations in biological processes, ultimately conferring fitness benefits. In the model plant Arabidopsis, the core mechanics of the circadian clock can be described as a complex regulatory network of three feedback loops composed of core oscillator genes. Transcriptional regulation of each oscillator gene is necessary to maintain the structure of the circadian clock. As a gene transcription regulatory mechanism, the epigenetic modification of chromatin affects the spatiotemporal expression of multiple genes. Accumulating evidence indicates that epigenetic modification is associated with circadian clock function in animals and plants. In addition, the rhythms of epigenetic modification have a significant influence on the timing of molecular processes, including gene transcription. In this review, we summarize recent progress in research on the roles of histone acetylation, methylation, and phosphorylation in the regulation of clock gene expression in Arabidopsis.

EPIGENETIC REGULATION OF THE CIRCADIAN CLOCK

The circadian clock is a ubiquitous molecular oscillator that provides basic timing information and regulates biochemical, physiological, and behavioral processes. In plants, the circadian clock regulatory mechanism regulates responses to the environment at the transcriptional level as well as at the physiological and biochemical levels. This rhythmic oscillation of nearly 24 h decreases the unnecessary consumption of energy and organics, while increasing competitive productivity and viability. Multiple interlocked transcriptional feedback loops formed by transcription factors are central to circadian clock function. In the model plant Arabidopsis, the circadian clock system can be described as a complex regulatory network of three loops. The core loop is composed of three important genes, namely, CIRCADIAN CLOCK ASSOCIATED 1 (CCA1), LATE ELONGATED HYPOCOTYL (LHY), and TIMING OF CAB EXPRESSION 1 (TOC1).
In this loop, CCA1 and LHY inhibit the expression of TOC1, whereas TOC1 directly represses CCA1 and LHY, thereby establishing a complete regulatory process. As a DNA-binding transcription factor, TOC1 binds directly to the promoters of CCA1 and LHY to repress their expression. CCA1 and LHY, two MYB transcription factors that are active in the morning of the subjective day, repress the expression of PSEUDO RESPONSE REGULATOR 5, 7, and 9 (PRR5, 7, and 9), whereas PRR5, 7, and 9 in turn suppress the expression of CCA1 and LHY. The evening complex (EC) includes three other key clock components, namely, LUX ARRHYTHMO, EARLY FLOWERING 3 (ELF3), and EARLY FLOWERING 4 (ELF4). The EC can directly repress PRR9. In the evening loop, TOC1 suppresses the expression of GIGANTEA (GI), and GI promotes TOC1 expression, whereas the transcription of GI is inhibited by CCA1 and LHY. These three negative feedback loops, together with the input-output pathway of the circadian clock, constitute a complex regulatory network that controls various physiological and crucial metabolic processes in plants (Huang et al., 2012; Nagel and Kay, 2012; Aguilar-Arnal and Sassone-Corsi, 2015; Oakenfull and Davis, 2017).
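As a rough illustration of how interlocked transcriptional repression can sustain self-perpetuating rhythms of the kind described above, the sketch below numerically integrates the classic three-gene "repressilator" ring, in which each gene represses the next. This is a didactic analogy only, not the actual Arabidopsis network topology, and the equations and parameter values are illustrative assumptions rather than measured plant kinetics.

```python
# Minimal sketch: a ring of three mutually repressing genes produces
# sustained, phase-staggered oscillations. This is the generic
# "repressilator" motif, used here only as an analogy for interlocked
# clock loops; beta, n, and the initial levels are arbitrary choices.

def simulate(beta=10.0, n=3.0, dt=0.01, steps=60000):
    x = [1.0, 1.2, 1.5]          # slightly asymmetric starting levels
    history = []
    for t in range(steps):
        # gene i is repressed by gene (i - 1) in the ring (Hill term),
        # and each product decays linearly
        dx = [beta / (1.0 + x[i - 1] ** n) - x[i] for i in range(3)]
        x = [x[i] + dt * dx[i] for i in range(3)]
        if t % 5000 == 0:
            history.append([round(v, 2) for v in x])
    return history

for row in simulate():
    print(row)   # the three components peak in staggered phases
```

With these assumed parameters the symmetric steady state is unstable, so the system settles into a limit cycle; the qualitative point is that chained repression alone, without any external driver, can generate the free-running rhythmicity that the plant clock exhibits.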
The nucleosome is a repeating unit of chromatin fiber that consists of 147 base pairs (bp) of genomic DNA wrapped around an octamer of histones. A standard octamer of histones comprises two copies of each of the four canonical histone proteins: H2A, H2B, H3, and H4. Each histone possesses a highly basic N-terminal tail, which protrudes from the surface of the histone octamer and serves as a substrate for several enzymes that introduce different post-translational modifications, including acetylation, phosphorylation, and methylation. Since histone post-translational modification constitutes an extra (epi) layer of gene regulation beyond that of the DNA sequence, this mechanism is termed epigenetic. Epigenetic regulation is necessary for survival and reproduction in unpredictable environments (Wang et al., 2016). Recent studies have indicated that circadian oscillations in plants need to be monitored to facilitate the modification of oscillator regulatory mechanisms according to circumstances. Interestingly, some of the transgenerational plasticity of the plant circadian clock does not involve the alteration of clock gene DNA sequences, but instead manifests as reversible changes in the chromatin structure that determines the expression of the core oscillator genes. Chromatin reshaping depends on epigenetic factors, such as histone post-translational modifications/replacements, which create a flexible loop of gene regulation (Henriques and Mas, 2013; Baerenfaller et al., 2016; Kim et al., 2017; Hung et al., 2018; Lee and Seo, 2018; Stevenson, 2018). Here, we provide examples of clock gene regulation mediated via epigenetic alteration, and also discuss rhythmic epigenetic changes in plants as well as the contribution of circadian clock epigenetic modification to the processes of adaptation and acclimation in plants.

EPIGENETIC MODIFICATIONS REGULATE THE CORE OSCILLATORS

In Arabidopsis, the circadian clock is composed of three interlocking transcription-translation feedback loops. The central loop, referred to as the core oscillator, was first proposed a decade ago. This loop comprises the three transcription factors TOC1, CCA1, and LHY. The morning-expressed CCA1 and LHY inhibit the transcription of the evening gene TOC1. Conversely, at dusk, TOC1 represses the transcription of CCA1 and LHY (Huang et al., 2012; Oakenfull and Davis, 2017). Previous studies have shown that histone acetylation, methylation, and phosphorylation are associated with the transcriptional regulation of the core oscillator genes in the circadian clock.

EPIGENETIC MODIFICATIONS IN THE CORE LOOP

Expression of the circadian clock oscillator gene TOC1 is modulated by dynamic changes in histone deacetylation in the TOC1 promoter at dawn. The morning transcription factor CCA1 represses the expression of TOC1 by binding to the TOC1 promoter, which is accompanied by conditions favoring histone deacetylation in the TOC1 promoter (Ni et al., 2009; Huang et al., 2012; Nagel and Kay, 2012). Histone deacetylase (HDAC) activity is responsible for this histone deacetylation, which contributes to declining TOC1 expression near dusk. In a cca1/lhy double mutant, histone H3 acetylation (H3ac) in the TOC1 promoter was observed to be higher than that in the wild type, indicating that CCA1 has a strong inhibitory effect on TOC1 expression and that it antagonizes H3ac to decrease the abundance of TOC1 mRNA (Ni et al., 2009; Malapeira et al., 2012; Ng et al., 2017). Characterization of H3ac dynamics in the TOC1 promoter revealed an interesting regulatory mechanism. Studies examining CCA1-overexpressing lines indicated that a decrease in H3ac is associated with the repression of TOC1, whereas analysis of a cca1/lhy double mutant revealed an increase in H3ac in the TOC1 promoter. These observations indicate that CCA1 represses TOC1 expression by binding to the TOC1 promoter. In addition, the rhythms of histone H3 deacetylation have been found to be negatively correlated with TOC1 transcript levels. HDACs can remove acetyl groups from lysine residues, thereby generating hypoacetylated histones, which promote chromatin fiber compaction and gene repression. In plants treated with the HDAC inhibitor trichostatin A, TOC1 is more highly expressed after dusk (Perales and Mas, 2007; Malapeira et al., 2012), thereby indicating that the declining phase of TOC1 is induced by HDAC activity. These results also suggest that CCA1, as a repressor of TOC1, might rely, at least in part, on the recruitment of HDACs to the TOC1 promoter (Henriques and Mas, 2013; Barneche et al., 2014). A further component contributing to chromatin modification in the TOC1 promoter is REVEILLE 8/LHY-CCA1-LIKE 5 (RVE8/LCL5), which affects the repression of TOC1. Similar to CCA1 and LHY transcription, RVE8 transcription peaks in the morning. Altered expression of RVE8/LCL5 in plants modifies the circadian period. Similar to CCA1, RVE8/LCL5 regulates the expression of TOC1 by binding to the TOC1 promoter; however, once bound, it promotes hyperacetylation of H3 in the TOC1 promoter and subsequently activates the expression of this gene. In contrast, CCA1 inhibits the expression of TOC1 by promoting histone deacetylation. Thus, although their sequences and expression peaks are similar, RVE8/LCL5 and CCA1 have contrasting effects on the regulation of TOC1 transcription (Farinas and Mas, 2011; Barneche et al., 2014; Horak and Farre, 2015). Recent studies have shown that the rhythm of histone H3K4 trimethylation (H3K4me3) is related to the oscillatory expression of the core clock genes. Analysis of clock gene expression in seedlings treated with an H3K4me3 inhibitor has revealed that, compared with control seedlings, the circadian rhythms of CCA1 and TOC1 display a longer period of expression.
Therefore, it is conceivable that H3K4me3 is required to ensure correct expression peaks of the clock genes. The oscillatory waveform of H3K4me3 accumulation in core oscillator gene promoters has been shown to have a phase delay compared with that of H3ac, indicating that H3K4me3 might have a different regulatory mechanism whereby it regulates the expression of clock genes (Perales and Mas, 2007; Ni et al., 2009; Barneche et al., 2014). The successive accumulation of H3ac (H3K56ac and H3K9ac), H3K4me3, and H3K4me2 is known to exhibit circadian rhythmicity. The inhibition of acetylation and H3K4me3 suppresses the expression of the clock genes. Blocking H3K4me3 enhances the binding activity of circadian clock inhibitors, indicating that H3K4me3 could be a marker of the transformation from activation to inhibition. Specifically, the histone methyltransferase SET DOMAIN GROUP 2/ARABIDOPSIS TRITHORAX-RELATED 3 (SDG2/ATXR3) may directly or indirectly contribute to oscillatory gene expression and H3K4me3 accumulation, and altered expression of SDG2/ATXR3 has been observed to modify the binding activity of certain clock repressors (Malapeira et al., 2012; Henriques and Mas, 2013; Barneche et al., 2014). LYSINE-SPECIFIC DEMETHYLASE 1-LIKE 1 (LDL1) and LDL2 interact with CCA1 and LHY to repress the expression of TOC1. Chromatin immunoprecipitation sequencing (ChIP-Seq) analysis has shown that many circadian genes regulated by CCA1 are targeted by LDL proteins. LDL1 and LDL2 interact with the histone deacetylase HDA6, and the LDL1-HDA6 complex binds directly to the TOC1 promoter and represses TOC1 expression by increasing histone deacetylation and H3K4 demethylation. These findings have contributed to elucidating a pathway through which histone modifications regulate clock genes and the inner network of core oscillator genes (Hung et al., 2018).

EPIGENETIC MODIFICATIONS OF THE OTHER LOOPS

The rhythmic expression of the core oscillator gene TOC1 is preceded by the oscillation of H3ac accumulation in its promoter. Moreover, H3ac accumulation parallels the expression of almost all circadian clock components, including CCA1, LHY, PRR9, PRR7, LUX, and TOC1. In this regard, chromatin immunoprecipitation quantitative PCR (ChIP-qPCR) analysis has revealed that H3K9ac, H3K14ac, and H3K56ac are involved in the transcriptional activation of clock genes (Perales and Mas, 2007; Malapeira et al., 2012). Recent studies have also revealed that HISTONE ACETYLTRANSFERASE OF THE TAFII250 FAMILY 2 (HAF2) is involved in circadian clock regulation. The HAF2 protein facilitates H3ac accumulation in the LUX and PRR5 promoters to activate gene expression at midday, with the expression of HAF2 being regulated by CCA1. Future studies are expected to further elucidate the HAF2-mediated temporal coordination of late-day and evening-expressed genes (Lee and Seo, 2018). The expression of JUMONJI C DOMAIN-CONTAINING 5 (JMJD5/JMJ30), which peaks in the evening, is regulated by the circadian rhythm. In addition, the regulation of JMJD5/JMJ30 is jointly controlled by CCA1 and LHY. LHY can suppress the expression of JMJD5/JMJ30 by directly binding to the JMJD5/JMJ30 promoter. Furthermore, the expression of CCA1 and LHY under high-intensity red light has been found to be lower in a jmjd5/jmj30 mutant than in the wild type, indicating that JMJD5/JMJ30, CCA1, and LHY form a negative feedback loop in response to red light (Jones et al., 2010).
Although histone H3 phosphorylation is known to play a role in the regulation of gene transcription, there have been few reports regarding the function of histone H2A phosphorylation in the promoters of circadian clock genes. Recent work has, nevertheless, demonstrated that MUT9P-LIKE KINASE 4 (MLK4) induces GI expression (Su et al., 2017). In this process, MLK4 initially interacts with CCA1 at the GI promoter. CCA1 in turn recruits YAF9a, resulting in the accumulation of the histone variant H2A.Z and the acetylation of H4 at GI, thereby inducing GI expression (Su et al., 2017). The monoubiquitination of histone H2B (H2Bub) is widely observed in plant clock genes, and H2Bub has been shown to have substantial effects on the oscillatory expression of circadian clock genes. The loss-of-function mutant histone mono-ubiquitination1 (hub1-1) exhibits reduced H2Bub accumulation. The oscillation of CCA1 and ELF4 is dampened in hub1-1, and the LHY expression phase is also advanced. Moreover, the hub1-1 mutation may enhance the expression of TOC1 by reducing the inhibitory activity of CCA1. H2Bub appears to act as a positive regulator of TOC1, PRR7, and GI expression in etiolated seedlings exposed to light. On the basis of the evidence obtained to date, it appears that histone H2B may affect a large number of clock components, the mRNA abundances of which are tightly regulated by intense oscillations (Bourbousse et al., 2012; Barneche et al., 2014) (Table 1). DET1 may act as a transcriptional corepressor of CCA1 and LHY to repress TOC1 transcription. Similar to cca1/lhy mutants, a det1-1 mutant was shown to exhibit a shorter period of TOC1 oscillations. Given that DET1 interacts with H2Bub, it may repress clock genes via H2Bub (He et al., 2011; Lau et al., 2011; Kang et al., 2015). A further study has revealed that PRR5, 7, and 9 can interact with TOPLESS/TOPLESS-RELATED PROTEINS (TPL/TPR) and HDA6 to form a complex at the promoters of CCA1 and LHY, thereby repressing the expression of these two genes (Wang et al., 2010, 2013).

THE RHYTHMIC EXPRESSION OF HISTONE-MODIFICATION ENZYMES

Histone modifications, including acetylation-deacetylation and methylation-demethylation, play a key role in regulating the expression of clock genes, and in this regard, previous studies have indicated that the rhythmic expression of epigenetic enzymes is correlated with the daily rhythms of epigenetic modification and the expression of downstream genes (Loenen and Raleigh, 2014; Baerenfaller et al., 2016).

HISTONE METHYLTRANSFERASES

Methylation of lysine residues in the H3 histone tail is a key mechanism contributing to the regulation of chromatin state and gene expression, and is mediated by a family of enzymes with a SET domain. One of the main functions of these enzymes is to regulate H3K4 di- and tri-methylation, which has been discovered in the TOC1 promoter and shown to play a role in the repression of TOC1 by CCA1 (Sanchez et al., 2010; Malapeira et al., 2012). The SET DOMAIN GROUP (SDG) protein family in Arabidopsis contains 49 members and can be divided into five classes based on activity and structure. The five members of class III SDG, ATX1-5, which are homologous to the Trithorax proteins of other eukaryotes, have been shown to participate in H3K4 methylation. Among these proteins, ATX1 (SDG27) is important for the trimethylation of H3K4.
Although ATX2/SDG30 shows sequence homology to ATX1, it exhibits H3K4me2 rather than H3K4me3 methylation activity, whereas ATX3/SDG14, ATX4/SDG16, and ATX5/SDG29 have been observed to affect thousands of H3K4me2 and H3K4me3 sites across the entire Arabidopsis genome. The SDG family of Arabidopsis also contains seven ATX-related (ATXR) proteins, among which ATXR7/SDG25 and ATXR3/SDG2 have functions similar to those of ATX. The function of ATXR3/SDG2 is comparable to that of ATX3/SDG14, ATX4/SDG16, and ATX5/SDG29, and it may regulate clock gene expression by modulating H3K4me3 in promoters. However, unlike the expression of ATX5/SDG29, which peaks in the morning, peak expression of ATXR3/SDG2 occurs at midday. The Diurnal database indicates that both ATXR3/SDG2 and ATX5/SDG29 have rhythmic expression (Table 2) (Malapeira et al., 2012; Chen et al., 2017). Three other important SDG proteins, SU(VAR)3-9 HOMOLOG 4 (SUVH4), SUVH5, and SUVH6, are histone H3 lysine 9 (H3K9) methyltransferases. SUVH4 and SUVH6 are responsible for maintaining the H3K9 methylation of inverted repeats during transcription, whereas SUVH5 is necessary for the accumulation of H3K9me2 DNA methylation. Recent studies have shown that HDA6 can interact with these three histone methyltransferases, and indicate that the C-terminal region of HDA6 is important for this interaction. In this regard, two phosphorylated serine residues, S427 and S429, have been identified in the C-terminal region of HDA6, and HDA6 phosphorylation (amino acid substitutions that mimic phosphorylated proteins) has been observed to lead to increased enzyme activity. Furthermore, mutation of S427 in HDA6 to alanine was found to abolish the interactions between HDA6 and SUVH5 and SUVH6, thereby indicating that the phosphorylation of HDA6 is important for its activity and function (Yu et al., 2017). ChIP-Seq results have also shown that the SUVH members display different DNA-binding preferences, deciphering the mechanism of sequence-biased non-CG methylation in plant methylomes. Currently, the involvement of SUVH4, 5, and 6 in the circadian clock is largely unknown; however, a previous study has shown that SUVH4, 5, and 6 affect H3K9me but not H3K4me in the TOC1 promoter. According to the Diurnal database (Table 2), SUVH4, 5, and 6 and HDA6 are rhythmically expressed, and given that SUVH4, 5, and 6 interact with HDA6, it is probable that they play a role in circadian clock regulation (Cho et al., 2012).

HISTONE DEMETHYLASES

Recent studies have demonstrated that histone methylation can be removed by at least two different types of enzymes, LSD1 and the JMJ proteins. As discussed above, JMJD5/JMJ30 is a component of the circadian clock in Arabidopsis, and the expression of JMJD5/JMJ30 peaks 2 h after midday (Jones et al., 2010; Hemmes et al., 2012). A further two JMJ proteins, JMJ20 and JMJ22, have also been shown to be involved in the regulation of clock genes. When the important clock input pathway gene PHYTOCHROME B (PHYB) is inactive, JMJ20 and JMJ22 are directly repressed by the zinc-finger protein SOMNUS (Lu et al., 2008; Cho et al., 2012). The Diurnal database indicates that JMJ22 is rhythmically expressed and that its expression peaks in the evening (Table 2).

CONCLUSION AND PERSPECTIVES

Epigenetic regulation of the circadian clock has recently been investigated via advanced molecular biology and genetic approaches.
The periodic expression of the core clock genes is regulated epigenetically at the chromatin level, and the modification of histones primarily leads to alterations in the transcriptional activity of clock genes. Interestingly, the deacetylation of H3ac is related to H3K4me demethylation, which directly connects histone acetylation with methylation. In addition, the phosphorylation of H2A by MLK4 directly regulates the accumulation of H2A.Z and the acetylation of H4. These observations indicate that epigenetic regulation plays an important role in the regulation of the circadian clock. We also highlight that certain epigenetic modification enzymes show rhythmic expression, suggesting that clock genes may regulate epigenetic modification enzymes (Su et al., 2017). In addition, histone acetylation is a reversible dynamic process that involves both HDACs and HISTONE ACETYLTRANSFERASES (HATs). Although HAF2 is known to regulate PRR5 and LUX (Lee and Seo, 2018), the involvement of other HATs in circadian rhythms remains unclear (Aquea et al., 2017). In the Diurnal database for Arabidopsis, we found that peak expression of HISTONE ACETYLTRANSFERASE OF THE GNAT FAMILY 5 (HAG5), HISTONE ACETYLTRANSFERASE OF THE CBP FAMILY 12 (HAC12), HAC1, HAF1, HAC2, HAC4, and HAC5 occurs in the morning, whereas peak expression of HAG2 and HAG3 occurs in the evening, and only the expression of HAC1 peaks near midday (Wang et al., 2014; Fina et al., 2017) (Figure 1). Accumulating evidence gained from studies on mammals shows that epigenetic modification is also important for the mammalian circadian clock. As in plants, histone acetylation and methylation regulate the mammalian circadian clock. Consistent with the expression rhythms of the clock genes, the histone acetylation marks H3K9ac and H3K27ac also display circadian rhythms (Ripperger and Schibler, 2006; Feng et al., 2011; Vollmers et al., 2012). Histone methylation (H3K4me3) oscillates rhythmically at the transcription start sites (TSSs) of clock genes (Le Martelot et al., 2012; Yue et al., 2017). In mammals, the rhythmic expression of DNA methyltransferases indicates that DNA methylation is involved in the transcription and regulation of clock genes (Benoit et al., 2013; Masri et al., 2015; Ng et al., 2017; Padmanabhan and Billaud, 2017; Kwapis et al., 2018). To date, however, no similar evidence has emerged in plants. Higher plants have three DNA methylation contexts, namely, CG, CHG, and CHH (where H is A, C, or T), among which the methylation of CG and CHG sites is most important for the regulation of gene expression. DNA methyltransferases can alter the DNA methylation level of CG and CHG sites (Underwood et al., 2018), and in the Diurnal database for Arabidopsis, we found that CHROMOMETHYLASE 3 (CMT3), CMT2, DNA METHYLTRANSFERASE 1 (MET1), MET2, DOMAINS REARRANGED METHYLTRANSFERASE 1 (DRM1), and DRM2 have rhythmic expression patterns, with peak expression of DRM1 and DRM2 occurring near midday, that of CMT3 and MET1 occurring in the evening, and only that of CMT2 occurring near midnight (Diurnal database; Xia, 2008; Wang et al., 2014). Although the mechanisms underlying the associations between epigenetic modifications and the circadian clock have recently been a focus of research, the relationships between certain epigenetic modification enzymes and the circadian clock remain undetermined.
Nevertheless, data obtained from ChIP-Seq analysis of the core oscillator genes in the Arabidopsis circadian clock indicate that many epigenetic modification enzymes are rhythmically expressed. These findings provide compelling evidence that epigenetic modification enzymes are directly regulated by core oscillator genes. Thus, we hypothesize that the circadian clock can directly regulate epigenetic modification enzymes and that these enzymes in turn feed back to the circadian clock, resulting in the mutual regulation of core oscillators and epigenetic modification (Figure 2). This potential output pathway might be an interesting topic for future study of the plant circadian clock. The oscillator genes that directly regulate the transcription of epigenetic modification enzymes are yet to be identified, the DNA methylation status of the core oscillator genes still needs to be determined, and the mechanisms underlying the epigenetic modification of the core circadian clock genes remain to be further elucidated.

DATA AVAILABILITY

Publicly available datasets were analyzed in this study. These data can be found here: http://www.mocklerlab.org/supplements.

AUTHOR CONTRIBUTIONS

WH conceived the article. SD wrote the first draft. LC and LG critically revised the article. All authors were involved in the revision of the drafted manuscript and have agreed to the final content.

FUNDING

This work was supported by the Pearl River Scholar Fund of Guangdong Province Universities and Colleges (to WH), a Research Team Project from the Natural Science Foundation of Guangdong Province (2016A030312009), and the NSFC-Guangdong Joint Fund (U1701232).
Toxic epidermal necrolysis induced by lansoprazole

Toxic epidermal necrolysis is a rare, severe cutaneous reaction, mostly caused by drugs. It affects the skin and mucous membranes, with involvement of more than 30% of the body surface. We describe the case of a young woman, previously healthy, who developed skin detachment of more than 90% of the body surface 15 days after being administered lansoprazole for peptic disease. The treatment consisted of discontinuation of the drug involved and early administration of intravenous human immunoglobulin, which led to a satisfactory outcome, substantiating the impact of early diagnosis and treatment on the morbidity and mortality of these patients.

INTRODUCTION

Toxic epidermal necrolysis (TEN) is a rare disorder, with significant morbidity and a mortality of over 30%. 1 It is characterized by extensive apoptosis of keratinocytes, leading to epidermal detachment and mucosal involvement. 2 Stevens-Johnson syndrome (SJS) and TEN represent severe variants of the same process, given their etiopathogenic, clinical, and histopathological similarities. These entities differ only in the percentage of body surface involved: detachment below 10% represents SJS, 10-30% the overlap of both, and above 30% characterizes TEN. 3 The main factors involved in the etiology are drugs, mainly antibiotics, anticonvulsants, the oxicam family of nonsteroidal anti-inflammatory drugs, and allopurinol. The proton pump inhibitors (lansoprazole and omeprazole) are considered of low risk. 4-6 Cases of TEN have been attributed to several new drugs, with those with a longer half-life posing a higher risk. The pathogenesis is not fully understood and involves the inability to detoxify reactive metabolites of drugs, genetic susceptibility, and immune factors related to cellular apoptosis. The main pathway of cell death in this condition is the interaction of the Fas receptor and Fas ligand on the surface of the keratinocyte, as Fas ligand expression is increased on the surface of keratinocytes in TEN. 7,8 The clinical picture begins, on average, one week after administration of the drug, with a range of 7 to 21 days on a first exposure. On re-exposure, the onset happens earlier, possibly within 2 days. 2,4 It is common for nonspecific symptoms, such as fever, sore throat, and stinging of the eyes and vagina, to precede the cutaneous manifestations by a few days. The first lesions tend to appear on the trunk, and are usually erythematous papules or purpuric macules, irregular in shape and size, which tend to coalesce. The progression of the disease, if the offending drug is not removed, occurs over 2 to 5 days or within hours; it rarely takes more than one week. The lesions become greyish red, there is intense necrosis of the whole epidermis, and flaccid blisters form, leaving large denuded areas. There is oral mucosal, ocular, and genital involvement in more than 90% of patients, with extensive and painful erosions that lead to lip crusts, odynophagia, photophobia, dysuria, and painful evacuations. 1,3 Systemic manifestations result from acute cutaneous failure, which causes water and electrolyte disorders, hypovolemia, renal failure, thermoregulatory imbalance, and a higher predisposition to sepsis. 1 There is a severity score for TEN (SCORTEN), which may be useful to assess the prognosis of these patients if calculated within the first 48 hours of onset (Table 1). 7 Factors such as lymphadenopathy, increased transaminases, and neutropenia also imply a worse prognosis. 1,3
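For concreteness, the sketch below computes a SCORTEN value from its seven one-point criteria and maps it to the approximate mortality strata commonly cited for the score (Bastuji-Garin formulation). The cutoffs and mortality percentages are taken from the general literature rather than from this article's Table 1, so they should be verified against the original publication; the example inputs are illustrative, not the reported patient's laboratory data.

```python
# Sketch of the SCORTEN severity score for TEN: one point per criterion.
# Cutoffs and mortality strata follow the commonly cited formulation and
# must be checked against the original source before any clinical use.

def scorten(age, malignancy, heart_rate, detachment_pct,
            urea_mmol_l, glucose_mmol_l, bicarbonate_mmol_l):
    points = sum([
        age > 40,
        bool(malignancy),
        heart_rate > 120,
        detachment_pct > 10,       # % body surface detached at admission
        urea_mmol_l > 10,
        glucose_mmol_l > 14,
        bicarbonate_mmol_l < 20,
    ])
    mortality = {0: "3.2%", 1: "3.2%", 2: "12.1%",
                 3: "35.3%", 4: "58.3%"}.get(points, "~90%")
    return points, mortality

# Illustrative values only (hypothetical, not the case patient's labs):
print(scorten(age=23, malignancy=False, heart_rate=125, detachment_pct=90,
              urea_mmol_l=6.0, glucose_mmol_l=5.5, bicarbonate_mmol_l=24))
# -> (2, '12.1%')
```

Note how the score deliberately weights systemic parameters alongside the extent of detachment, which is why a young patient with very extensive detachment but stable laboratory values can still fall into a low-mortality stratum.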
Management in the acute stage involves prompt identification and withdrawal of the culprit drug, supportive therapy in an intensive care unit or burn intensive care unit, and eventual specific drug therapy. Early ophthalmologic evaluation is important to avoid late complications, such as synechiae and amaurosis. 3,9 Systemic corticosteroids were the main therapy over decades. Currently, their use is controversial: some experts suggest there is an increased risk of sepsis and death, while others show that a short course of high doses at onset may be beneficial. 9,10 The administration of high doses of intravenous immunoglobulin (IVIG) seems to be a promising alternative, as it showed a reduction in mortality in some studies. Its probable mechanism of action consists of blocking the Fas receptor-Fas ligand binding, thus inhibiting the apoptosis of keratinocytes. The dose has not been established yet, but the literature shows a greater benefit with the early administration of high doses (2-3 g/kg divided over 3-4 days). 4,9,10 Other drugs may be used, such as cyclosporine, cyclophosphamide, and TNF-alpha antagonists. 4,7

CASE REPORT

A previously healthy, 23-year-old woman began to have ocular and vaginal itching, and later symmetrical erythematous macules appeared on the limbs, mainly the hands and feet. Within 7 days there was significant spread of the lesions, which evolved into blisters. She sought the emergency care unit, where prednisone 2 mg/kg was administered. After 2 days, she was transferred to our service. She reported use of lansoprazole starting 15 days before the appearance of the lesions. On examination, the patient was pale 2+/4+, febrile, and tachycardic. Epidermal detachment involved more than 90% of the body surface, and there were some areas of erosion (Figures 1 and 2). There was mild involvement of the oral mucosa and lips, with crust formation. She was admitted to the intensive care unit, lansoprazole was stopped, and intravenous immunoglobulin, 2 g/kg, was administered over 3 days, in addition to skin debridement and daily dressings. The patient showed significant re-epithelialization within 15 days (Figures 3 and 4). After two months, she presented only residual hyperchromic macules.

DISCUSSION

The proton pump inhibitors are drugs rarely related to the development of TEN, and only 5 cases have been reported in the literature to date. Nonetheless, because they are so often used, they may be overlooked as a cause of pharmacodermia. Our patient showed symptoms 15 days after the administration of lansoprazole and had nonspecific complaints 1 day before the onset of skin lesions, as described in the literature. However, the first lesions were on the extremities and there was palmoplantar involvement, which is not frequent. Although the patient presented with more than 90% epidermal detachment, there were no severe systemic complications, and the process resolved after withdrawal of the culprit drug and immunoglobulin administration. We believe that SCORTEN is useful, but in this
Comparison of different microbiological procedures for the diagnosis of Pneumocystis jirovecii pneumonia on bronchoalveolar-lavage fluid

The current diagnostic gold standard for Pneumocystis jirovecii is microscopic visualization of the fungus in clinical respiratory samples, such as bronchoalveolar-lavage fluid, defining "proven" P. jirovecii pneumonia, whereas qPCR only allows a "probable" diagnosis, as it is unable to discriminate infection from colonization. However, molecular methods, such as end-point PCR and qPCR, are faster and easier to perform and interpret, thus allowing the laboratory to return useful microbiological data to the clinician in a shorter time. The present study aims at comparing the diagnostic performance of microscopy with that of molecular assays and beta-D-glucan on bronchoalveolar-lavage fluids from patients with suspected Pneumocystis jirovecii pneumonia. Bronchoalveolar-lavage fluid from eighteen high-risk and four negative control subjects underwent Grocott-Gomori's methenamine silver-staining, end-point PCR, RT-PCR, and the beta-D-glucan assay. All the microscopically positive bronchoalveolar-lavage samples (50%) also resulted positive by end-point and real-time PCR, and all but two resulted positive also by beta-D-glucan quantification. End-point PCR and RT-PCR detected 10 (55%) and 11 (61%) of the 18 samples, respectively, thus showing enhanced sensitivity in comparison to microscopy. All RT-PCR results with a Ct < 27 were confirmed microscopically, whereas samples with a Ct ≥ 27 were not. Our work highlights the need to reshape and redefine the role of molecular diagnostics in a peculiar clinical setting, like P. jirovecii infection, which is a rare but severe and rapidly progressive clinical condition affecting immunocompromised hosts that would largely benefit from a faster diagnosis. Strictly selected patients, according to the inclusion criteria, resulting negative by molecular methods could be ruled out for P. jirovecii pneumonia.

Introduction

Pneumocystis jirovecii is an atypical fungus causing Pneumocystis pneumonia (PCP) in immunocompromised subjects, such as those with AIDS and/or on immunosuppressive treatments [1,2]. Clinical history and presentation, along with radiological examinations, drive presumptive diagnosis and empirical therapy. However, the role of laboratory tests in confirming it has progressively grown [3,4]. The current diagnostic gold standard for P. jirovecii is microscopic visualization of the fungus in clinical respiratory samples, such as bronchoalveolar-lavage (BAL) fluid. Drawbacks of this technique include low sensitivity, dependence on the microbiologist's experience, and long turnaround times [5,6]. Due to its suboptimal sensitivity, negative microscopy cannot exclude infection. Faster and more accurate detection of P. jirovecii can be provided by PCR-based assays, e.g., end-point and quantitative real-time PCR (qPCR) [6-9], which are more sensitive than microscopy but cannot discriminate between colonization and PCP.
The serum (1-3)-β-D-glucan (BDG) assay measures the level of a cell-wall component of many fungi, with the exceptions of Zygomycetes, Blastomyces dermatitidis, and Cryptococcus spp., which synthesize extremely low levels of BDG or none at all. Therefore, this assay lacks specificity, as BDG serum levels can rise due to various fungal infections, e.g., Candida spp. colonization and/or infection, but also in the presence of non-fungal interfering factors, like intravenous treatment with antibiotics, albumin, or immunoglobulin, and haemodialysis. Serum BDG might help the clinician to exclude the diagnosis of invasive fungal disease (IFD), like PCP, due to its high negative predictive value (93%) [10-14], although several authors have pointed out that this test cannot be considered the only test to perform in order to rule out the diagnosis of IFD and/or PCP [15,16]. BDG can also be measured in BAL; however, its role in clinical practice is still a matter of debate. The aim of the present study was to compare the diagnostic performance of end-point PCR, RT-PCR, and the BDG assay with the reference standard, Grocott-Gomori's methenamine silver-staining, on BAL from patients with suspected PCP.

Results

Based on the inclusion criteria, 18 patients with suspected PCP and four control patients were selected. One BAL sample per patient was analysed by the different methods (Table 1). Microscopy by GMS allowed the identification of P. jirovecii in 9 (50%) of the 18 samples, and molecular analysis by the end-point Unyvero-HPN detected 10 (55.6%) positive samples. P. jirovecii was visualized microscopically in the same 9 samples categorized as intensely positive by end-point PCR. In contrast, sample n. 14, categorized as weakly positive by end-point PCR, was not visualized microscopically. BDG was measured in all samples. Fifteen (83.3%) of the 18 showed positive results, with 7 among those diagnosed by GMS and end-point PCR showing BDG values from 591 to > 1000 pg/mL. P. jirovecii DNA amplification by RT-PCR (Sacace P. jirovecii Real-TM®) detected 11 (61.0%) of the 18 samples. The same 9 samples already detected by GMS and end-point PCR showed a Ct value < 27, the additional sample detected only by end-point PCR (n. 14) resulted positive with Ct = 30, and a further sample (n. 2) with Ct = 28 was not detected by GMS or end-point PCR. Interestingly, the BAL samples n. 17 and 18, which resulted positive microscopically and by end-point PCR, both with a Ct = 19 by RT-PCR, showed a negative BAL BDG result (255 pg/mL and < 10 pg/mL, respectively), performed in duplicate. For these patients, serum BDG received on the same day resulted negative for patient n. 17 (< 10 pg/mL), whereas it resulted positive for patient n. 18 (168 pg/mL). In sample n. 14, positive by molecular methods, the BAL BDG resulted negative (118 pg/mL). Sample n. 2 presented a high BDG level (> 1000 pg/mL). The control patients resulted negative in every assay. A kappa agreement analysis was performed, showing a good strength of agreement (k = 0.780) between end-point PCR and the reference standard (GMS), a moderate strength of agreement (k = 0.576) between RT-PCR and the reference standard, and a poor strength of agreement (k = -0.33) between BDG on BAL and the reference standard. On secondary analysis, a very good strength of agreement (k = 0.818) was found between the two PCR methods.
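As an illustration of the agreement statistic used here, the snippet below computes Cohen's kappa from a 2x2 concordance table. The counts are reconstructed from the results described above (9 concordant positives, 1 discordant sample, 8 concordant negatives for end-point PCR versus GMS among the 18 patient samples) and are assumptions for demonstration only; depending on how the indeterminate results and the four controls were handled, they may not reproduce the published kappa values exactly.

```python
# Cohen's kappa for agreement between two binary diagnostic methods.
# Counts are reconstructed from the text and serve only as a worked
# example; the paper's own tabulation (and thus k = 0.780) may differ.

def cohens_kappa(a, b, c, d):
    """a: both positive, b: method1+/method2-,
    c: method1-/method2+, d: both negative."""
    n = a + b + c + d
    p_observed = (a + d) / n
    # chance agreement from the marginal positive/negative rates
    p_chance = ((a + b) * (a + c) + (c + d) * (b + d)) / (n * n)
    return (p_observed - p_chance) / (1 - p_chance)

# End-point PCR vs. GMS on the 18 patient samples: 9 both positive,
# 1 PCR-positive/GMS-negative (sample n. 14), 0 PCR-negative/GMS-positive,
# 8 both negative.
print(round(cohens_kappa(9, 1, 0, 8), 3))   # ~0.889 on these counts
```

The value of kappa over raw percent agreement is that it discounts the agreement expected by chance given each method's positivity rate, which matters when, as here, roughly half the samples are positive.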
Discussion

To the best of our knowledge, this is the first study comparing the BDG assay with GMS microscopy and both end-point PCR and RT-PCR for P. jirovecii detection in BAL samples. Direct microscopic staining is considered the gold standard for the diagnosis of "proven" PCP, differently from qPCR, which allows the definition of "probable" PCP [17,18]. However, molecular methods, such as end-point PCR and qPCR, require fewer person-hours and are faster and easier to perform and interpret, allowing the laboratory to return useful microbiological data to the clinician in a shorter time. In addition, a positive result by molecular methods can also play an important role in the microscopic examination, as the latter can be difficult to interpret, especially if the microbial load is low. The present study highlights that careful selection of patients with strict inclusion criteria is essential both to define an appropriate request and to save healthcare facility resources. In this study, all the microscopically positive BAL samples (50%) were also positive by end-point and real-time PCR, and all but two were also positive by BDG quantification. As expected, RT-PCR was more sensitive than microscopy, detecting P. jirovecii in 61% of the tested samples. In agreement with previous studies, positive results by RT-PCR with Ct < 27 were confirmed microscopically [19]. Levels of BDG were found to be high (≥ 837.3 pg/mL) in 6 patients with no evidence of P. jirovecii infection. Since BDG is present in the cell wall of many fungi, it cannot be used to diagnose a specific fungal infection, indicating that BAL BDG should be taken into consideration only for its high negative predictive value [13]. Indeed, it proved useful in ruling out the diagnosis of PCP for all the negative control patients. Moreover, this test is considered even more reliable for ruling out PCP in HIV-positive patients. However, our data indicate that negative BAL BDG as well as negative serum BDG values cannot exclude "proven" PCP infection, in accordance with the current literature [15,16]. Indeed, patient n. 17, who was HIV positive with both negative BAL and serum BDG, and patient n. 18, who was HIV positive with negative BAL and positive serum BDG, confirm that the BDG test alone is not sufficient to rule out PCP even among high-risk patients such as HIV-positive patients [16]. Taken together, the diagnosis of PCP can be upgraded if an appropriate microbiological test result becomes positive. According to the guidelines [17,18], appropriate host factors and clinical and radiological criteria should be confirmed by microscopy and qPCR. Although the present data are too limited to draw firm conclusions, we propose that patients strictly selected according to the inclusion criteria who test negative by molecular methods could be ruled out for a PCP diagnosis. This conclusion is based on the following findings. First, the positive results obtained by GMS were confirmed by molecular methods, i.e., end-point PCR and RT-PCR. Second, RT-PCR showed enhanced sensitivity in comparison to GMS and end-point PCR.
Noteworthy, in comparison to GMS, molecular methods require reduced person-hours and turnaround time, and provide results independent of the microbiologist's experience. Third, in agreement with Fauchier et al. [19], all patients positive by RT-PCR with a Ct < 27 were confirmed microscopically, suggesting that these patients could be considered affected by PCP. Further studies are needed to determine PCR cut-off values to discriminate "proven" from "probable" PCP and from colonization, which would allow clinicians to be provided with faster and more reliable results.

Table 1 Comparison of microscopic, end-point PCR, RT-PCR, and β-D-glucan assays between PCP patients and negative controls. (a) +, ++, +++: weakly, moderately, intensely positive samples; (b) Ct: threshold cycle value; (c) results < 400 pg/mL = negative; 400-450 pg/mL = indeterminate; > 450 pg/mL = positive. [Table data not reproduced here.]

Conclusions

In conclusion, our work highlights the need to reshape and redefine the role of molecular diagnostics in a peculiar clinical setting, such as P. jirovecii infection, a rare but severe and rapidly progressive condition affecting immunocompromised hosts, who would largely benefit from a faster diagnosis.

Methods

This is a prospective methodological study, conducted at the Pisa University Hospital, Mycology Unit, from January 2020 to October 2020.

Inclusion criteria

Current immunosuppression (AIDS or immunosuppressive treatments), new-onset progressive exertional dyspnoea, fever, cough and hypoxia, suggestive imaging with ground-glass opacities, diffuse infiltrates and/or nodules, high lactate dehydrogenase levels, and no response to empirical antibiotic treatment [3,4,17,18]. Four patients who did not meet the abovementioned criteria were used as negative controls.

Microbiological diagnosis of P. jirovecii

The reference standard was Grocott-Gomori's methenamine silver-staining. BAL samples were centrifuged (4000 rpm, 10 min) and divided into two aliquots, one immediately used for microscopy and end-point PCR, the other frozen (-20 °C) for subsequent RT-PCR and BDG quantification. All samples underwent: i) microscopic examination via Grocott-Gomori's methenamine silver-staining (GMS); ii) end-point PCR with the Curetis Unyvero®-HPN panel (Holzgerlingen, Germany); the amplified target gene was the 26S rDNA. BAL fluid (180 µL) was lysed in the Unyvero Sample Tube and processed together with the Master Mix within the cartridge inside the Unyvero Analyzer. DNA positivity was expressed as weakly, moderately, or intensely positive. The end-point PCR analytical sensitivity for P. jirovecii detection was 10^5 pathogens/mL; the diagnostic sensitivity and specificity reported in the manufacturer's instructions specifically for P. jirovecii were both 100%. This assay is performed with a commercially available kit that has passed the quality controls required for CE approval; iii) P. jirovecii DNA extraction was performed with the QIAamp DNA mini kit (Qiagen GmbH, Hilden, Germany) and amplification by RT-PCR (Sacace Pneumocystis jirovecii Real-TM®, Sacace Biotechnologies, Como, Italy) with fluorescent reporter dye probes specific for P. jirovecii and an internal control (β-globin gene) used as an amplification control for each specimen and to identify possible reaction inhibition. The amplified target gene was the 26S rDNA.
Positive and negative controls were tested along with the patients' samples, following the manufacturer's instructions, in a 96-well plate on a CFX96 Touch Real-Time PCR Detection System (Bio-Rad Laboratories, Hercules, CA, USA). Samples were considered positive if the threshold cycle (Ct) value was ≤ 38 and the curves showed the typical sigmoidal profile. The RT-PCR (Sacace Pneumocystis jirovecii Real-TM®) analytical sensitivity was 200 DNA copies/mL; both sensitivity and specificity were 100%. The method is qualitative; nevertheless, through Ct values, real-time PCR allows an estimation of the fungal burden in the study samples, as previously described [18,20]. This assay is performed with a commercially available kit that has passed the quality controls required for CE approval; iv) thawed BAL samples were centrifuged (3000 rpm, 10 min) and the supernatants used for the BDG assay via the Goldstream® Fungus (1-3) β-D-Glucan Test, Chromogenic Method (GKT-5M) (Era Biology, Tianjin, China). BDG was quantified by a kinetic automatic reader (IGL-200, Era Biology, Tianjin, China) and compared with standard curves. Results were interpreted as: < 400 pg/mL = negative; 400-450 pg/mL = indeterminate; > 450 pg/mL = positive. Regarding P. jirovecii infection, the serum BDG assay has shown a pooled sensitivity of 87% (95% CI: 0.73-0.94), a specificity of 97% (95% CI: 0.87-0.99), a positive predictive value of 97% (95% CI: 0.85-0.99), and a negative predictive value of 88% (95% CI: 0.75-0.95) [9]. This assay is performed with a commercially available kit that has passed the quality controls required for CE approval.
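For illustration, the interpretation rules above (positivity at Ct ≤ 38, with Ct < 27 corresponding to the microscopy-confirmed range in this series) can be expressed as a small helper. This is a hypothetical sketch in Python, not part of any kit's software:

    # A minimal sketch applying the study's Ct thresholds to RT-PCR results.
    def interpret_rt_pcr(ct):
        """Classify a BAL RT-PCR result by its threshold cycle (Ct)."""
        if ct is None or ct > 38:   # no amplification, or Ct above the cutoff
            return "negative"
        if ct < 27:                 # in this series, microscopy-confirmed range
            return "positive, high fungal burden (microscopy-confirmed range)"
        return "positive, low fungal burden (possible colonization)"

    for sample, ct in {"n. 17": 19, "n. 14": 30, "n. 2": 28, "control": None}.items():
        print(sample, "->", interpret_rt_pcr(ct))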
Comparative LCA of automotive gear hobbing processes with flood lubrication and MQL

The life cycle inventory (LCI) data of a gear hobbing process was obtained by means of the unit process life cycle inventory (UPLCI) methodology, in order to conduct a comparative life cycle assessment (LCA) between hobbing assisted by flood lubrication (FL) and by minimum quantity lubrication (MQL). The results pointed out that 4 of the 11 normalized environmental impact categories totalized more than 80% of the accumulated impacts: Fossil Depletion (43%), Climate Change (19%), Terrestrial Acidification (11%), and Freshwater Consumption (8%). The hotspot identified in the case study was the input flow of raw material for the system "Hobbing Machine," which was linked to more than 75% of the total amount of normalized potential environmental impacts. Since changes in raw material depend on the gear design, the research focused on the environmental aspects of energy and cutting fluid consumption, which depend directly on the hobbing process parameters. The introduction of MQL provided a reduction of 70.77% in the total amount of normalized potential impacts, while the strategies to reduce electric energy consumption by the machine tool accounted for only 3.74%. Nevertheless, when the raw material flow is considered in the LCA, it becomes the process hotspot, due to the high energy demanded by the steel-making process and by the forging and turning operations that shape the semi-finished gear. The relevance of the key environmental aspects, electric energy, cutting fluids, and raw material, can vary significantly according to the gear size itself. The performed case study was considered a pilot project for the hosting company and can be scaled up to a whole gear manufacturing plant to identify manufacturing cells that are eligible for optimization in the use of cutting fluids and electric energy by the machine tools.

Introduction

The industrial segment consumes nearly 30% of the total amount of energy available globally within the end-user sector [1]. Wegener et al. [2] and Liu et al. [3] highlight that a considerable part of this energy consumption is due to machining processes in the manufacturing industry, which can trigger environmental impacts such as Fossil Depletion and Climate Change [4]. Moreover, the intense use of cutting fluids in machining operations can also lead to other environmental impacts, such as human toxicity, which may result in occupational diseases ranging from skin irritations to cancers [5]. Therefore, the study of the diverse environmental aspects associated with machining operations is a relevant activity within the life-cycle management of industrial environments. In this sense, life cycle assessment (LCA) has become an acknowledged tool for the evaluation of environmental impacts connected to the complete life cycle of products, processes and economic activities [6]. In regard to the manufacturing phase within the LCA of a product system, Silva et al. [7] identified, among research efforts in Green Manufacturing, proposals focused on energy consumption analysis, the practice of LCA per se, and machining process design. Campitelli et al. [8] explain that the design of a machining process can employ the technique known as minimum quantity lubrication (MQL), which is associated with expressively lower environmental impacts than those obtained with conventional flood lubrication (FL). Although LCA finds direct application in industrial machining, Arena et al.
[9] claim that its practice is difficult, since it requires the inventory data of each component that composes the assembled final product, plus the systematic collection and interpretation of the diverse energy and material flows of all relevant activities throughout the entire product life cycle. The authors also highlight, as obstacles to the realization of LCA, the lack of life cycle inventory (LCI) datasets, which may lead to the definition of incomplete system boundaries, missing some processes and, as a consequence, resulting in underestimated environmental impacts. Suh et al. [10] clarify that LCI databases are valuable resources for the conduction of LCA, since they help to reduce the time and resources invested in analysis and evaluation, as the available datasets can be employed initially to detect hotspots before the practitioner decides to work on data collection. In addition, LCI databases help to increase the comparability among LCAs. However, there is a scarcity of LCIs about machining processes, since LCA studies are more frequent for products than for manufacturing processes [7]. Brundage et al. [11] affirm that LCA methods rely on LCI data containing impact estimates of manufacturing processes. Thus, the accuracy of LCI data is critical for quality assessments; however, the available datasets are often insufficient to cover the variety of existing machining processes and are often only a coarse estimate of actual impacts. Gamage and De Silva [12] noticed that the lack of LCI datasets is more evident for non-conventional machining processes, such as electrochemical, electroerosion, laser or electron beam, water jet and other hybrid processes. The adaptation of existing LCI datasets from one global region to another could be a solution for the data scarcity; however, Henriksen et al. [13] warn that data non-representativeness can take place when datasets covering a specific technology are assigned to replace a mix of technologies, or when average data from a region are employed as a replacement for process data in a specific location. In Brazil, the development of LCIs is more recent than the European and North American initiatives. The national database of life cycle inventories, "SICV Brasil", was first published in 2016 [14], and today its collection includes 22 published process datasets [15]. In this way, research on life cycle inventories of manufacturing processes adds value to life cycle studies, as it contributes to increasing the accuracy of LCAs, which constitute guidance for projects and initiatives to foster sustainability in the public and private spheres. In the Brazilian context in particular, LCI research contributes to the creation of genuine national datasets, which capture actual aspects of local industrial and economic activities and encourage additional LCA studies and applications. The datasets produced by the present research may compose the LCI database "SICV Brasil", in conformity with the Qualidata Guidelines [16].
The present research intends to contribute to the theoretical front of sustainable manufacturing by:

• Elaborating an LCI dataset on the hobbing process to compose the national database of life cycle inventories, SICV Brasil;
• Providing information about the energy consumption in each machine tool operating state within a gear hobbing cycle, to help manufacturers during the development or optimization of machine tools with regard to energy efficiency strategies;
• Outlining the influence of material content in the inventory of the hobbing unit process, to guide future estimations of potential environmental impacts throughout the supply chain of a gear manufacturing plant.

The scope of the present research is to conduct a comparative LCA between automotive gear hobbing processes assisted by FL and by MQL, in order to evaluate the potential environmental impacts derived from those two process configurations. To provide a foundation for the mentioned LCA, a literature review about LCI methodologies was carried out. As part of the LCA, three different process setups for the gear hobbing were proposed to decrease energy and cutting fluid consumption, and the corresponding effects on the potential environmental impacts were evaluated. In Sect. 2, the literature review approaches the core topics: LCA of machining processes employing MQL and, specifically, LCA of the gear hobbing process; the UPLCI methodology and its interplay with LCA practice and its guideline ISO 14044 [17]. Section 3 presents the methods and resources employed to carry out the case study. In Sect. 4, the results of the LCA and the conducted sensitivity analysis are provided and discussed. Finally, Sect. 5 concludes the paper, comparing the case study findings to previously published research on LCA of gear hobbing and suggesting future research directions.

LCA of machining processes

The use of LCA to investigate the potential environmental impacts of machining processes in industry worldwide has achieved different deployments, such as: identification of machining process hotspots; optimization of machining process parameters aiming at a lower environmental burden; comparison between conventional machining processes and additive technologies; and integration of LCA into the product and process development phases, as presented by Sharma et al. [18], Campitelli et al. [8], Awad and Hassan [19], Kamps et al. [20], Filleti et al. [21], and other researchers. One huge source of energy consumption in machining processes is the machine tools themselves. According to Stehlík [22], the energy costs of operating the same machine tool for ten years are 100 times higher than its acquisition cost. Pusavec et al. [23] emphasize that the key factor for increasing the environmental performance of industrial machining processes is reducing energy consumption, since 10% is employed in air compression systems, 40% in machine electric drives, and 40% in heating and lighting.

LCA of machining processes with MQL

One important environmental aspect of machining processes is the application of cutting fluids to control heat generation during material removal by the cutting tools, in order to keep the workpiece quality requirements while optimizing the cutting tool lifetime [24,25]. However, the use of cutting fluids presents some side effects, such as the dispersion of hazardous substances like biocides and chemical additives, used to fight the growth of fungi and bacteria in the cutting fluids and to control the oxidation of machine parts, workpieces and tools.
Shashidhara and Jayaram [26] and Shokrani et al. [27] stated that cutting fluids dispersed as liquid, and into the air as aerosol, can cause diseases such as asthma; lung, esophagus and colon cancers; and diverse occupational infections in production environments. One promising technology used to fight the environmental impacts of cutting fluids is MQL. It involves the delivery of small quantities of cutting fluid, between 10 and 100 ml per hour, mixed with compressed air as an aerosol, directly into the machining cutting zone [28]. Figure 1 shows the lubrication mechanism by means of a milling operation. Although MQL employs mineral oils, their consumption is reduced drastically, by a factor of up to 10,000 compared with the original use in flood lubrication techniques [28,29]. Grzesik [30] compared the metal machining performance of flood lubrication (FL), near-dry machining (NDM) and MQL assistance technologies, ranking the aspects: lubrication and refrigeration effects, chip transportation and recyclability, investments in technology, operational and disposal costs, health aspects, protection against corrosion, and cutting fluid waste (Fig. 2). In most of the analyzed aspects, NDM and MQL showed higher performance, although NDM use is prevented in some situations due to accelerated cutting tool wear. Campitelli et al. [8] concluded that MQL was far more favorable than FL for drilling and milling processes in aluminium, steel and iron alloys, since MQL provided a significant reduction of potential environmental impacts over FL: Climate Change (44%), abiotic resource depletion (31.5%) and land use (70.3%), with 70% of the impacts connected to energy consumption and 27% to cutting fluids.

LCA of gear hobbing processes

The machining technology most employed to manufacture gears is the gear hobbing process, especially for spur and helical gears, because it provides lower production costs and higher dimensional accuracy than other methods [31]. Tapoglou et al. [32] stated that gear hobbing is characterized by a complex geometry in the interface between the hob and the workpiece, with different hob cutting edges removing material simultaneously and causing different chip formation. Brecher et al. [33] pointed out that gear hobbing process design depends on empirical parameters such as metal chip color, shape, and width, which serve as input data to estimate the cutting efforts developed during machining. Xiao et al. [34] proposed an empirical formula to calculate the cutting efforts in hobbing. It encompassed 12 parameters, including empirical coefficients obtained by means of orthogonal experiments, geometrical tool and gear features such as module and number of teeth, and process parameters such as hob angular speed and feed rate. Stachurski and Kruszyński [35] explained that gear hobbing assisted by MQL has not yet been studied in depth; they carried out experiments varying the cutting speed with high-speed steel hob cutters. In 2019, 92 million vehicles were produced globally [36]. This means more than a billion gears were manufactured for application in the automotive segment alone. Within this context, comprehension of the potential environmental impacts derived from the gear hobbing process can be seen as a relevant research theme. The search for research works by means of the combined terms "LCA" and "gear hobbing", "gear milling" or "gear manufacturing" resulted in the selection of only 3 articles, published in 2010, 2018 and 2019, in the database Web of Science.
Fratila [37] compared the hobbing of gears manufactured from the steel alloy DIN 16MnCr5 assisted by conventional FL and by MQL. The process configuration with MQL consumed 9% less energy than the one set up with FL, since FL requires constant pumping of liquid cutting fluid into the cutting zone. In regard to environmental impacts, the Eco-Indicator 99 method revealed that hobbing with MQL produced 8% less potential impact than hobbing with FL. Zeng et al. [38] proposed an LCA-based methodology to support decisions in designing machine tools from the perspective of Ecodesign, aiming at the reduction of energy consumption throughout the use phase of such equipment, since machine tools consume 75% of the energy demanded by industrial manufacturing, which, in turn, employs 33% of the annual global energy supply and accounts for 20% of global CO2 emissions. The case study performed with a gear hobbing machine tool showed that the resulting carbon footprint, in tons of CO2 equivalent, received contributions from the environmental aspects: energy consumption in the machine tool use phase (84%); production and tools (10%); raw material obtainment (6%); machine tool manufacture (3%); transport (1%); and recycling phase (6%), although such percentages may vary according to the constructive technology and type of machine tool. Jiang et al. [39] compared the environmental performance of small gears weighing 10 g, produced by CNC hobbing in steel DIN 42CrMo4, and by the additive manufacturing technology named laser engineered net shaping (LENS), where the part is formed by multiple successive layers of steel alloy obtained from metallic powder melted by a high-power laser beam. The conclusion was that the LENS process achieved lower potential environmental impacts due to lower consumption of energy and production consumables, although quality and process scalability were not monitored in the course of the case study.

The unit process life cycle inventory (UPLCI) methodology

Unit process life cycle inventory (UPLCI) is a modeling approach that enables users to estimate the energy use and material flow of a unit process, allowing the reuse of such models for a wide range of machines, products, and materials [40-43]. The UPLCI methodology was introduced by Kellens et al. [44,45] with the aim of offering procedures that enable the generation of robust and complete unit process inventory data, in order to contribute to the identification of potential environmental improvements for the analyzed processes. The UPLCI was designed to provide the practitioner with a framework to collect process inventory data, both at the level of equipment subunits and by their use modes [21]. Kellens et al. [44] developed two approaches for the application of the UPLCI: the "Screening" approach is based on mathematical and computational engineering, while the "In-depth" approach employs real-time measurements of the manufacturing process. The "Screening" approach does not include data collection on the shop floor and provides an initial description of the process inventories. On the other hand, the "In-depth" approach is time-consuming, since primary process data collection is mandatory; nevertheless, it supplies more precise and complete inventory data, helping more in the detection of process environmental hotspots [44].
The application of the UPLCI methodology is guided through 8 steps, mainly covering: process classification according to the DIN 8580 taxonomy [46]; generation and collection of inventory data based on engineering calculations, industrial process measurements, or a combination of both; LCI dataset peer-reviewing and publication; and proposition of actions to improve the process environmental performance along with guidelines of best practices. Regarding the core step of the UPLCI methodology, the inventory data collection, Kellens et al. [44] explain that the goal and scope of the study must be clearly defined and consistent with the intended unit process. Therefore, some activities must take place: (i) investigate the machine tool architecture and the most influential process parameters; (ii) identify the sub-processes, machine tool subsystems and production modes to be monitored throughout the study; (iii) define the system boundaries, the functional unit and the reference flow.

System boundaries, functional unit, and reference flow

The definition of the system boundary should declare the studied process and its sub-processes and the process inputs and outputs, justifying the excluded ones on the grounds of their low relevance to the study. Linke et al. [41] claim that the system boundary for machine operation should not take into account the influence of external elements such as material handling and feeding. Kellens et al. [44] state that, in UPLCI studies, the system boundaries are set to encompass only the operating phase of the unit process, excluding the manufacture or disposal of the corresponding machine tool, as shown in Fig. 3. Moreover, all inputs and outputs from the Technosphere and Ecosphere must be listed (Fig. 3). The next step is picking the process parameters that are relevant to process performance, in reference to a quantitatively and qualitatively measurable functional unit. Thus, input and output flows across the system boundary always refer to the functional unit [44]. The description of the functional unit specifies a performance indicator for the functional output flows of the analyzed product system, serving to establish the reference flow, which measures the quantity of product demanded to fulfill its previously assigned function [47]. Kellens et al. [44] propose the use of a generally applicable reference flow of 1 s of processing time, for a specified load level of a unit manufacturing process and a specified material, based on a working scheme of 2000 h per year.

Inventory analysis in manufacturing processes

The LCI phase, according to ISO 14040 [47], consists of collecting data regarding all the inputs and outputs to/from the product system related to the environmental impact categories of interest [44]. The "In-depth" approach prescribes the conduction of 5 studies to measure and analyze material and energy consumption synchronized with real-time manufacturing operation. Kellens et al. [44] explain that the time studies are developed to spot the different use modes of the machine tool and the respective process parameters, recording the time spent in each use mode and the active machine subunits. Directly connected to the time study, the power study maps the electric energy consumption of each machine tool processing unit in each use mode. The energy consumption is calculated from the power consumed and the duration of each machine use mode, for both the machine tool and the corresponding active subunits.
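The energy calculation from the combined time and power studies reduces to a sum of power-times-duration products per operating state. The sketch below in Python illustrates this (all numeric values are assumed for illustration only; they are not the case study's measurements):

    # A minimal sketch of the UPLCI power study:
    # energy per operating state = mean active power x state duration.
    states = {                       # state: (mean active power [kW], duration [s])
        "Processing":           (12.0, 165.0),
        "Warm Up":              ( 9.0,  29.0),
        "Ready for Processing": ( 3.0,   6.0),
    }

    total_kwh = 0.0
    for name, (power_kw, duration_s) in states.items():
        e_kwh = power_kw * duration_s / 3600.0   # kW x s -> kWh
        total_kwh += e_kwh
        print(f"{name:22s} {e_kwh:.4f} kWh")
    print(f"total per hobbing cycle: {total_kwh:.4f} kWh")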
For the consumable study, Kellens et al. [44] indicate the measurement of the consumed compressed air, lubricants, water, tools, and other materials employed in each processing unit and production mode, in parallel to the time and power studies. The generated amount of waste must also be included as a consumable in the study, since the impacts of its recycling, combustion or landfilling need to be taken into account when the UPLCI data are used in the modelling of a product LCI. Where relevant, gaseous, liquid and solid emission measurements need to be performed for each production mode and expressed in volume units or in weight per volume.

LCIA and LCA interpretation in manufacturing processes

Although the LCIA is not part of the UPLCI methodology, it is essential for interpreting the LCA results in reference to the questions posed in the definition of the LCA goals [48]. The main goal of the LCIA is to classify and bring forward the relevance of the potential environmental impact of each flow inventoried in the LCI phase [47]. There are dozens of LCIA methods available in the literature, and the application of their proposed environmental impact categories depends on the studied process. Firmino [49] points out that the most used LCIA methods in manufacturing processes are ReCiPe [50], IMPACT 2002+ [51], Eco-Indicator 99 [52], and CML2001 [53], as shown in Table 1. EC/JRC/IES [54] highlight two purposes for the interpretation of LCA results, the last phase of an LCA:

• It provides hints for the improvement of the LCI model based on the scope and goals of the LCA, which were set initially in the study;
• It serves to underpin technical and management decisions and recommendations referring to the studied product or process.

According to ISO 14044 [17], the LCA interpretation phase comprises: the identification of hotspots derived from the results obtained in the LCI and LCIA; the evaluation of the LCA with regard to its completeness, sensitivity, and consistency; and the presentation of the study findings, limitations, and recommendations. The use of the UPLCI methodology leads to recording the LCI of manufacturing processes in such a way as to allow comparisons between machining processes, even if they present distinct characteristics.

Methods

The UPLCI methodology was used to develop a case study about the hobbing operation of automotive gear teeth. Afterwards, the LCI data were entered into an LCA model considering 4 scenarios for this hobbing operation.

Case study

The case study was conducted in the plant of an auto parts manufacturer located in the state of São Paulo, Brazil. The gears and other components are used in the assembly of commercial vehicle transmission systems for the South American market. The hobbing operation was assisted either by FL or by MQL. The materials, procedures, work premises, and techniques employed in the case study are described in the following subsections.

Materials

The machine tool used in the study is a hobbing machine, model S300, year 2011, manufactured by the company Samputensili, operating on a three-phase voltage of 380 VAC, 60 Hz, and equipped with a Siemens Sinumerik numeric control. For the case study, the premises below were observed: (a) machine tool set up with a multiple-thread involute hob (Fig. 4b); (b) cutting assisted by conventional FL and by MQL supply, in distinct time periods; (c) gear with module 4.5 mm, 32 teeth and 14 kg of mass, typical dimensions of gears for heavy-duty commercial vehicles (Fig. 4c); (d) annual gear production of 80,000 units.
The machine tool has three linear axes (X, Y, Z), three rotating axes (A, B, C) and one robot arm for positioning the workpieces (Q), according to the schema shown in Fig. 4a. During the machining time, the hobbing cutter rotates about axis B while the hob head moves in the axial direction along the gear axis (Z axis). Simultaneously, the work gear rotates about axis C. At the end of the operation, the robot arm holds the machined gear, turns 180°, and positions a new blanked work gear onto the worktable to start a new machining cycle. The machine tool operates in six distinct states, according to ISO 14955-1:2017 [56], ranging from full activation of the machine systems to the state characterized by the absence of power supply from the industrial electrical grid. Within one hobbing cycle, the "Extended Standby" state takes place right after the material removal phase and lasts until a new blanked work gear is positioned onto the worktable. On the other hand, the "Standby" state is activated for longer machine downtimes, such as meal breaks, shift changes and weekends. Over one entire hobbing cycle, the machine tool switches among the states "Processing," "Warm Up," "Ready for Processing," and "Extended Standby," which actuate over the hobbing machine subunits:

• Primary system (lighting, control unit, sensors);
• Hydraulic unit;
• Hob spindle;
• Worktable spindle;
• Axial, radial and vertical feed drives;
• Rotating drives;
• Exhausting system;
• Cooling system;
• Compressed air system or flood lubrication pump;
• Chip extraction conveyor.

Figure 5 presents the correspondence between machine tool operating states and the activation states of its subunits, in order to support the data interpretation and the identification of energy consumption sources in the LCI.

Functional unit, reference flow, and system boundaries

In accordance with the UPLCI methodology [44], the functional unit was expressed as the volume of material removed during the hobbing operation. The volume removed within one second defines the functional unit as 0.67 cm³ of steel alloy 20MnCr5 from the blanked work gear. Likewise, as recommended by Kellens et al. [44], the reference flow is the same as the functional unit, expressed within one second of operation and including the machine tool operating modes: full load, partial load and standby. The scheme outlined in Fig. 6 shows the system boundaries, including the inputs from the Technosphere, outputs to the Technosphere, emissions to the Ecosphere, and the hobbing machine operating states and subunits.

Premises about the LCI

The data collection phase was guided by the "In-depth" approach of the UPLCI methodology. Kellens et al. [44] emphasized that this research approach provides higher data precision and completeness, which support the identification of potential improvements in the corresponding manufacturing process from the environmental hotspots raised by the LCA. The "In-depth" approach is also useful for aggregating data for the development of future inventories and LCI datasets of manufacturing processes.
After establishing the system boundaries, it was determined that the background data would be taken from the database of the software GaBi, and that the foreground data would be collected directly at the production site, according to the following stages:

• Measurement of the inputs consumed (blanked work gear, cutting fluid, hob and compressed air) at each machine tool operating state throughout consecutive hobbing cycles, over 1000 h;
• Indirect measurement of the recyclable chips from the material removed from the blanked work gear, at each machine tool operating state throughout consecutive hobbing cycles;
• Compilation of the collected data to calculate the total amount of inputs and outputs of the product system, and the derived mass and energy balances (Fig. 6).

Time study

The complete hobbing cycle was timed and broken down among the different machine tool operating states. Each cycle started when the hob head moved in the Y-direction towards a new blanked work gear fixed onto the machine worktable, and lasted until the moment a new work gear was fixed onto the worktable by the robot arm and the machine entered the "Ready for Processing" state, indicating a new cycle. The hobbing cycle was repeated 5 times consecutively to detect any variation among the different operating states of the machine tool.

Power study

The electric energy consumption was compiled in an indirect way, by means of electrical power measurements on the machine tool with the Three-Phase Power Quality and Energy Analyzer FLUKE 435. This device was installed at the machine tool electric panel for real-time measurement of the consumed electric current and voltage in each of the three phases, as well as the active power. The measurement data were stored in the internal memory of the device and subsequently transferred to an electronic spreadsheet for handling and data arrangement for the LCI. The power study was repeated five times for the current production setup, and an additional seven times with different combinations of hobbing cutter feed rate and spindle rotational speed. To ensure feasible production conditions, the combinations of feed rate and speed were limited by the technically admissible parameters of the hobbing process, such as the width and color of the metal chips and the dimensional accuracy of the machined gears.

Consumables study

Besides the electric energy consumption, the consumables employed in the hobbing operation were monitored by an indirect counting method: (i) raw material, as blanked work gears; (ii) cutting fluid; (iii) hobbing cutter; and (iv) compressed air. In regard to the generated wastes, the study encompassed: (v) metal chips, (vi) contaminated cutting fluid, and (vii) worn-out hobbing cutters. The consumption of raw material and the generation of solid waste, the metal chips, were measured by controlling the gear mass before and after the hobbing operation; the difference between the values determined the raw material mass converted into metal chips. The volume of metal chips was obtained from the previously measured chip mass divided by 7895 kg/m³, the density of steel alloy 20MnCr5. The direct measurement of the metal chips was discarded due to its lack of precision, since there is no assurance that the full content of chips will be transported by the extraction conveyor to the outside collector: because the chips are thrown apart at high speed during machining, particles may fall onto the machine tool bed instead of the conveyor.
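A minimal sketch of this indirect chip measurement is shown below (the post-machining mass is an assumed value, chosen only to be consistent with the gear masses reported later in the text; small differences from the reported 0.67 cm³/s come from rounding):

    # A minimal sketch of the indirect chip measurement and its conversion
    # to the reference flow basis (one second of operation).
    DENSITY_20MNCR5 = 7895.0       # kg/m^3
    CYCLE_TIME_S = 200.0           # complete hobbing cycle per gear (see Sect. 4)

    mass_before_kg = 15.0          # blanked work gear
    mass_after_kg = 13.97          # machined gear (assumed)

    chips_kg = mass_before_kg - mass_after_kg
    chips_cm3 = chips_kg / DENSITY_20MNCR5 * 1e6       # m^3 -> cm^3
    print(f"chips: {chips_kg*1000:.0f} g, {chips_cm3:.1f} cm^3")
    print(f"reference flow: {chips_cm3/CYCLE_TIME_S:.2f} cm^3 removed per second")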
The cutting fluid consumption was determined by means of the compilation of 1000 h of production data registered by the factory maintenance field team. The cutting fluid reservoir was part of the MQL system, which was installed on the back panel of the machine tool. Every maintenance event that resulted in the addition of cutting fluid to the MQL system reservoir was taken into account to calculate the fluid consumption per produced gear. Therefore, the cutting fluid consumption was the quotient between the cutting fluid consumed within 1000 h of production and the quantity of gears produced within this period. The machine tool had operated for many years with conventional FL before the introduction of MQL; thus, the consumption of cutting fluid per gear under FL was also calculated, based on the maintenance department's historical data for more than 5000 h of operation of this machine tool. The resulting volume of contaminated cutting fluid after the hobbing operation was determined by the difference in weight of a mass of chips in wet and dry conditions. One sample of 1030 g of metal chips, corresponding to the amount of material removed from one blanked work gear, was extracted from the machine tool conveyor and weighed on a scale with a resolution of 0.1 g. Since the metal chips were contaminated with cutting fluid, the sample was dried in an oven to evaporate the liquid adhered to the chips, and weighed again. The difference between the weighed values was taken as the net amount of contaminated cutting fluid, and it was used in both LCA models of gear hobbing, assisted by FL and by MQL. Finally, the consumption of hobbing cutters per produced gear was calculated as the quotient between one tool and the number of gears this tool was able to machine before being sent to reshaping services.
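The two quotients just described are simple arithmetic; the sketch below illustrates them (the dry chip mass is an assumed value; the 1030 g wet sample and the 450-gear tool life appear in the text):

    # A minimal sketch of two quotients from the consumables study:
    # contaminated cutting fluid per gear (wet/dry chip weighing) and
    # hob cutter consumption per machined gear.
    wet_chips_g = 1030.0        # chip sample from one gear, as extracted (wet)
    dry_chips_g = 1010.0        # after oven drying (assumed)
    print(f"contaminated cutting fluid: {wet_chips_g - dry_chips_g:.1f} g per gear")

    gears_per_tool = 450        # gears machined before reshaping/recoating
    print(f"hob cutter consumption: {1/gears_per_tool:.4%} of a tool per gear")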
Emissions study

The machine tool is equipped with an electrostatic filter to absorb the dust, fumes, and oil mists that are formed due to the excessive generation of heat in the cutting zone, the chemical characteristics of the cutting fluid, and its pumping pressure into the cutting zone. The machine tool working space is completely confined, meaning the air exhausted from the hobbing machine is integrally filtered and delivered to the manufacturing plant environment at atmospheric pressure and below 5 mg/m³, which is the threshold for the emission of dust, fumes and oil mists according to the Brazilian Regulatory Standard 15 [57]. The machine tool is also equipped with an external refrigeration unit, which operates with the fluid R134A within the temperature range of 15 °C to 45 °C and delivers 7900 W of power for cooling the circulating oil of the machine hydraulic system. During the conduction of the case study, the dissipated air temperature reached 38 °C, measured 50 cm away from the refrigeration unit. This result indicated that operating the equipment near the machine tool was not classified as an unhealthy working station according to the Brazilian Regulatory Standard 15 [57].

Premises about the LCIA

Based on the case study data, composed of the data collected directly at the machine tool and the historical records of the factory field maintenance team, the LCIA was carried out in the software GaBi, version 9.2.1.68, Education Database 2020, which is suitable for academic research. ReCiPe 2016 v1.1 Midpoint (H) [58] was the method chosen for the evaluation of environmental impacts, since it is the method most adopted for conducting the LCIA of manufacturing processes, as presented in Sect. 2.

LCI

The data collected during the case study are presented in this section, in reference to one complete hobbing cycle of one gear and based on the reference flow set in Sect. 3.3.1. The gathered inventories were: electric energy consumption of the machine tool operating in its different states, consumables, and generated waste. Figure 7 presents, in chronological sequence, the operating states assumed by the machine tool and their corresponding durations over one complete hobbing cycle of one gear, as well as the total time spent by the machine tool in each operating state, in absolute and percentage terms. The total time for hobbing one gear is 200 s, split into three distinct operating states: "Processing" (82.5%), "Warm Up" (14.5%), and "Ready for Processing" (3.0%). Such a utilization profile is typical for machine tools operating under batch production conditions, where the goal is to maximize value-adding activities in the production chain.

Study of the active power employed in the hobbing

The active power and the electrical energy consumption were measured every 5 s throughout the complete hobbing cycle of 200 s, by means of the Fluke 435 device. This procedure was repeated 5 times to measure the total energy consumption of the machine tool and the portions corresponding to the following machine tool subunits: linear and rotational drives, hobbing cutter spindle, hydraulic unit, refrigeration unit, chip extraction conveyor and oil mist filter (Fig. 8). In addition, a final measurement was made to record the energy consumption when the machine assumed typical non-productive states, such as the "Extended Standby" and "Standby" operating states. Prior to the conduction of this case study, the machine tool was assisted by conventional FL. Under that machine setup, the maintenance department periodically monitored the overall energy consumption over two years of operation, for the same gear considered in this case study; the average value obtained was 0.85 kWh. Table 2 shows the aggregated electric energy consumption for each machine tool operating state, as well as the values converted according to the reference flow defined in Sect. 3.3.1. The major portion of the energy is consumed in the "Processing" operating state (76.5%), since the serial hobbing process was designed for the highest productivity, subject to the quality requirements of the machined gear. On the other hand, the "Warm Up" state presented the highest value when the energy consumption is converted into the reference flow basis. This is explained by the energy consumption peaks that take place while the motion units act to position the hob head with respect to the blanked work gear, and during the moves of the robot arm. Figure 9 presents the hobbing cutter spindle speeds and axial feed rates and the corresponding machine tool energy consumption resulting from the combination of both machining parameters, measured under the "Processing" operating state. The assumed cutting parameters are theoretical and based on the tacit knowledge of the gear production department where the case study took place.
While keeping all other operating states unchanged, the variation in the machine tool's total energy consumption was 9.4%, caused by the combined variation of those main machining parameters.

Consumables study data

The consumption of raw material in the form of a blanked work gear of 20MnCr5 alloy, of the hob cutter and of the cutting fluid took place only during activation of the "Processing" state, in which the hob cutter exerts the cutting forces on the work gear. In this way, all the inventory results for the consumables are enclosed in this operating state of the machine tool. For every hobbing cycle in the case study, the raw material, in the form of a turned part with 15 kg of mass and 1901.14 cm³ of volume, is fed into the machine tool chamber, or 75 g of raw material on the reference flow basis. The cutting fluid consumption differs significantly between conventional FL and MQL. In the FL method, the cutting fluid is pumped into the cutting zone from a reservoir installed on the machine tool, whereas in the MQL method the cutting fluid is mixed with compressed air and subsequently delivered into the cutting zone as a spray. The amount of cutting fluid consumed per machined gear was monitored over 100 h of operation, which represents 1532 gears machined in the regular production flow. The hobbing process assisted by the FL method consumed 687.5 L of cutting fluid, while the machine tool equipped with MQL demanded 11 L. In terms of the reference flow, this corresponds to 2.24 ml per second for the FL system and 0.036 ml per second for the MQL system, i.e., a reduction of 98.4% in cutting fluid consumption after the introduction of MQL. The consumption of hobbing cutters was monitored over 100 h of operation with both assisting systems, FL and MQL. On average, it was possible to machine 450 gears in conformity with the quality requirements of the product before ordering reshaping and recoating of the tool. This means each hobbing cycle consumes a small percentage of the hob cutter lifetime, more precisely 0.22%. When converted to the reference flow basis, the amount is even smaller: 0.000011 hob cutters per second. As this consumption level is not significant for the LCI, it was not included in the LCA.
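The per-second figures above follow directly from the monitored volumes and the 200-s cycle; the sketch below reproduces the conversion (values taken from the consumables study in the text):

    # A minimal sketch of the reference-flow conversion of the cutting fluid
    # figures: total volume over the monitored period / total production time.
    GEARS = 1532
    CYCLE_TIME_S = 200.0
    production_time_s = GEARS * CYCLE_TIME_S

    for system, litres in {"FL": 687.5, "MQL": 11.0}.items():
        ml_per_s = litres * 1000.0 / production_time_s
        print(f"{system}: {ml_per_s:.3f} ml/s")

    print(f"reduction with MQL: {(1 - 11.0/687.5)*100:.1f}%")   # ~98.4%

Note that the larger per-second value necessarily belongs to the FL system, which consumed 687.5 L against 11 L for MQL over the same production period.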
Emissions study data

The case study considered two emissions. The first was the filtered air exhausted from the machine tool chamber to the Ecosphere, passing through an oil mist separator and an electrostatic filtering system. The second was the heated air exhausted by the hydraulic oil heat exchanger. The measurement of the output filtered air showed mist and fume concentrations under 5 mg/m³, while the air exhausted from the heat exchanger reached 38 °C at a distance of 0.5 m from the heater. Neither of these emissions surpassed the thresholds established by the Brazilian Regulatory Standard NR-15 [57], according to the periodic audit carried out by the Health & Safety department of the company hosting the case study. For that reason, these emissions were considered of low relevance and, therefore, were not taken into account in the LCIA phase.

LCIA and interpretation

The input and output data of the system "Gear Hobbing Machine," as LCI data, were combined with background system data for modelling the LCA in the software GaBi. The gear tooth hobbing process model included as inputs the raw material consumption, the electric energy consumed by the machine tool, and the cutting fluid employed while machining the gears, as described in Sects. 4.1.2 and 4.1.3. On the other hand, the hobbing process outputs included in the LCA model were the 32-tooth machined gear, the metal chips and the amount of contaminated cutting fluid extracted together with the chips. The LCI data were extrapolated to the production of 80,000 gears in order to represent the annual volume manufactured in the gear production plant. The raw material inventory was considered in the LCA model, since the ore extraction activity, the steel-making process, and the forging and turning processes required to obtain the semi-finished gear were quite relevant in terms of energy consumption. Moreover, the raw material dataset was taken from the database of the software GaBi, with a steel scrap content near 95%. The environmental impact analysis was performed according to the method ReCiPe 2016 v1.1 Midpoint (H) [58], available in the software GaBi, version 9.2.1.68, Education Database 2020, at the time of the analysis.

Sensitivity analysis

The LCIA included 4 scenarios depicting distinct production configurations for the automotive gears, as shown in Fig. 10. The proposed scenarios provided progressive reductions in cutting fluid and electric energy consumption. Figure 11 shows the comparison of the potential environmental impacts for the 11 impact categories normalized according to ReCiPe 2016 v1.1 Midpoint (H) [58], applied to the 4 proposed gear hobbing scenarios. The category Fossil Depletion achieved the highest score on the person-equivalent scale, with a score 3400 times higher than the Metal Depletion score. Four of the 11 impact categories comprise 80% of the total aggregated score, in person-equivalent: Fossil Depletion (43%), Climate Change excluding biogenic carbon (19%), Terrestrial Acidification (11%), and Freshwater Consumption (8%). Figure 12 also reveals that the 4 proposed scenarios produced a limited effect on the reduction of environmental impacts in the 11 mentioned categories. The major reduction was perceived in the category Fossil Depletion (12%). Since the proposed scenarios concern only the hobbing process, the inventory reduction was not reflected in the impacts related to the raw material flow, which is characterized by intense energy consumption in activities such as mining and steel-making. Figures 12 and 13 present the contributions of the product system input and output flows to the environmental impacts for scenarios A and D, described in Fig. 10, since their comparison represents the most significant differences in terms of the unit process inventory. The raw material consumption, as 20MnCr5 alloy billets, is linked to more than 75% of the environmental impacts, on average over the 11 normalized categories for the 4 analyzed scenarios; it is followed, in descending order, by the cutting fluid consumption (15%) and by the energy consumed in raw material forging and turning and in the gear hobbing itself (10%). The search for hotspots confirmed that the raw material and cutting fluid flows contributed more than 90% of the environmental impacts obtained in this case study. The most relevant resources included in those flows were the energy consumed in steel production (53%), lubricant production (34%) and material resources (5%). In terms of emissions, 99% of the environmental impacts were associated with CO2, SO2 and particulate emissions, all derived from the steel-making process. The introduction of MQL, with its consequent minor cutting fluid consumption, enhances even more the relevance of the raw material flows as the key contributor to the potential environmental impacts, as seen in Fig. 13.
Although the raw material appeared as the major hotspot of the unit gear hobbing process, the handling and use of 20MnCr5 alloy semi-finished blanks has been optimized over decades along the gear production supply chain, so that improvements in that hotspot would only be feasible by means of gear redesign, which, in the last instance, would demand new development and test validation efforts for the application on commercial vehicles. As the case study focused on the manufacturing process, the further environmental impact analysis was aimed exclusively at the consumption of cutting fluid and electrical energy by the machine tool. Figure 14 presents the environmental impact results classified in descending order, from the largest impact reduction, for the 11 categories defined in Sect. 3.3.7, on the person-equivalent scale. Four of the 11 categories enclosed more than 85% of the estimated potential impacts when scenario A is compared to scenario D. In terms of the LCI, the shift from scenario A to scenario D provided a reduction of 98.4% in cutting fluid consumption and of 18.7% in machine tool electric energy consumption. Without the corresponding raw material flows, the influence of machine tool energy consumption and cutting fluid consumption in the gear hobbing process became evident: on average over the 11 impact categories, electric energy consumption is linked to 93.6% of the environmental impacts, while cutting fluids account for 6.4% (Fig. 15). Likewise, Campitelli et al. [8] showed that electric energy and cutting fluid consumption correspond, respectively, to 70% and 27% of the potential environmental impacts generated by machining processes. Figures 16 and 17 show the reduction in the potential environmental impacts resulting from the introduction of MQL and from the implementation of strategies to gradually decrease the energy consumption of the machine tool. The introduction of MQL provided an impact reduction of 70.77%, considering the aggregated scores of the 11 categories. Conversely, the reduction in energy consumed by the machine tool accounted for a mere 3.74%. Therefore, the numeric contribution of MQL to environmental impact mitigation is 19 times higher than the effect of reducing the machine tool energy consumption. Figure 16 shows a decrease of 35 person-equivalent points in the category Fossil Depletion. This result can be attributed, in particular, to the reduced use of non-renewable resources and also to decreased emissions of inorganic elements to air. Moreover, the category Human Toxicity (cancer) stood out from the other categories, showing an impact reduction of 8.03 person-equivalent points, as an effect of lower emissions of inorganic elements to air due to the decreased consumption of cutting fluids. In practice, after the introduction of MQL, the machine tool operators were no longer exposed to the cutting fluids and their diverse toxic chemical substances. On the other hand, the reduced electric energy consumption during the hobbing operation did not show significant effects on the potential environmental impacts: considering the aggregated normalized impacts of the 11 categories, the reduction reached only 3.3 person-equivalent points (Fig. 17). The reduction of the time spent in standby states (70%) and of the cutting power in the processing state (12%) ended up reducing the machine tool energy consumption by only 14.6%.
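The aggregation behind such scenario comparisons is a straightforward sum over the normalized category scores. The sketch below illustrates it (the scores are hypothetical placeholders, chosen only so that the aggregate lands near the reduction figures reported in this study; the actual category-level values are in Figs. 11-17):

    # A minimal sketch of aggregating normalized impact scores (person-
    # equivalent) and computing the reduction of scenario D over scenario A.
    categories = ["Fossil Depletion", "Climate Change", "Terrestrial Acidification",
                  "Freshwater Consumption", "others (7 categories)"]
    scenario_a = [49.0, 22.0, 12.5, 9.0, 21.5]   # assumed baseline (scenario A)
    scenario_d = [14.0,  6.0,  3.5, 2.5,  3.0]   # assumed MQL + energy strategies

    reduction = (1.0 - sum(scenario_d) / sum(scenario_a)) * 100.0
    print(f"aggregated impact reduction, A -> D: {reduction:.1f}%")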
Closing remarks about LCA in gear hobbing

The results obtained from the LCIA converged, in general, with the conclusions presented in the literature review. Fossil Depletion, Climate Change, and Terrestrial Acidification together represent around 75% of the potential environmental impacts derived from the gear hobbing process. The input flow related to raw material is associated with 75% of the potential environmental impacts, according to the case study, due to energy-demanding processes like mining and steel-making. If the effects of the raw material flow are set apart from the product system "Gear Hobbing," the introduction of MQL provided approximately a 70% reduction in the potential environmental impacts, and 8.6% when the material flow is still considered in the analysis. Conversely, scenarios C and D together contributed only 3.7% to the reduction of the aggregated impacts of the 11 categories. In terms of the consistency and completeness analysis of the system, the most up-to-date datasets were employed for the modelling of the four greener scenarios. Specific Brazilian datasets for the electricity grid mix, lubricants at the refinery, and steel billet forging and turning, available in GaBi, version 9.2.1.68, Education Database 2020, were used in the LCA. Neither allocation methods nor transportation process datasets were considered in the product system. Furthermore, as stated in the introduction, the generated inventory of gear hobbing can be used to supply emerging LCA databases, like SICV Brasil, with relevant data and information on the machining processes topic.

Conclusions

The UPLCI methodology proved, during the present research, to be a very practical way to build LCI data in a large-scale production environment. The UPLCI and the corresponding LCA showed that the application of MQL in machining processes presents advantages over FL, since it leads to solid waste reduction, lower water consumption, decreased machine tool energy consumption, and mitigation of occupational health risks through the virtual elimination of contact between the machine tool operator and the cutting fluid. The introduction of MQL provided a reduction of 98% in cutting fluid consumption, and the implementation of new machining strategies, such as the reduction of non-productive operating states, resulted in the optimization of electric energy use by the machine tool. These actions implemented together caused a reduction of 74.51% in the potential environmental impacts over the 11 categories, taking into account only the hobbing process itself in the gear manufacturing plant. However, from the perspective of the whole gear manufacturing supply chain, the demand for raw material for the hobbing process contributed more than 75% of the raised potential environmental impacts, considering the average of the 11 normalized impact categories evaluated in the 4 production scenarios, while the cutting fluid use totaled 15% and the electric energy consumption of the machine tool only 4%. Therefore, neither the electric energy demanded by the machine tool nor the cutting fluid use can be called the hotspot of the gear hobbing process; the hotspot is the raw material of the gears. The performed case study was considered a pilot project for the hosting company, having contributed to building knowledge about LCI and LCA. It can also be scaled up to the whole gear manufacturing plant to identify manufacturing cells that are eligible for decreasing cutting fluid use and for implementing actions towards a more efficient use of electric energy.
Finally, the results can be used to deepen knowledge in the green manufacturing field and to supply LCA databases with more relevant data and information on manufacturing process inventories and their environmental performance.
Study of Bond Issuer Companies Listed on IDX and on PT Pefindo Rating List: The Effect of Financial and Non-Financial Factors on Bond Rating

The aim of this study is to determine the effect of the liquidity, leverage, profitability, bond securities, and maturity variables on the bond ratings of entities that issue bonds on the Indonesia Stock Exchange and appear on PT Pefindo's rating list for the period 2019-2021. The objects of this study are all bond issuer companies recorded by PT Pefindo that have complete financial reports on the Indonesia Stock Exchange for 2019-2021. Eleven bond issuer companies are used as samples, each observed over 3 years, yielding a sample of 33 observations. The sample is determined using purposive sampling, while the analytical method used is logistic regression analysis. The results show that liquidity (CR), leverage (DER), profitability (ROA), bond securities, and maturity all affect the bond ratings of bond-issuing entities.

INTRODUCTION

When an organization needs funds to carry out its business activities, the capital market can serve as a link between parties with surplus funds (investors) and parties who need funds (issuers). In the capital market, two types of instruments are in the greatest demand, namely stocks and bonds. Bonds are more attractive to investors than stocks in terms of security (Purwaningsih, 2008). Bond investors need good-quality financial information about the company to use as a reference in making investment decisions. For investors who intend to buy bonds, the main instrument used to gauge the quality of a bond and the risks they will face when investing in it is the bond rating, which provides information and signals to investors. Two groups of factors are associated with bond ratings: financial factors, such as liquidity, leverage, and profitability, and non-financial factors, such as the age of the bond and its guarantees. This study analyzes the influence of financial and non-financial variables on bond ratings, with two practical benefits: serving as a reference for assessing bond quality before investing, and building an understanding of the various factors that affect bond ratings in the capital market.

LITERATURE REVIEW

Bonds and Bond Rating

According to the idx.co.id website, a bond is a tradable medium-term debt security that contains a promise from the issuer to pay interest to the buyer of the bond over a specified period and to repay the principal at a later date. According to Jogiyanto (2015), bond ratings are symbols used by rating agencies to describe the risk of a bond.

Signaling Theory

Signaling theory explains how signals of management success or failure are communicated to the owners; it is closely related to information asymmetry. The positive aspect of signaling theory is that companies providing good information set themselves apart from companies that lack good news by informing the market about their condition; a signal of good future performance given by a company whose past financial performance was poor will not be trusted by the market (Wolk & Tearney, 1997). A signal is understood as an action taken by the company (its managers) toward outside parties (investors).
These signals can take various forms, some directly observable and others requiring deeper study to discern. Regardless of their form or type, all signals are intended to convey something in the hope that the market or external parties will change their valuation of the company. The chosen signal must therefore carry information content capable of changing the assessment made by the company's external parties. In brief, signaling theory holds that the company's management, as the signaling party, provides the company's financial statements and non-financial information to the selected rating agency. The bond rating agency then carries out the rating process in accordance with its procedures, issues the bond rating, and makes it public. The bond rating serves as a signal of the company's likelihood of default on its debt repayments (Widowati et al., 2013). Information in the form of published bond ratings is expected to signal a company's financial health and illustrate the prospects associated with its indebtedness (Sari, 2007). On the basis of the bond rating, potential investors can make an informed decision to buy or to decline the company's bonds.

PT Pemeringkat Efek Indonesia (PEFINDO)

The main goal of PT PEFINDO is an objective, impartial, and credible public assessment of the credit risk of debt securities. PEFINDO uses ratings denoted by letter codes such as idAAA, which indicates the lowest bond risk (the highest rating), and idBBB, which indicates a comparatively higher risk (a lower rating).

Financial Factors

The expected relationships among financial statement data are measured using financial ratios to provide more meaningful information. In this study, three financial factors are applied: liquidity, leverage, and profitability. According to Kasmir (2012), liquidity is a ratio that describes a company's ability to meet its immediate (short-term) obligations. This ratio shows whether a company is liquid, that is, whether its current assets exceed its current liabilities. The current ratio is the liquidity measure used in this study.

Leverage Ratio

The leverage ratio indicates the ability of the organization to meet its long-term obligations; the risk an organization must bear decreases as this ratio decreases. The debt-to-equity ratio, which compares debt to equity and indicates a company's ability to meet its obligations with its existing equity, is the leverage measure used in this study.

Profitability Ratio

The profitability ratio assesses a company's ability to generate profits and the rate of return on investment; it reflects the ability of a business to make money using its resources. The measure used in this study is the return on assets.

Non-Financial Factors

Two non-financial elements are considered in this study: the guarantee (bond security) and the age of the bond (maturity).

Bond Securities

Bonds that include additional collateral from a third party or specific assets of the issuer are known as secured bonds. Compared with secured bonds, unsecured bonds are riskier.

Bond Age (Maturity)

The age (maturity) of a bond is a non-financial characteristic that indicates the time remaining until the bond's maturity date, that is, until the day the bondholder receives the principal or face value of the bond.
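As an illustration of how the three financial ratios defined above could be computed from statement items, consider the short Python sketch below; the figures are hypothetical examples, not drawn from the study's sample.

# Illustrative computation of the three financial ratios used in the study.
# The statement items and values below are hypothetical examples.

def current_ratio(current_assets, current_liabilities):
    """CR: ability to cover short-term obligations with current assets."""
    return current_assets / current_liabilities

def debt_to_equity(total_debt, total_equity):
    """DER: proportion of debt relative to equity."""
    return total_debt / total_equity

def return_on_assets(net_income, total_assets):
    """ROA: earnings generated per unit of assets."""
    return net_income / total_assets

cr = current_ratio(1_200.0, 800.0)      # e.g. 1.50
der = debt_to_equity(2_400.0, 1_000.0)  # e.g. 2.40
roa = return_on_assets(150.0, 3_000.0)  # e.g. 0.05
print(f"CR={cr:.2f}, DER={der:.2f}, ROA={roa:.3f}")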
Hypothesis

The liquidity ratio reflects a company's ability to meet its short-term obligations on time (Fahmi, 2011). Liquidity is determined by the size of current assets, namely assets that can easily be converted into cash, such as liquid securities, receivables, and inventories. The higher a company's liquidity, the better its ability to meet its short-term obligations. Lenders regard the most liquid assets as the main source of payment of principal and interest on the securities they finance (Joseph, 2002). Thus, the more liquid assets a company holds, the more this indirectly supports the repayment of its long-term obligations (bond redemption), which is expected to reduce the risk of default and thus improve the company's bond rating. Hafidania and Hakiman (2020) argue that liquidity has a positive effect on bond ratings, and Sufiyanti and Wardani (2016) likewise show a positive effect of liquidity on bond ratings. Thus, this study proposes the following hypothesis: H1: liquidity affects bond ratings.

The leverage ratio shows the extent to which debt is used in financing investments (Raharja, 2015); it is measured here using the debt-to-equity ratio. The higher a company's leverage ratio, the greater its risk of bankruptcy; the lower a company's leverage, the higher its rating (Burton, 1998). The lower the ratio, the smaller the share of assets financed by debt. A high level of leverage is unfavorable because of the interest burden on the debt; if extreme leverage leaves a company unable to pay all of its obligations (including its bonds), the bond rating will suffer. Thus, the lower the leverage ratio (DER), the higher the bond rating. Widowati et al. (2013) found that leverage has a negative effect on bond ratings, and Novita (2018) likewise shows a negative impact of leverage on bond ratings. Thus, this study proposes the following hypothesis: H2: leverage affects bond ratings.

The profitability ratio measures a company's ability to generate profits (earnings) at a given level of sales, assets, and equity; this study uses the return on assets. Purwaningsih (2008) argues that the higher the level of profitability, the lower the risk of insolvency or default. The higher the profitability, the higher the rating a company can attain. Widowati et al. (2013) found that the profitability ratio has a positive effect on bond ratings, as did Kurniawan and Suwarti (2017). Thus, this study proposes the following hypothesis: H3: profitability affects bond ratings.

Bond security is an important aspect of a bond because a guaranteed bond reduces the risk of default for bondholders. Sumarto (2010) stated that a bond with a long maturity increases investment risk, since over a sufficiently long period adverse events may occur that reduce the company's performance; thus, bonds with shorter maturities are rated higher than bonds with longer maturities. Magreta and Nurmayanti (2009) found that bond security has a positive effect on bond ratings.
Sari and Sudjarni (2016) likewise found that bond security has a positive effect on bond ratings. Thus, this study proposes the following hypothesis: H4: bond securities affect bond ratings.

The age of a bond (maturity) refers to the date on which the bondholder will receive payment of the principal or face value of the bond, together with the periodic interest. Investors generally dislike bonds with longer maturities because the associated risk is also higher. Sufiyanti and Wardani (2016) show that maturity has a positive effect on bond ratings, and Arisanti et al. (2013) likewise found a positive effect of maturity on bond ratings. Thus, this study proposes the following hypothesis: H5: maturity affects bond ratings.

METHOD

This study focuses on financial accounting, using financial variables, namely liquidity (current ratio), leverage (debt-to-equity), and profitability (return on assets), and non-financial variables, namely guarantees (bond securities) and bond age (maturity), in explaining bond ratings. Data were collected from www.idx.co.id and www.pefindo.co.id. The study covers bond issuers listed on the Indonesia Stock Exchange and on the PT Pefindo rating list that published financial statements for 2019-2021. All bond issuers that registered their bonds with PT PEFINDO and have complete financial statements on the Indonesia Stock Exchange for 2019-2021 constitute the population of this study. To determine the sample, this study uses purposive sampling, a non-probability sampling method that applies specific selection criteria. The research data are secondary data from financial statements previously posted on the official website; the data used are quantitative, on a numerical scale. The data collection method is documentation, using the official website of the Indonesia Stock Exchange, www.idx.co.id; the data take the form of the annual reports prepared by the bond issuers for 2019-2021.

Data Analysis Technique

This study uses logistic regression analysis, because the dependent variable is a dummy variable.

Descriptive Statistical Analysis

Descriptive statistics summarize the data used and processed in this study, including the number of observations, the mean, and the standard deviation of each variable, broken down by the dimensions involved; the minimum and maximum values of the data can also be viewed.

Multicollinearity Test

Multicollinearity is a condition in which the independent variables are related to each other. The criteria for the multicollinearity test are as follows: if VIF > 10, multicollinearity occurs; if VIF < 10, multicollinearity does not occur. If the tolerance is > 0.10, there is no multicollinearity; if the tolerance is < 0.10, multicollinearity occurs.

Logistic Regression Analysis

Because the dependent variable is a dummy, logistic regression is used as the analytical tool to determine the degree of influence of the independent variables on the dependent variable, whose predicted probability lies between 0 and 1.
In this study, the logistic regression testing includes three analyses. The Hosmer-Lemeshow test checks the null hypothesis that the data fit the model, assessing goodness of fit on the basis of a chi-square value; if there is no significant difference between the model and the data, the model can be considered appropriate. To show that the regression model fits the data, the overall model fit is evaluated: in an acceptable regression model, the final -2 log likelihood value decreases, that is, the initial -2 log likelihood value is higher than the final one. To determine the coefficient of determination of the logistic regression model, the Nagelkerke R-squared test is performed.

Hypothesis Testing

Simultaneous significance test: the Omnibus Test of Model Coefficients (likelihood-ratio statistics) table is used to view the results of the logistic regression testing, in particular the simultaneous effect of the independent variables on the dependent variable. Partial significance test: the probability value (prob.) method is used when testing partial effects; the condition for accepting or rejecting a hypothesis is that H0 is accepted at a significance value > 0.05 and Ha is accepted at a significance value of at most 0.05.

The dependent variable (Y) has a minimum value of 0.000 for Mandala Multifinance, which received an idBBB rating for three consecutive years from 2019 to 2021, and a maximum value of 1.000 for Astra Sedaya Finance, which received an idAAA rating for three consecutive years over the same period. There are five explanatory variables. Liquidity (X1) has a minimum value of 0.120, for Adhi Karya (Persero) in 2019, and a maximum value of 2.250, for Mandala Multifinance in 2020. Leverage (X2) has a minimum value of 0.800, for Mandala Multifinance in 2020, and a maximum value of 10.540, for BJBR in 2021. Profitability (X3) has a minimum value of -0.016, for PT Mandiri Tunas Finance in 2020, and a maximum value of 0.344, for PT Astra Sedaya Finance in 2021. Bond securities (X4) has a minimum value of 0.000 in 5 companies and a maximum value of 1.000 in 5 companies. Maturity (X5) has a minimum value of 0.000 in 4 companies and a maximum value of 1.000 in 6 companies.

The results show that the model chi-square value is 31.293 with a significance level of 0.000 < 0.05, indicating that the independent variables in the logistic regression simultaneously affect the dependent variable. Liquidity partially affects bond ratings, as shown by the first hypothesis test, for which the Wald test yields a significance level of 0.010 < α (0.05). For the second hypothesis, with a significance level of 0.004 < α (0.05), leverage partially affects the bond rating. For the third hypothesis, with a significance level of 0.025 < α (0.05), profitability influences bond ratings. For the fourth hypothesis, with a significance level of 0.007 < α (0.05), bond securities partially affect the bond rating. For the fifth hypothesis, with a significance level of 0.000 < α (0.05), maturity partially affects the bond rating.
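Before turning to the estimated equation, a minimal sketch of this estimation pipeline is given below using Python's statsmodels. The data are synthetic placeholders generated so the model converges, not the study's 33 observations, and statsmodels reports McFadden's pseudo R-squared rather than the Nagelkerke measure used in the paper.

# Hedged sketch of the pipeline above: VIF multicollinearity check, then a
# logit model with a dummy bond rating (1 = idAAA, 0 = idBBB). The data are
# synthetic placeholders, not the study's sample.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "CR":  rng.uniform(0.1, 2.5, n),     # liquidity (current ratio)
    "DER": rng.uniform(0.5, 11.0, n),    # leverage (debt-to-equity)
    "ROA": rng.uniform(-0.05, 0.35, n),  # profitability (return on assets)
    "SEC": rng.integers(0, 2, n),        # bond security dummy
    "MAT": rng.integers(0, 2, n),        # maturity dummy
})
# Synthetic ratings whose coefficient signs mirror the equation reported below.
z = 4.0 - 1.5*df["CR"] - 0.3*df["DER"] - 4.0*df["ROA"] + 1.2*df["SEC"] + 1.0*df["MAT"]
df["RATING"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-z))).astype(int)

X = sm.add_constant(df[["CR", "DER", "ROA", "SEC", "MAT"]])

# Multicollinearity check: VIF < 10 (tolerance > 0.10) indicates no problem.
for i, col in enumerate(X.columns):
    if col != "const":
        print(col, round(variance_inflation_factor(X.values, i), 2))

model = sm.Logit(df["RATING"], X).fit(disp=False)
print(model.summary())                 # Wald z-statistics and p-values
print("McFadden pseudo R2:", round(model.prsquared, 3))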
The resulting logistic regression equation is as follows:

Bond Rating = 132.607 - 92.068 X1 - 9.422 X2 - 45.653 X3 + 51.481 X4 + 41.332 X5

Effect of Liquidity on Bond Ratings

The results show that liquidity affects bond ratings: a one-unit increase in liquidity lowers the (log-odds of the) bond rating by 92.068. Here, liquidity is measured by the current ratio, so the results show that the liquidity variable measured by CR affects bond ratings; that is, the higher a company's CR, the lower the bond rating it receives. Liquidity shows how well a company can repay its short-term debt, but companies with high liquidity will not necessarily repay their obligations on time. This is because the CR includes inventory, which cannot always be converted into cash immediately, so the current assets of loss-making companies may include, for example, inventory whose turnover is slow and which accumulates as unsold stock.

Effect of Leverage on Bond Ratings

The results show that leverage, measured by the debt-to-equity ratio (DER), has an impact on bond ratings: if leverage increases by one unit, the bond rating decreases by 9.422. The leverage ratio shows the company's ability to meet long-term obligations; the lower this ratio, the less risk the company must bear. Leverage measures the share of debt used in financing investments, as captured by the DER. If the proportion of debt a company holds is higher than its equity, the company generally has a low ability to meet its obligations, and high leverage indicates a high risk of financial default. These results are consistent with those of Widowati et al. (2013), Sari et al. (2016), Kurniawan and Suwarti (2017), and Novita (2018), who show that leverage affects bond ratings; however, they conflict with Magreta and Nurmayanti (2009), who found that leverage does not affect bond ratings.

Effect of Profitability on Bond Ratings

The results show that profitability affects bond ratings: the profitability variable has a negative impact, meaning that if profitability increases by one unit, the bond rating decreases by 45.653. One interpretation of this result is that profitability is better measured using ROE, since ROE reflects the company's ability to generate net income from its equity and is more strongly associated with receiving a high bond rating, whereas ROA has only a small effect; that is, whether the value of ROA is small or large, it has little influence on the ratings issued by rating agencies (Hasan and Dana, 2018). A further reason supporting this result is that measuring profitability by ROA is less appropriate here: ROA shows the results (profitability) of using the company's assets, whereas rating agencies base their assessment on the results of the company's core business activities. These results are consistent with those of Widowati et al.
(2013), Septyavanti (2013), Magreta and Nurmayanti (2009), Siagian (2016), and Kurniawan and Suwarti (2017), who show that profitability affects bond ratings; however, they conflict with the results of Novita (2018), which show that profitability does not affect bond ratings.

Effect of Security on Bond Ratings

The test results show that bond security has an impact on bond ratings: if the bond securities variable increases by one unit, the bond rating increases by 51.481. This indicates that a high level of bond security influences the assigned rating, because the level of risk contained in a bond is affected by its security: unsecured bonds carry a higher risk than guaranteed bonds. The positive effect of bond security is supported by the value of the collateral the company pledges against the issued bonds, as the collateral value exceeds the value of the bonds issued. These results are consistent with those of Arisanti et al. (2013), Magreta and Nurmayanti (2009), Siagian (2016), and Sari and Sudjarni (2016), who show a significant positive impact on bond ratings; however, they contradict Widowati et al. (2013), who found that bond security does not affect bond ratings.

Effect of Maturity on Bond Ratings

The test results show that maturity affects bond ratings: if the maturity variable increases by one unit, the bond rating increases by 41.332. This indicates that whether a bond's term is long or short has a real bearing on its rating. A bond's age (maturity) is, more specifically, the period of time until the bondholder receives the principal or face value of the bond they hold. Bonds with shorter maturities are considered less risky than long-term bonds, and this is reflected in bond ratings. These results are consistent with those of Arisanti et al. (2013), Kurniawan and Suwarti (2017), and Siagian (2016), who show that maturity affects bond ratings; however, they contradict Widowati et al. (2013), Magreta and Nurmayanti (2009), and Sarifuddin et al. (2012), who show that maturity does not affect bond ratings.

Conclusion

Among the financial variables, bond ratings are significantly affected by liquidity (CR), leverage (DER), and profitability (ROA). Among the non-financial elements, bond ratings depend strongly on the evaluation of the bond securities and the maturity of the bonds.
SUBSTANTIATION OF THE METHODOLOGICAL EXPEDIENCY TO USE THE METHOD OF WRITING ESSAYS ON SOCIAL AND PEDAGOGICAL TOPICS

The article presents an attempt to provide a comprehensive substantiation of the feasibility of using the method of writing essays on social and pedagogical topics during the teacher training process. Based on the interpretation of the essence of the method and its functional purpose in philosophical and linguistic literature, as well as in works on literary studies, the author determines the foundations for adapting the essay method to the specific features of its effective use in the educational process and in pedagogy as an academic discipline. Specific examples demonstrate the priorities of the essay method in developing social competence in particular and the social reflection of the future teacher in general. The article notes the strengthened role of the student's self-education and self-assessment activities, the development of creative reflective thinking, and the enhanced ability to make effective decisions on pressing problems of both professional and social significance.

Methods that actualize activity and communication, and that aim at enhancing educational and cognitive activity, developing the student's creative potential and productive critical thinking, are becoming especially relevant. Such methods include the essay (French essai, "attempt, test, assay"), whose main purpose is to present the results of an independent interpretation of a problem based on the integration of the processed educational material and the personal experience of its comprehension through the prism of the social realia that make up the life content of the student during professional development, with a projection onto future teaching activity. The essay, both as a genre and as a learning method, is the subject of research interest for philosophers, literary critics, sociologists, psychologists, and educators. Different aspects of the problem are investigated by N. Khamitov, H. Rudenko, and K. Shenderovsky, including the interpretation of the nature of the method and its functional purpose in various spheres of activity, with particular emphasis on education. The philosophical interpretation of the functional purpose of essayism, and hence of the essay method, can be determined from the viewpoint of Nazyp Khamitov, who interprets essayistic philosophy as philosophy that comprehends reality in the harmony of the conceptual and the imaginative principles, in a sense-bearing image. Essayistic philosophizing therefore appears incomplete and open, an aphoristic journal of an individual, touching on the eternal problems of man and the world rather than on topical issues [3]. Additionally, the essay as a method that induces a subjective assessment of a particular problem, activates creative thinking, and calls for a suitable written representation is interpreted by H. Rudenko: "The essay implies the expression of the author's point of view, a personal subjective assessment of the subject of reasoning; it provides an opportunity for non-standard (creative), distinctive coverage of the information; often it is a conversation aloud, an expression of emotions and imagery. It is also a free style with possible elements of improvisation, certain pathos, and irony. However, all this results in different interpretations of this type of written work, as well as various attempts to formalize it" [1, p. 15].
In the context of the rationale for using the essay, K. Shenderovsky points to the development of the linguistic literacy and competence of its author: "The use of an essay contributes to a clearer and more competent formulation of thoughts; it helps organize thoughts in a logical sequence; it involves fluency in the language of terms and concepts; it reveals the depth and breadth of educational material; it teaches the use of examples, quotes, and the necessary arguments related to a particular topic; it allows for the comparison of facts, approaches, and alternatives, the formulation of conclusions, and personal assessment" [4]. The purpose of the article is to establish the professional-educational and social-pedagogical, and in this context methodological, rationale for using the method of writing essays on social and pedagogical topics, based on a comprehensive analysis of its nature and general functional purpose.

Presentation of the basic material. The modernization of the national educational system, including vocational and pedagogical education, takes place under the influence of world tendencies related to the need to form an educational space, a supportive, professionally and socially oriented (sociodidactic) environment, as the basis for the personal self-actualization of each individual and for the formation of an individual trajectory of professional development grounded in personal characteristics and potential opportunities for development. The features of competence-based education determine the need to search not only for the content but also for the forms and methods of the student's self-actualization and of shaping future professional activity on the basis of the "I-concept" and its personality-centered methodology and style. The philosophical interpretation of the phenomenon has found variants of objectification in various spheres of activity, including the educational and professional ones. Its use is associated with up-to-date tendencies in the development of education, in particular the provision of a personality- and competence-based orientation, the formation of conditions for student self-actualization, and the contribution to identifying an individual trajectory of professional development in accordance with the capabilities of everyone. According to K. Shenderovsky, "Individualization of modern social relationships and, as a result, individualization of modern vocational education are open to the application of new effective forms of activity, including educational ones. That is why the essay (a transformed, adapted, modified genre of literature) is gaining popularity as a kind of written independent work of the Ukrainian student, namely as a small reasoning composition with a free structure that expresses individual impressions and thoughts on a particular issue or problem and consciously does not claim to be complete and exhaustive in interpreting the topic" [4, p. 7]. Therefore, integrally in teaching, the essay contributes to raising the level of personality-oriented education in general and to individualizing the trajectory of the student's professional development and forming their social competence in particular; it is also the subject of transformation and adaptation processes and, most importantly, of the optimization of interdisciplinary interaction at the content and procedural levels.
The essay is also understood as one of the effective methods of vocational and pedagogical education with a high potential for social and life creativity, elaborated by strengthening the subjective understanding of problems and situations (an attempt at independent analysis, substantiation of a theoretical hypothesis, etc.). In terms of its form, the essay is defined as a small prose reasoning composition with a free structural and compositional organization. The essay shares characteristics with the philosophical treatise, the reference paper, and the scientific article, and, in literature, with the composition, the epic, and the lyric, pointing to its associative approach, verbalization of thinking, imagery, personal orientation, and the author's presentation of a personified vision of an image, problem, etc. A social and pedagogical essay is of particular value in the professional development of the teacher, who a priori comprehends the future profession in the complex of its theoretical and practical foundations, integrated into the social and cultural space, by correlating objectively given educational information with individual reasoning about its nature through the prism of personal ideas and the experience of interpreting a specific issue or problem; it does not involve a full coverage of their content. Since the essay is an independent creative written work, a form of personality-oriented understanding of a given problem, it implements the functions related to the actual formation of professional competence based on the "I-concept," resorting not only to self-determination in its context, but also to programming algorithms for searching for answers to questions, solving problems, and organizing activities. Within educational activity, the essay actualizes such processes as analysis, synthesis, and the creative application of knowledge about a specific professionally and socially significant problem. As part of the methodological rationale for using the method of writing social and pedagogical essays, and from the perspective of the above-mentioned priorities, it is advisable to compare educational works that are similar in form of presentation: the reference paper (the information-reproductive system of education) and the essay (personality-oriented, competence-based). While the primary goal of writing a reference paper is mainly work with educational information (its selection, structuring, synthesis, and conclusions) together with the deepening of knowledge about a particular topic, essay writing shifts the emphasis to the student's self-determination in its context and to positioning a variant of its personal interpretation. This is implemented through the integration and interaction of objectively defined educational (scientific) information with a personal concept and format of worldview, linking such information to the life space and building associative flows of comprehension through the prism of the value priorities of the current social environment and personal life experience. Under such conditions, a pedagogical essay naturally gains features of social orientation, without which there is no personal self-determination of the student.
It should be noted that the point is not to replace traditional forms of work with educational information, but to justify their combination, since reference papers provide a rather effective way to form basic knowledge by actualizing methods of working with literature sources, analyzing, comparing, and systematizing information from various fields of knowledge, and linking theoretical material to practical experience (as a rule, not personal experience, but that presented in the literature). As for the guiding templates for writing the above types of work: in the classical version they are more formalized, and their observance is perceived as a quality criterion; in the innovative version they function as recommendation and regulatory mechanisms that optimize the methodological support of the process. The criteria lie in the plane of the depth and awareness of knowledge of the material needed to design an independent search for answers to questions and personal self-determination on the topic, without claiming its comprehensive interpretation. At the same time, the adequacy and validity of the theoretical basis, the substantiation of the items, the reasoning of the conclusions, and the level of appeal to personal life and social practice are evaluated. It should also be noted that a modern teacher should have a high level of pedagogical and methodological competence regarding innovative educational technologies based on analytical and synthetic activity and creative approaches to their implementation. Innovative teaching methods, which rightfully include the essay, as mentioned above, are of double value: on the one hand, they are an effective form of vocational training; on the other hand, they form the ability and the methodological and practical readiness to apply such teaching methods in the future profession through their active application in the teaching of academic disciplines and in educational work. Essays on social and pedagogical topics define the content through which the student perceives the future profession (as well as professional educational activity) as a significant component of the life and social space, associating their own social values with the professional ones, and thus perceiving future professional activity as the basis for personal self-actualization. The high educational potential should also be taken into account, since personality-oriented judgments determine the need for axiological measurements, self-analysis, self-determination within the problem, and appeal to the personal qualities required to take the correct professional and social position. The theoretical analysis of the problem and the results of investigational studies made it possible to determine the methodological validity of applying the method of essays on social and pedagogical topics in the system of teacher professional training and to highlight the priorities of its practical use.
Integrally, the essay method:
− programs a form and methodology of processing educational information that provides a high level of its comprehension and the ability to apply it appropriately in a context associated with the student's personality, life values, and senses;
− is a form of combining theory and practice within a personality-oriented teaching methodology, in which the theoretical material is perceived as significant information that forms the foundation for independent research work and the theoretical basis both for forming personal social and pedagogical projects and for providing valid justification and substantiation, including the ability to independently prove the claimed viewpoints;
− contributes to increasing the student's cognitive interest in educational activity and stimulates its perception as an effective form of searching for answers to questions that are current and professionally and socially significant for the student;
− promotes self-actualization in the course of the preparatory work and the writing of an essay, focusing on the problems that are the subject of the student's interest, informal spontaneous reflections, and searches for answers;
− contributes to optimizing the methodology of developing the student's cognitive activity and its associativity by improving the ability to analyze educational information, interpret it, compare facts, and construct their own reflections on the theoretical foundations of pedagogical science, integrating educational and social components and transforming the methods of reflection (analysis) for their own life space; it fosters the possibility of a non-standard (creative), unexpected interpretation of the educational material, adapting it to the situation and content and filling it with personal senses and priorities;
− promotes the development of logic and speech culture, the search for personality-oriented models, styles, and forms of verbalizing one's position, and the substantiation and persuasiveness of the conclusions drawn;
− is a strong incentive for independent educational and self-educational activity, addressing the need for the knowledge (educational information) that was lacking when comprehending the given problem;
− promotes the development of social and pedagogical competencies and personal qualities that are comprehended in the appropriate professional content and become the subject of self-awareness, self-determination, auto-evaluation, etc.;
− brings variety to the methods of educational activity and thus provides a change of activities within an organized educational process, combining informational-reproductive, productive, interactive, and creative teaching methods;
− provides multifunctionality of the educational process, since the essay can be a form of comprehending the educational material and elaborating variants of its practical implementation, a form of self-determination regarding current problems, and a form of controlling and assessing the level of competency development, indicating the existence of knowledge, its awareness, activity, and readiness for creative use in problematic non-standard contexts and situations (fluency in the language of pedagogical theory);
− is a form of developing the personality-oriented methodology and style of the future professional activity.
In general, the essay is a way to center on issues of particular relevance in the process of professional development and living, which are the object of permanent searches both in the mode of organized learning activities and in independent work with various sources of information and spontaneous communication with people connected by a common problem. In this context, we analyze the content of essays or their elements, including those that have a long history and are still relevant. We consider it expedient to show the philosophical nature of individual insights of various authors regarding a pedagogical problem that necessarily has a social context and that is, as a rule, a problem without a clear-cut answer, one that needs to be understood on the basis of personal senses and the current context, both of which tend to vary permanently. Additionally, history shows that the problems that were the subject of reflection by philosophers and educators in different historical epochs have not lost their relevance to this day, especially at the stage of comprehending the new competence-oriented educational paradigm. J. Locke, "On Education": "I tend to think that the knowledge we acquire in this world does not go beyond this life. Saving insight into another life does not require the help of this dim twilight; but whatever it may be, I am sure that the main goal for which we must learn here is to use it for the sake of our prosperity and the welfare of others in this world. But if we lose our health acquiring such knowledge, we work for the sake of things that will be of no use when we reach them; if, exhausting our bodies (though intending to make ourselves more active people), we lose the ability and opportunity to do the good work of which we are capable with less talent, endowed by God, by denying ourselves the power to improve this talent, which is available to people with a stronger body build, we greatly reduce our service to God and deprive our nearest and dearest of all the help that we, being in a healthy state, though with moderate knowledge, would be able to provide them. The one who, overloading their ship, though with gold, silver, and precious jewels, sends it to the bottom, will present their master a bad report on their journey."
Priorities of the essay are provided through:
− interpretation of what genuine (true) learning is versus its formalized variant, imitation (merely going to school): the benefit or irrelevance of formal and actual learning;
− self-determination in relation to the true purpose of learning, its correlation with personal viewpoints, and the dynamics of its development in various educational systems (school, higher educational institution);
− reflection on the nature of education, its social value, its significance for the person, and the interpretation of education in the context of the human life space (pupil, student);
− the balance between individual abilities, talents, and potential of human development, on the one hand, and standardized norms and the volume and level of their acquisition, on the other; the interpretation of the nature of an individual trajectory of the educational process;
− understanding the theory and basics of the practical implementation of the principle of learning differentiation, focusing on the possible abilities of each individual;
− self-knowledge in order to determine the potential for professional and personal development and personality-oriented guidelines concerning the level of achievement and the personalized way of reaching it;
− interpretation of the health-saving potential of learning, its relevance, and its methodological support.

V. Rozanov, in "Fallen Leaves": "The everyday rule saying that children should respect their parents, and parents should love their children, should be read conversely: it is parents who should respect their children, respect their unique small world and their passionate nature that is ready to take offence at any moment; and children should only love their parents, and they will surely love them if they feel that respect to themselves."

Priorities of the essay are provided through:
− reflection on the problems of relationships between parents and children through the prism of the classical behavior patterns presented in pedagogical literature;
− reflection on the problems of relationships between parents and children, using personal experience of family relationships and the state of health, psychological comfort, and conditions for self-actualization in their context;
− identification of the problems that bring conflict into the relationships between children and parents, and between schools and parents, in comparison with classical patterns of behavior;
− determination of the fundamental nature of such concepts as "respect" and "love" and the mechanisms of their formation and interdependence in the pedagogical, social, and life continuum;
− designing models for the objectivation of a constructive system of relationships and providing them with theoretical substantiation by adapting them to a real, socially defined life situation.

Social and pedagogical essays may be formed on the basis of ideas, viewpoints, and quotations, which provide for interpretation and methodological support regarding their internalization in personal (professional, social, life) practice. V. Sukhomlynsky: "The entire school life should be permeated with the spirit of humanity" [Vol. 4, p. 496]; "In the first place knowledge is needed to become a Man, a Citizen of your Motherland, an educated person, a creator, a father, a mother" [Vol. 5, p. 340]; "Teaching should have individualization: both in the content of mental labor (in the nature of tasks) and in time" [Vol. 2, p. 468]; "To educate the organic need for self-education, the desire to acquire knowledge life-long" [Vol. 4, p. 9];
9]; "The pupil is not a passive object of teaching, but an active creative force, an active participant in the process of mastering knowledge" [Vol. 4, p. 214]; "To acquire knowledge means to discover the truth, cause-and-effect and other various connections" [Vol. 5, p. 366]; "We should put a vivid idea, the living word and creativity of the child into the basis of the education system" [Vol. 5, p. 340]; "The path from comprehending the facts, things, phenomena to deep understanding of an abstract truth (rules, formulas, law, words) lies through practical work that is precisely the mastering of knowledge" [Vol. 2, p. 456]; "In many children and adolescents, who are intelligent and gifted from birth, the interest in knowledge acquisition awakens only when their hand, tips of their fingers are included in creative work" [Vol. 2, p. 483]. Priorities of the essay are provided through: − the ability to isolate from a given context a basic problem that determined the author's viewpoint, the ability to find it during real practical work and determine its relevance; − correlation of their empirical representations about the problem, based on own experience of educational activity, revealing basic contradictions in its context and their substantiation; − provision of a theoretical basis for interpreting the nature of the problem, mechanisms, patterns, principles that determine them, forming a model for the practical implementation of the viewpoint of the educator-scientist in a particular educational situation. Conclusions. Thus, the methodological rationale for the essay is as follows: 1) it ensures transformation of the goal into the plan: development of teaching methods focusing on a specific intention (personality-oriented goal), which provides an opportunity to focus on aspects of the problem of a particular significance for the student in the process of professional development; 2) the technique of writing a creative work is worked out, directing it to a definite expected result from a problem that is burning for the student; 3) the method of work with theoretical material is optimized with the further formation of theoretical constructs, on the basis of which the work on its implementation into a practical plane on a personality-oriented basis is programmed; 4) increased level of efficiency of the methods of algorithm development for thinking activity in situation of theoretical and practical study of the problem, self-determination in its context; 5) the functions of the student's self-education, selfassessment activities are enhanced, which is manifested in the ability to make effective decisions regarding a pressing problem of both professional and social significance.
HUMAN CAPITAL, INSTITUTIONAL ECONOMICS AND ENTREPRENEURSHIP AS A DRIVER FOR QUALITY & SUSTAINABLE ECONOMIC GROWTH

The Indonesian government's policy of encouraging sustainable economic growth in order to reduce unemployment, poverty, and inequality is threatened with failure, because economic growth has neither reached its targets nor been of high quality. The purpose of this research is to explain four pillars of growth and development, namely human capital, social capital, institutional economics, and entrepreneurship, as the main drivers of quality and sustainable economic growth. The research uses primary data on entrepreneurship and SMEs in the provinces of Central Java and Yogyakarta, with the correlational form of recursive model path analysis as the analytical method. The results show the very strong role of human capital as the main key in driving economic growth, both directly and indirectly. The existence of human capital and social capital further encourages new economic institutions; in turn, new economic institutions encourage the competitiveness of productive entrepreneurship and high-quality, sustainable regional economic growth. The policy implication is that high-quality and fundamentally sustainable economic growth must be built on the basis of the four main pillars, namely human capital, social capital, institutions, and entrepreneurship, in order to be more successful in reducing the development problems of unemployment, poverty, and income inequality.

Introduction

Indonesia's economic growth is largely supported by foreign investment and an unsuitable consumption sector, resulting in low-quality, high-cost economic growth. In modern economic theory, quality economic growth is determined by technological factors and the accumulation of human capital as the main determinants in industry and in the economy as a whole (Prasetyo, 2008; Ganeva, 2010; Acemoglu, 2014). The argument is that human capital creates efficiency, effectiveness, creativity, innovation, and better productivity. Over the last few decades much economic research has focused on the accumulation of human resources and its impact on the economy. Theoretically and empirically, human capital is conclusively believed to be positively associated with economic growth (Altinok, 2007; Hanushek, 2007; Prasetyo, 2008, 2019; Ganeva, 2010; Acemoglu, 2012, 2014; Skare, 2015; Ali, 2018; Baltgailis, 2019; Vigliarolo, 2020). That is, the human capital factor has long been believed, theoretically, to be positively associated with quality and sustainable economic growth; empirically, however, this relationship does not always hold, for several reasons (Afzal, 2010; Pelinescu, 2015). Afzal et al. (2010) argue that the relationship between school education and economic growth is negative in the short term. Meanwhile, Ramos et al. (2009) explained that negative influences on unemployment can be attributed to an oversupply of tertiary education that does not meet the needs of the regional labor market. Furthermore, Pelinescu (2015) found a negative influence of the human capital endowment factor on growth and unemployment, especially in agricultural areas, arguing that part of the highly educated population living in agricultural areas works elsewhere, in areas close to the city.
However, the studies that report a negative relationship between human capital and economic growth generally rely on very limited data and simple measurement dimensions; for example, the human capital factor is often measured only by the level of education, which is not representative of the dimensions of human capital. Cohen's research (2007) found a strong and significant positive relationship between human capital and economic growth, and confirms that limited and poor human capital measurement models produce poor results. In addition, Estrin's research (2016), using multilevel human capital measurement dimensions, found that specific entrepreneurial human capital is relatively more important in commercial entrepreneurship, while general human capital is more important in social entrepreneurship, and that the influence of human capital depends on the rule of law (the institutional system). Estrin (2016) explains that, because the information content of measuring the human capital dimension by education level is low, the previous literature has found that increasing education is unrelated to economic growth. Furthermore, the results of Ali's recent empirical research (2018), based on data for 132 countries over 15 years, also show that human capital plays a positive role in GDP growth per capita and is strongly and positively related to economic growth. The empirical fact is that economic opportunities strengthen the influence of human capital and of business and trade growth, domestically and internationally. Ali's research (2018) also found that the inconclusive results of previous empirical studies of human capital and growth may be due to omitted-variable bias, because those studies did not include variables related to social capabilities. Thus, this article tends to support a positive and significant link between human capital and economic growth, both theoretically and empirically. The objective of the policy strategy for reducing unemployment, poverty, and inequality of income distribution is difficult to achieve without quality, high, and sustainable economic growth driven by the capacity of smart, skilled, knowledgeable, inclusive, creative, innovative, productive, and adaptive human capital (Prasetyo, 2008; Cadil, 2014). At present, the empirical fact is that many economic opportunities require human capital, and human capital in turn further strengthens economic growth, competitiveness, and social welfare (Prasetyo, 2019). The role of human capital is an important and significant key factor in promoting quality economic growth (Cohen, 2007; Estrin, 2016; Ali, 2018; Prasetyo, 2008, 2019). The novelty of this article lies in describing the important role of human capital in creating new economic institutions, which in turn encourage entrepreneurial competitiveness and quality economic growth in a sustainable manner. This research article uses fundamental micro empirical data; as a novelty in the dimensions of these data, the human capital variables are measured more comprehensively through the ratio dimensions of education level, skill, experience, productivity level, and maturity level (Prasetyo, 2017, 2019).
If the Government of Indonesia's policy strategy for encouraging economic growth to reduce unemployment, poverty, and inequality is driven only by foreign investment and consumption, without greater attention to the potential accumulation of human capital capacity, then the strategy will never succeed and will clearly fail again (Prasetyo, 2011, 2019). High-quality, modern, and sustainable economic growth must be created by quality factors and human capital capacities that are accommodating and driven by an entrepreneurial culture (Prasetyo, 2008, 2011, 2019), since there is no significant economic growth in any country without adequate human resource development (Sankay, 2010). In addition, the results of Doran's research (2018), using macro data from the GEM (Global Entrepreneurship Monitor), show that entrepreneurial attitudes stimulate GDP per capita in high-income countries only, while entrepreneurial activities have negative influences on middle- and low-income economies. Meanwhile, Boudreaux (2019), also using GEM data and measuring entrepreneurship and institutions with the EFC (Entrepreneurial Framework Conditions), found that entrepreneurship encourages economic growth only in developed countries, not in developing countries, and that a country's institutional environment likewise contributes to economic growth only in more developed countries. The results of these studies (Doran, 2018; Boudreaux, 2019) are in fact important arguments for the urgency of the present article and of future research. Doran (2018) also recognized that different aspects of entrepreneurship affect growth differently, but micro data were not available in GEM, and recommended further development of GEM data at the regional level to facilitate regional entrepreneurship analysis. The research conducted by Dvoulety (2018) likewise failed to prove an impact of entrepreneurship on the HDI (Human Development Index), and recommended that much work remains to be done to better understand the various forms of entrepreneurial activity in developing countries, their institutional context, and their relation to regional economic development. For this article, we therefore set out the disposition, novelty, and urgency of using fundamental micro empirical survey data on MSME entrepreneurship households at the regional level of the DIY and Central Java Provinces of Indonesia to analyze macroeconomic data, specifically the quality of economic growth addressed in this article.

Literature review

Theoretically and empirically, economic growth is largely determined by the amount of investment. Investment takes many forms, physical and non-physical, and increasing the capacity of human capital is a type of non-physical investment that requires a long process and economic freedom in order to develop well. Empirically, combining both types of physical and non-physical investment can increase economic growth, create employment opportunities, and reduce poverty (Seran, 2018).
Human capital investment theory is often based on empirical evidence that "educated and skilled individuals" almost always tend to produce better outcomes than others. The basic concept of the theory has now been increasingly developed and applied consistently in the field of entrepreneurship. According to Davidsson (2003), from a theoretical perspective, understanding the relationship between the exploitation of social capital and human capital is an important area for future research. Davidsson (2003) recommended that advancing our understanding of the role of social capital, human capital, social relations and nascent entrepreneurial networks, and learning the best ways to facilitate them, is an important activity for future entrepreneurship research. In an economically free society, every individual in the community succeeds or fails based on their own individual efforts and abilities, while free and open community institutions do not discriminate against them. The results of Boudreaux's research (2019) provide suggestive evidence that economic freedom not only channels individual effort into productive entrepreneurial activities, but also influences the degree to which individual socio-cognitive resources tend to be mobilized and lead to entrepreneurship and high economic growth. Human development and democratic progress are the main keys to economic freedom. Feldmann (2017) empirically studied the impact of economic freedom on human capital investment and found a strong correlation between the two: economic freedom increases investment in human capital. Hindle et al. (2009) emphasized that the entrepreneurship development process is shaped by human resources; the capacity of human resources with knowledge, skills and self-efficacy can lead to entrepreneurial behavior. Based on recent literature, human capital and institutional factors encourage entrepreneurial opportunities to achieve higher levels of economic growth (Aparicio, 2016; Bjornskov, 2016; Bosma, 2018; Acs, 2018; Chitsaz, 2019). Aparicio (2016) found that informal institutions have a higher impact on entrepreneurial opportunities than formal institutions do. Regarding policy implications, Aparicio's results also show that economic growth can be obtained by encouraging the right institutions to increase entrepreneurial opportunities. Bosma's research (2018) examined the extent to which, and how, the quality of institutions encourages productive entrepreneurship, which in turn is able to drive economic growth. Furthermore, Bosma's results (2018) show that a quality economic growth model can be significantly improved in that direction by taking into account institutional quality and joint entrepreneurial activity. The growth of MSME entrepreneurship is also increasingly regarded as the main engine of long-term local economic growth, more so than any large foreign company previously present (Bell, 2013). Acs' research (2018) found support for the role of the entrepreneurial ecosystem in economic growth, showing that NSE (National Systems of Entrepreneurship) is positively and significantly related to economic growth.
Bjornskov's research (2016), in turn, found substantial evidence supporting the claim that entrepreneurial activities have positive long-term economic consequences in terms of wealth, productivity and economic growth. Meanwhile, Chitsaz's research (2019) used two types of capital, human and social, to study entrepreneurship: communicative, structural and cognitive dimensions were used to evaluate social capital, while knowledge, skills and self-efficacy dimensions were used to investigate human capital. According to Chitsaz (2019), entrepreneurship development is a complex, long-term and comprehensive process with a major role in developing a country's economy, and the results show a significant influence of the human and social capital dimensions on entrepreneurial activities. Furthermore, Ehrlich (2017) modeled investments in Entrepreneurial Human Capital (EHC), allocated across commercial and innovative industry knowledge, and specifically found that human capital drives economic growth. Ehrlich's (2017) model shows that institutional factors that support free markets for goods and ideas, together with higher educational attainment of employers and workers, are able to increase endogenous economic growth by increasing the efficiency of investment in EHC rather than acting exclusively on their own. Furthermore, Vide (2016) explored state competitiveness and entrepreneurship as drivers of economic growth. Meanwhile, Boudreaux's research (2019) found three things: (1) entrepreneurship encourages economic growth, but not in developing countries; (2) a country's institutional environment, as measured by Entrepreneurial Framework Conditions (EFCs), contributes to economic growth in developed countries, but not in developing countries; (3) opportunity-driven entrepreneurship encourages economic growth in developed countries, while necessity-driven entrepreneurship impedes economic growth in developing countries. However, all the sources above explain only partially the roles of human capital, social capital, institutional economics and entrepreneurship in economic growth. In this article the empirical disposition and basic theory are combined into an original theoretical basis drawn from R.M. Solow (1956) and J.A. Schumpeter (Elliott, 2017), and examined with a path analysis model approach, so that both the theoretical basis and the method of this article are more comprehensive. In addition, a critical novelty of this article is the use of a more representative measurement dimension based on the Gini ratio index, a basic concept generally familiar to the reader. Meanwhile, empirical data sources were obtained with various disciplinary approaches: socio-economic, cultural and institutional.

Research method

This article is the result of an empirical study examined using a descriptive-analytic-quantitative research method, in the correlational form of recursive-model path analysis. Primary data serve as the main data source and secondary data as supplementary data. Quantitative data were obtained through a field survey of 125 respondents from entrepreneurial household samples, selected using a simple random sampling technique.
Quantitative and qualitative empirical data in this article were collected with various disciplinary approaches, namely economic sociology, economic geography, economic culture and institutional economics. For interpreting the data obtained, the basic concepts of economic freedom and humanist local wisdom, together with the socio-economic and cultural disciplines (economic sociology, informal economics, institutional economics, political economics, cultural economics and geographical gravity economics), are preferred. In theoretical and methodological terms, this is an integrative research method, combining related disciplines in socio-economic fields as well as integrating the original theoretical approaches to economic growth of R.M. Solow (1956) and the original theory of economic development of J.A. Schumpeter (Elliott, 2017). The dimensions of all variables in this research are measured with a modified form of the Gini ratio or Gini Index (GI). The argument is that the general basic formula for GI values is simple, useful and widely known:

GI_x = 1 - SUM_i [ f_i * (Y_i + Y_(i-1)) ]

where GI_x is the index value of the variable X_n used; f_i is the percentage (%) share of the i-th class of entrepreneurship households; and Y_i is the cumulative percentage (%) of income or expenses through the i-th class of entrepreneurship households. The main X_n variables used in this research article are thus measured through the Human Capital Index (HCI), Social Capital Index (SCI), Social Entrepreneur Index (SEI), Institutional Economic Index (IEI), Entrepreneurship Competitiveness Index (ECI) and Quality Economic Growth (QEG) dimensions. The final value of each variable lies between zero and one, in line with the standard values of the original Gini index. Once the variables used in the path analysis model are known, a structural equation model must first be arranged to obtain the path analysis coefficients. The purpose of this path analysis method is to trace the real role of the main explanatory variables, the exogenous human capital and social capital variables, on the endogenous variable of economic growth quality, both directly and indirectly through the economic institution and entrepreneurship competitiveness variables, together with their total influence. The structure of the recursive system in question comprises the relationships and path directions between the exogenous and endogenous variables, so that they are easier to understand. The theoretical concept built into the framework of this path analysis model in Figure-1 is an amalgamation of two original theories: the modern economic growth theory of R.M. Solow (1956) and The Theory of Economic Development of Joseph A. Schumpeter (Elliott, 2017). The key factors in Solow's original theory of economic growth (1956) are mainly the increasing capacity of human capital and internal institutional factors, whereas the key factors in Joseph A. Schumpeter's original theory of economic development (Elliott, 2017) are mainly external institutional and entrepreneurial factors.
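As an illustration of the index computation just described, the short Python sketch below implements the class-based formula. This is a minimal illustration, not the authors' actual code, and the five class shares are hypothetical numbers invented for the example, not survey data from this study.

```python
def gini_index(pop_shares, income_shares):
    """Class-based Gini index, GI = 1 - sum_i f_i * (Y_i + Y_{i-1}):
    f_i is the fraction of households in class i and Y_i is the
    cumulative fraction of the measured indicator through class i."""
    gi, y_prev = 0.0, 0.0
    for f_i, s_i in zip(pop_shares, income_shares):
        y_i = y_prev + s_i           # cumulative share Y_i
        gi += f_i * (y_i + y_prev)   # trapezoid under the Lorenz curve
        y_prev = y_i
    return 1.0 - gi

# Hypothetical example: five household classes, poorest to richest.
f = [0.20, 0.20, 0.20, 0.20, 0.20]   # class frequencies (sum to 1)
s = [0.05, 0.10, 0.15, 0.25, 0.45]   # income shares per class (sum to 1)
print(round(gini_index(f, s), 2))    # 0.38, i.e. between zero and one
```

As required for the index dimensions above, the resulting value always lies between zero and one.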
Furthermore, the parameter values of the path analysis model can be formed and generated from the correlation values and the standardized regression coefficients, so that the path coefficients are standardized quantities. Based on the path diagram in Figure-1 above, the direction and magnitude of each path analysis coefficient (direct influence, indirect influence and total influence) can then be clearly described. The exogenous variable with the highest total influence on the endogenous variable is considered the most important, dominant, core and strong factor contributing to quality economic growth.

Results

The complete results of the four structural equation models of the path analysis regression above (Models 1-4) can be seen in Table-1. Based on Table-1, the standardized regression coefficients are interpreted and examined further in this article. The standardized regression coefficient values in Table-1, together with the partial correlation coefficients in Table-3, are used to form the path analysis coefficients in Figure-2 and Table-4. Meanwhile, the values in Table-2 reflect the overall strength of the model. Based on Table-2, the strength of the model proposed in the path analysis is good and strong: the R-multiple value is above 80%, the average R-square value is greater than 70%, and the adjusted R-square is close to the R-square, so the model is declared good, strong and credible. With the model thus validated, the results in Table-1 and Table-2 can be employed to build the path analysis model referred to in Figure-2 and Table-4. In addition, the regression results in Table-1 and Table-2 are consistent, and the results in Table-2 and Table-3 are consistent with the correlation values, so the results of this research can be considered consistent, robust and credible. Based on Table-1, in the structural equations of Model-1, Model-2 and Model-4, all exogenous variables used have a positive influence on the endogenous variables and are significant at confidence levels above 95% (the 5% significance level); that is, the model is theoretically good and acceptable. In Model-3, the exogenous human capital and entrepreneurship variables still have a positive and significant influence, at the 99% confidence level, on the endogenous variable of quality economic growth. Meanwhile, the exogenous social capital variable in Model-3 has a positive but not significant influence on quality economic growth. However, looking back at Model-4, when the social capital variable is moderated by the economic institutional variable, its role remains positive and becomes significant again for quality economic growth.
This shows that the quality of community economic institutions is quite successful, and needed, in regulating the social order and entrepreneurial behavior of the local community so as to encourage high-quality and sustainable economic growth in the region. Meanwhile, the constants in Model-3 and Model-4 are not significant because these are conditional models (see Table-2). Judging from the correlation values given by R-multiple in Table-2 and the partial correlations in Table-3, the results appear consistent: there are positive and strong correlations between the exogenous and endogenous variables used in this analysis, both determinant and partial. In Table-3, the strongest positive partial correlation is between entrepreneurship and institutions, at 84.10%. The second largest is the correlation between social capital and the social entrepreneur variable, at 83.0%. However, the association of the social entrepreneur factor with the other factors is weak; this was detected during model experimentation, where its influence on economic growth was found not significant, so the social entrepreneur variable was not carried into the next stage of model selection. Meanwhile, the strong partial correlation between institutional factors and entrepreneurship suggests that the two develop interdependently: institutions, both internal and external, work to encourage stronger entrepreneurship, and conversely, productive entrepreneurship further strengthens the quality of existing institutions. The results in Table-1 and Table-3 are then used to construct Figure-2 and Table-4. In Figure-2, the largest arrows indicate the strength of the role of each exogenous variable with respect to the endogenous variable. The path analysis coefficients in Table-4 show the magnitude of the direct influence, the indirect influence and the total influence of each exogenous variable on the endogenous variable of high-quality and sustainable regional economic growth. The path analysis values in Table-4 are built from the results in Table-1 and Table-3. From Table-4, the total influence of the exogenous variables on the endogenous economic growth variable in the path model of Figure-2 is 87.5%, comprising a direct influence of 51.2% and an indirect influence of 36.3%. Table-4 also shows that the greatest total influence comes from the human capital variable, at 32.9%. This supports the statement above that human capital is the first and foremost key to promoting quality and sustainable regional economic growth in Indonesia: each first-largest arrow in the path diagram begins with the contribution of the human capital factor, in both direct and total influence. The total influence of human capital on economic growth of 32.9% consists of a direct influence of 21.7% and an indirect influence of 11.2%; notably, the direct influence is greater than the indirect (a simple additive check of this decomposition is sketched below).
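As a quick arithmetic check on this decomposition, using only the percentages quoted above from Table-4, the total influence of each component should equal its direct plus indirect influence:

```python
# Direct and indirect influences (in %) as quoted from Table-4.
effects = {
    "all exogenous variables combined": (51.2, 36.3),  # reported total: 87.5
    "human capital":                    (21.7, 11.2),  # reported total: 32.9
}
for name, (direct, indirect) in effects.items():
    print(f"{name}: {direct} + {indirect} = {direct + indirect:.1f} (total)")
```

Both sums reproduce the reported totals (87.5% and 32.9%), confirming the additivity of the path decomposition.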
Likewise, the entrepreneurial factor provides the second-largest total contribution to economic growth, 26.2%, consisting of a direct influence of 18.5% and an indirect influence of 7.7%; here too the direct influence on economic growth is the greater. This empirical fact shows that human capital and entrepreneurship are the main determinants in driving quality, high and sustainable economic growth, although these two main factors still face competition from the institutional and social capital factors. The next-largest total influence on economic growth is the economic institutional factor, third at 20.3%, followed by social capital, fourth and last at 8.1%. For the institutional and social capital factors, the indirect contribution to economic growth is actually greater than the direct influence. This strengthens the argument for the two main factors, human capital and entrepreneurship, and makes the phenomenon all the more interesting to discuss in the next sub-topic.

Discussion

Based on the empirical results in Figure-2 and Table-4, we have found, directly and indirectly, that in microeconomic terms the capacity and quality of human capital translate into morale, motivation, and better creativity and innovation, thereby increasing the productivity potential of workers as well as their own and their family's income. In macroeconomic terms, human capital with this productivity potential across all the entrepreneurial businesses surveyed has had a positive impact in increasing quality economic growth. That is, the basic economic growth theory of R.M. Solow, which states that human capital has a positive impact on economic growth, is empirically still proven true. Likewise, Schumpeter's theory of economic development, which states that entrepreneurship must have a positive influence on society, for instance through economic growth, also holds empirically. Thus, the results of this empirical research still support the original growth theory of R.M. Solow and the original Theory of Economic Development of J.A. Schumpeter. Likewise, through Model-4 it has been shown that the role of existing institutions, from both the internal side (R.M. Solow) and the external side (J.A. Schumpeter), significantly encourages high-quality growth. A very important and new finding of this article is that both human capital and entrepreneurship have been empirically proven to have a positive, strong and significant correlation in driving quality economic growth in Indonesia. Thus, the results of this research do not fully support the previous research of Afzal (2010) and Pelinescu (2015). It is possible that that research was conducted at a micro scale with very narrow data and measurement dimensions: if human capital is measured in only one rural area, based only on the single dimension of education level, and with a small sample, it does not always have a positive influence and can even have a negative one (Prasetyo, 1998). However, the present empirical research supports the research of (Cohen, 2007; Estrin, 2016; Ehrlich, 2017; Ali, 2018; Chitsaz, 2019), confirming that human capital and entrepreneurship have a positive and strong influence on a country's economic growth.
Meanwhile, given that Indonesia is still a middle- or low-income country, this research also does not fully support the previous research of Doran (2018) and Boudreaux (2019), which states that entrepreneurship has a positive and significant influence on economic growth only in high-income countries and not in middle- and low-income countries. We recognize, however, that we cannot provide further and stronger documentation, because this is empirical research limited to the case of one country, Indonesia, without comparison across high-, middle- and low-income countries. In Figure-2 we have also examined the role of human capital in influencing economic institutions, and how these institutions in turn are able to encourage the competitiveness of productive entrepreneurship and economic growth. In addition, we examined how institutional quality encourages productive entrepreneurial competitiveness, which in turn encourages economic growth. Our results yield better estimates of the role of human capital in encouraging the formation of new institutions; these institutions then encourage the competitiveness of productive entrepreneurship and better economic growth. That is, our results support the previous research of Bosma (2017), even though the facts differ slightly. The research results in this article are explicitly similar in having considered entrepreneurial channels, through institutional quality, in influencing economic growth. The difference is that in this research the role of human capital, whether direct, indirect or total, remains large, even as the institutional channel strengthens the contribution of human capital to economic growth. In this article's entrepreneurial-channel model, the only factor whose role in economic growth was shown to be reduced is social capital; moreover (in Model-4), the quality of economic institutions increasingly strengthens the roles of human capital and social capital in significantly driving quality and sustainable economic growth. In Bosma's results (2017), by contrast, the role of human capital was shown to be reduced. Thus, it can be reiterated that what matters most for encouraging economic growth that is high-quality, remains high and is sustainable in Indonesia is, first, the quality and capacity of human capital, with human capital quality and productive entrepreneurial competitiveness together as the main factors. The next very important factors in encouraging and maintaining quality and sustainable economic growth are institutional quality and social capital. In other words, there are four main capacity pillars for promoting economic growth that remains high, high-quality and sustainable in Indonesia: human resources, entrepreneurship, institutions and social capital. High and sustained quality growth is driven more by the human capital and entrepreneurship factors, while quality and sustainable growth is maintained more by institutional quality and social capital.
Thus, all elements of the Indonesian nation must be self-aware and share a strong commitment to continually build human resource capacity, so as to produce quality output and steadily improving work productivity with strong competitiveness. To achieve this, new and more credible economic institutions (NIE) are needed, able to encourage greater economic freedom. Based on the data in Table-5, the real institutional conditions in Indonesia in terms of government integrity, investment freedom and labor freedom are still repressed. That is, the corruption cases that have occurred in Indonesia have had a negative impact, especially on government integrity, investment freedom and labor freedom, which are increasingly depressed, and on institutional quality. This must be overcome immediately so that government spending can be better utilized, precise and well targeted for the prosperity of all Indonesian people, not for corruption. The results of this research indicate that government spending to date, whether direct or through bank credit, has not been able to encourage entrepreneurship, MSME growth or economic growth. Government spending in Indonesia, besides being heavily corrupted, has so far benefited only the banking service sector and has had no positive effect on economic growth, investment, employment opportunities, industry or MSMEs. Indonesia's score in the world index of economic freedom for 2019 is 65.8, ranking 11th regionally and 56th of the 180 countries measured worldwide. Although this is a slight increase of 1.6 points over 2018, it is still considered only moderately free, and far from free. The highest component value comes from government spending (91.4), while the lowest score, 39.5, occurs in the government integrity sector and is classified as repressed; indeed, three sectors in Indonesia are still classified as repressed, namely government integrity (39.5), investment freedom (45.0) and labor freedom (49.3). The economic freedom expected to encourage the quality of human capital, economic institutions and entrepreneurship in Indonesia has not yet materialized. In addition, even though fiscal health can be said to be good, the heavy tax burden, very high bank loan interest rates and rigid banking services often become a burden and make it difficult for new investment and entrepreneurship to grow in Indonesia. Thus, credible new institutional quality, with high and strong integrity in every line of society and every sector, is highly necessary. Based on the values in Table-5, it is urgently necessary to increase the integrity and capacity of human capital investment, especially regarding the human character of all elements of the nation in Indonesia. With increasing integrity, capacity and quality of character in Indonesia's human resources, economic freedom in terms of labor, investment and trust in government will also increase, and vice versa.
Thus, investment in human capital, or building the capacity and quality of human resources in Indonesia, is a task that must be done and is neither negotiable nor replaceable. The development policy of increasing human capital investment in Indonesia is very urgent, must be carried out immediately and must be continuously improved. If human capital capacity building improves, economic freedom will also improve, and vice versa; and the greater social benefits of economic freedom will better help reduce the problems of unemployment, poverty and inequality. Countries with higher levels of economic freedom, with index values above 80.6, such as Hong Kong, Singapore, New Zealand, Switzerland and Australia, have been able to enjoy a higher overall level of human capital development. Therefore, Indonesian government policy must be directed immediately at further improving the literacy, education and economic literacy of its citizens, toward a higher standard of living for all citizens and not just its officials: the higher the capacity and quality of human capital, the higher the economic freedom and the sooner the prosperity of the State, as stated in the fifth principle of Pancasila, is achieved. If a joint policy and commitment to building the capacity and quality of Indonesia's human resources cannot be carried out immediately, then the goal of becoming a developed country by 2045, supported by Indonesia's golden generation, will be threatened with failure and remain a mere dream.

Conclusion

In this article, we have discussed four very important pillars for regional economic development in Indonesia through high, quality and sustainable economic growth. The four pillars are human capital, social capital, institutions and entrepreneurship. We conclude, first, that there is a very strong correlation and a positive and significant influence between human capital and quality economic growth, so human capital is the first and main key in encouraging quality, high and sustainable economic growth. Second, entrepreneurship, as measured by the dimension of productive entrepreneurial competitiveness, is a key factor in driving high and sustainable economic growth. Third, the important role of social capital appears to be directed more at keeping economic growth high and sustainable than at driving its quality. Fourth, institutions, as measured by the dimension of the quality of new economic institutions that function as rules of the game, or act as facilitators and dynamizers, have been able to bridge and further strengthen the complex interdependence among human capital, social capital and entrepreneurship in encouraging quality, high and sustainable economic growth. Fifth, greater economic freedom is much needed to improve the quality of existing institutions, human capital capacity and productive entrepreneurial competitiveness, and vice versa; the argument is that human development and democratic progress are the main keys to economic freedom. This article carries the policy implication that, in order to encourage economic growth that remains high, high-quality and fundamentally sustainable, growth must be driven through a policy of building the capacity and quality of the four main pillars of development: human capital, entrepreneurship, institutions and social capital.
Furthermore, if the policy is successful, the resulting economic growth will be increasingly able to reduce the problems of unemployment, poverty and inequality.
Activation of endogenous p53 by combined p19Arf gene transfer and nutlin-3 drug treatment modalities in the murine cell lines B16 and C6

Background: Reactivation of p53 by either gene transfer or pharmacologic approaches may compensate for loss of p19Arf or excess mdm2 expression, common events in melanoma and glioma. In our previous work, we constructed the pCLPG retroviral vector where transgene expression is controlled by p53 through a p53-responsive promoter. The use of this vector to introduce p19Arf into tumor cells that harbor p53wt should yield viral expression of p19Arf which, in turn, would activate the endogenous p53 and result in enhanced vector expression and tumor suppression. Since nutlin-3 can activate p53 by blocking its interaction with mdm2, we explored the possibility that the combination of p19Arf gene transfer and nutlin-3 drug treatment may provide an additive benefit in stimulating p53 function.

Methods: B16 (mouse melanoma) and C6 (rat glioma) cell lines, which harbor p53wt, were transduced with pCLPGp19 and these were additionally treated with nutlin-3 or the DNA damaging agent, doxorubicin. Viral expression was confirmed by Western, Northern and immunofluorescence assays. p53 function was assessed by reporter gene activity provided by a p53-responsive construct. Alterations in proliferation and viability were measured by colony formation, growth curve, cell cycle and MTT assays. In an animal model, B16 cells were treated with the pCLPGp19 virus and/or drugs before subcutaneous injection in C57BL/6 mice, observation of tumor progression and histopathologic analyses.

Results: Here we show that the functional activation of endogenous p53wt in B16 was particularly challenging, but accomplished when combined gene transfer and drug treatments were applied, resulting in increased transactivation by p53, marked cell cycle alteration and reduced viability in culture. In an animal model, B16 cells treated with both p19Arf and nutlin-3 yielded increased necrosis and decreased BrdU marking. In comparison, C6 cells were quite susceptible to either treatment, yet p53 was further activated by the combination of p19Arf and nutlin-3.

Conclusions: To the best of our knowledge, this is the first study to apply both p19Arf and nutlin-3 for the stimulation of p53 activity. These results support the notion that a p53-responsive vector may prove to be an interesting gene transfer tool, especially when combined with p53-activating agents, for the treatment of tumors that retain wild-type p53.

Background

For those tumors that retain wild-type p53 (p53wt), maintenance of the tumor phenotype depends on the ability to hold p53wt in an inactive form even long after the initial transformation events [1-4]. These studies showed that reactivation of p53 impeded tumor growth, rekindling interest in this treatment approach. However, the induction of p53 function may meet with significant barriers, such as loss of p19Arf (p14ARF in humans) or overexpression of mdm2 (HDM2 in humans). For melanomas, p53wt is present in 90% of cases, overexpression of HDM2 is found in 56% of cases [5,6] and loss of the CDKN2A locus (where p14ARF resides) occurs in some 50% of primary melanomas [7]. In comparison, primary human gliomas retain p53wt in 70% of cases [8], the loss of p14ARF appears to be a reciprocal event [9] and 50% of cases over-express HDM2 [10].
These findings indicate that maintenance of inactivated p53wt in melanomas and gliomas is directly associated with p14ARF/HDM2 status and that this axis may serve as a therapeutic target [11]. Re-activation of p53 may be achieved by gene transfer or pharmacologic approaches. Gene transfer studies have shown that introduction of p14ARF can activate p53 [12-14]. Drug treatment with nutlin-3, a small molecule compound that specifically blocks the interaction of mdm2/HDM2 with p53, results in the protection of p53 from proteolytic degradation [15,16]. However, not all tumor cells with p53wt are sensitive to nutlin-3 treatment, an effect thought to be related to mdmx/HDMX activity [17,18]. To the best of our knowledge, there are no reports in the literature where p19Arf gene transfer was combined with nutlin-3 drug treatment. Since p19Arf has been shown to interact with and inhibit mdmx [19,20], this may provide an additional mechanism for establishing nutlin-3 sensitivity. We present here the use of a p53-responsive retroviral vector, pCLPG, for the transfer of the p19Arf cDNA. We have shown previously that pCLPG provided p53-specific expression that was, in some cells, stronger than the expression level of the parental, non-modified vector, pCL [21]. When the pCLPG vector was employed for transfer of the p53 cDNA, a positive feedback regulatory mechanism was established that both drove vector expression and also blocked tumor cell proliferation [22]. Here the pCLPG vector was used to introduce the p19Arf cDNA in cells harboring endogenous p53wt in order to explore the interplay between the vector, the transgene and cellular p53. The pCLPGp19 virus was used to treat cell lines that carry wild-type p53: B16 (mouse melanoma, p19Arf-null) and C6 (rat glioma, p19Arf-null). Since p19Arf can work in conjunction with the p53 pathway, we proposed that introduction of exogenous p19Arf may functionally activate endogenous p53, impacting both vector expression and tumor cell proliferation. These gene transfer experiments were performed with or without additional drug treatment (doxorubicin, nutlin-3) to determine if combined genetic and pharmacologic therapies could overcome the barriers to p53 function in these cells.

Construction of vectors

The construction of the pCLPG vector containing eGFP has been described previously [21,22]. In these studies, we showed that expression from the pCLPG-ΔU3 construct offered the strongest response to p53 and it was chosen for further study, though it is referred to here simply as pCLPG. The p19Arf cDNA (kindly provided by Charles Sherr, St. Jude's Children's Hospital, Memphis, TN) was first subcloned into pBluescript (Stratagene) and re-isolated as an 800-bp BamHI fragment. This fragment was then inserted in the BamHI site of the pCLPG vector.

Virus production

To produce virus-containing supernatant, the appropriate viral vectors were co-transfected in 293T cells as described [24], except using pCMV-gag-pol and pCMV-VSVg packaging vectors (kindly provided by Richard Mulligan, Harvard Medical School, Boston, MA, USA and Jane Burns, University of California, San Diego, USA, respectively). The virus-containing supernatant was collected 24 hours post-transfection, centrifuged for 5 minutes at 1000 × g, and the supernatant was aliquoted and stored at -70 °C. Titration was performed either by endpoint dilution determined by G418 resistance or, when possible, by counting eGFP-positive cells by flow cytometry. These protocols have been described previously [25].
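The endpoint-dilution arithmetic behind such titrations can be sketched as follows. This is a generic illustration with invented plate counts, not data from this study:

```python
def titer_cfu_per_ml(colonies, dilution_factor, inoculum_ml):
    """Endpoint-dilution titer: colony count scaled by the dilution factor
    and by the volume of diluted supernatant applied to the plate."""
    return colonies * dilution_factor / inoculum_ml

# Hypothetical count: 150 G418-resistant colonies arising from 0.1 ml
# of a 1:1000 dilution of viral supernatant.
print(f"{titer_cfu_per_ml(150, 1000, 0.1):.1e} cfu/ml")  # 1.5e+06 cfu/ml
```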
Typical titers were in the range of 1 to 5 × 10^6 colony forming units (cfu)/ml.

Growth curve

The indicated cell type was plated at 7.5 × 10^5 cells/6-cm dish, and transduction was initiated the following day with equal quantities and concentrations of the indicated viruses (1 × 10^6 particles in 1.5 ml), or mock transduced, in the presence of polybrene, 8 μg/ml. The transduction was allowed to proceed for 8 hours before a second round of transduction was initiated and allowed to proceed overnight. A third round of transduction was initiated the following morning and allowed to proceed for 6 hours. At the end of the transduction, the cells were trypsinized, counted and replated: 1 × 10^4 cells from each transduction were plated in each well of a 12-well dish with complete medium (DMEM for C6 or RPMI for B16). Two wells per day, where day 1 represents 24 hours post replating, were trypsinized and counted manually.

Colony formation assay

The indicated cell type was plated at 2 × 10^5 cells/well in 6-well dishes and transduced the following day with equal quantities and concentrations of the indicated viruses (4 × 10^5 particles in 600 μl), or mock transduced, in the presence of polybrene, 8 μg/ml. After 4 hours of incubation at 37 °C, the virus supernatant was replaced with fresh medium. The next day, cells were harvested, counted, and re-plated at 7.5 × 10^3, 1.0 × 10^4, 2.5 × 10^4 and 5.0 × 10^4 cells/dish in 6-cm dishes. The following day, the medium was replaced with fresh, complete medium (DMEM for C6 or RPMI for B16) containing 800-1000 μg/ml of G418, and cells were incubated until mock transduced cells had died, about 7 days. Then, medium was replaced and colonies were allowed to form until clearly visible, usually an additional 7-14 days. Cells were fixed with 0.5% paraformaldehyde and stained with crystal violet. For quantification, the crystal violet was recovered in 10% acetic acid and the absorbance read at 590 nm using a spectrophotometer. The positive control, the colonies resulting from the empty pCLPG vector, was considered as 100%.

Immunofluorescence detection of p53 and p19Arf

Cells were transduced as per the growth curve assay and then replated on 13-mm round glass coverslips, 5 × 10^4 cells/well, in 24-well dishes. Following the final round of transduction, either fresh medium, medium containing 100 ng/ml doxorubicin or medium containing 10 μM nutlin-3 was applied and the cells were incubated for an additional 24 hours. The cells were fixed with cold methanol, blocked with bovine serum albumin, then probed with a polyclonal antibody for p19Arf (AB-1, CalBiochem) followed by an Alexa-488 labeled anti-rabbit secondary antibody. Staining for p53 was performed with a pan-p53 monoclonal antibody (clone G59-12, BD Biosciences) followed by a Cy3 labeled anti-mouse secondary antibody. Nuclear staining was performed with Hoechst 33258, 20 μg/ml. Cells were visualized by confocal microscopy at either 20× magnification (B16) or 20× plus 4× zoom (C6).

Northern blot

For the Northern blots, two 6-cm dishes of each cell line were transduced as per the growth curve assay. The transduced cells were then treated overnight at 37 °C with complete medium or medium plus 100 ng/ml of doxorubicin. Total RNA was purified using Trizol reagent (Invitrogen Life Technologies, USA) according to the manufacturer's instructions and samples were analyzed as described previously [21].

Western blot detection of p19Arf

Cells were transduced and treated with drugs as described for the p53 activation assays.
Protein lysates were made 24 hours after initiation of drug treatment and western blot analysis was performed. Briefly, RIPA buffer (1% NP-40, 0.1% SDS, 0.5% sodium deoxycholate in 1 × PBS) supplemented with complete mini protease inhibitor cocktail (Roche) was used to lyse cells, the protein concentration was determined and then 20 μg was subjected to SDS-PAGE before transfer to Hybond ECL membrane (GE Lifesciences) and probing with an anti-p19Arf antibody (Ab-1, CalBiochem), anti-p21 (sc-756, Santa Cruz Biotechnology) or β-Actin (A5441, Sigma). Secondary antibodies labeled with horseradish peroxidase were applied and detected with ECL-Plus reagent according to the manufacturer's protocol (GE Lifesciences).

p53 activation measured in a reporter assay

For the reporter assays, cells were transduced with pCLPG, pCLeGFP or pCLPGeGFP viruses and selected for G418 resistance, as reported previously [26]. Cells were replated, 1 × 10^6 cells/6-cm dish, and transduced (as described for the growth curve assays) with the indicated virus. At the end of the transduction, cells were replated at approximately 50% density in 6-well dishes. The medium in duplicate wells was changed the next day (DMEM for C6 or RPMI for B16) or replaced with medium containing 100 ng/ml doxorubicin or 10 μM nutlin-3 (N6287, Sigma, USA). The cells were incubated for 24 hours before harvesting for flow cytometric assessment of eGFP expression and, in parallel, analysis of cell cycle as revealed by propidium iodide staining as described previously [26]. The median intensity of eGFP expression was determined by the FACS software and then normalized considering the positive control (pCLeGFP) as one.

Cell viability assay

Cells were transduced as described for the growth curve. For the MTT assay, 96-well dishes were seeded with 1 × 10^4 cells from each transduction using complete medium (DMEM for C6 or RPMI for B16) in quadruplicate wells. The next day, fresh medium or medium containing the indicated quantities of drug was applied to the dishes. Cells were incubated for an additional 48 hours before determination of cell viability. Plates were incubated with 25 μl of MTT solution (5 mg/ml in 1 × PBS) at 37 °C for 4 hours. The medium was then removed and the precipitate solubilized by the addition of 100 μl lysis buffer (20% SDS in 50% DMF/2% acetic acid, pH adjusted to 4.7) before analysis using a plate reader at 590 nm.

Animal model

Procedures and conditions for these experiments were approved by the Scientific and Ethics Committee of the Instituto do Coração (2833/06/128) and Hospital das Clínicas (735/06), University of São Paulo School of Medicine. B16 cells were transduced ex vivo, as described for the growth curve assay, with pCLPG or pCLPGp19 virus, in 10-cm dishes. Upon completion of the final round of transduction, the cells were replated in triplicate 10-cm dishes and allowed to reach 80% density before the medium was replaced with fresh RPMI containing no drug, or RPMI containing 25 ng/ml doxorubicin or 10 μM nutlin-3. The next day, cells were harvested, counted and injected subcutaneously in the flank, 1 × 10^6 cells per C57BL/6 mouse (n = 4), in 100 μl PBS. Tumors were allowed to develop during 15 days; then all animals were injected i.p. with 100 mg/kg bromodeoxyuridine (BrdU), maintained for an additional 4 hours, then sacrificed, and tumors were collected and analyzed. One half of each tumor was submitted to frozen sectioning while the remaining half was fixed in 4% paraformaldehyde followed by inclusion in paraffin.
Histologic sections, 3-5 μm, were prepared and stained with hematoxylin and eosin (HE). Necrotic area was identified visually in 3 sections from each tumor and quantified using ImageJ software (15-20 fields for each section), and the ratio of necrotic/non-necrotic tissue was determined (a minimal aggregation sketch is given at the end of this section). BrdU staining was performed using paraffin-embedded sections, following the protocol supplied with the BrdU peroxidase staining kit (Zymed Laboratories). TUNEL staining was performed using frozen sections, following the protocol supplied with the In Situ Cell Death Detection Kit, Fluorescein (Roche Applied Biosciences).

Reliable expression of p19Arf when delivered by the pCLPG retrovirus to cell lines harboring wild-type p53

We sought to target endogenous p53wt to both drive vector expression and inhibit tumor proliferation by including the p19Arf cDNA in the pCLPG retrovirus, a vector that contains a p53-responsive promoter used to control expression of the transgene (Figure 1). To confirm transgene expression, we first performed immunofluorescence staining of B16 (mouse melanoma, wild-type p53, p19Arf null) and C6 (rat glioma, wild-type p53, p19Arf null) cells transduced with pCLPGp19 or, as a control, with the empty pCLPG retrovirus. For both cell lines, we readily detected exogenous p19Arf localized in the nucleolus only in the presence of the pCLPGp19 virus (Figure 2). The presence of endogenous p53 was also examined by immunofluorescence staining. The treatment of C6 cells with pCLPGp19 facilitated the detection of endogenous p53, which was localized in close proximity to p19Arf. Similar treatment of B16 did not reveal endogenous p53 (Figure 2). Northern blots showed that viral expression from the pCLPG vectors in B16 cells was generally weaker than that seen for C6 cells, but the presence of p19Arf aided viral expression and viral transcripts were readily detected (Figure 3). Treatment of the cells with doxorubicin (an inhibitor of topoisomerase II and a DNA damaging agent well known for its ability to activate p53) did result in increased viral expression in both cell lines. At least in this assay, the p19Arf transgene appears to have enhanced vector expression. Prior to Western blot analysis, B16 and C6 cells were subjected to gene transfer with or without exposure to doxorubicin or nutlin-3 (an inhibitor of mdm2). As seen in Figure 4, transduction of either cell line with pCLPGp19 yielded readily detectable levels of p19Arf, but the protein level was not increased upon drug treatment. Induction of p21 (Cdkn1a) expression was accomplished by either p19Arf gene transfer or drug treatment. Though p21 is a known p53 target, we cannot rule out its activation by p53-independent mechanisms. In all, these assays indicate that the expression of p19Arf from the pCLPG vector was reliable in p53wt-positive cells.

Figure 1. Schematic representation of the p53-responsive pCLPG retroviral vectors. These vectors contain a p53-responsive element, called PG, inserted in the retroviral long terminal repeat. This modification results in p53-dependent transgene expression [21,22]. R U5, native regulatory elements of the Moloney Murine Leukemia Virus long terminal repeat; SV40, simian virus 40 promoter; NeoR, neomycin phosphotransferase cDNA, which confers resistance to the antibiotic G418; eGFP, enhanced green fluorescent protein; p19, mouse p19Arf cDNA.
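As promised in the histology methods above, a minimal sketch of how the per-tumor necrotic ratio could be aggregated follows, assuming per-field necrotic and non-necrotic areas have already been exported from ImageJ. The measurements below are placeholders, not values from this study:

```python
from statistics import mean, stdev

# Hypothetical per-field ImageJ measurements for one tumor, pooled over
# 3 sections: (necrotic_area, non_necrotic_area) in arbitrary units.
fields = [(120.0, 480.0), (200.0, 400.0), (90.0, 510.0), (150.0, 450.0)]

ratios = [nec / non for nec, non in fields]  # necrotic/non-necrotic per field
print(f"mean ratio = {mean(ratios):.2f} +/- {stdev(ratios):.2f}")
```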
Proliferation of B16 cells is not altered by treatment with pCLPGp19, yet C6 cells are inhibited

In a growth curve assay, the p53-responsive pCLPG vectors were tested for their ability to inhibit the proliferation of B16 or C6 cells. Treatment of B16 cells with the pCLPGp19 vector did not confer a reduction in proliferation, but was successful for C6 (Figure 5A). Alterations in growth were not detectable when either B16 or C6 cells were treated with pCLPGp53 or pCLp53, a retroviral vector with constitutive expression driven by the native LTR (data not shown). Similarly, a colony formation assay showed no effect when B16 cells were transduced with pCLPGp19 as compared to the control, yet colony formation in C6 cells was efficiently reduced in the presence of pCLPGp19 (Figure 5B). B16 cells were completely resistant to treatment with pCLPGp53 or pCLp53 in this assay (data not shown). In comparison, both pCLPGp53 and pCLp53 inhibited colony formation in C6 cells by about 50% (data not shown). Since expression from the pCLPGp19 vector was reliable in both cell lines, yet each responded differently, we explored whether the barrier to proper p19Arf function involved the activation of p53.

Figure 2. Exogenous p19Arf detected by immunofluorescence. Cells were transduced and, 24 hours later, fixed with methanol and exposed to a polyclonal antibody for p19Arf and a monoclonal antibody for p53, with detection by Alexa-488 anti-rabbit and Cy3 anti-mouse secondary antibodies. Cells were then stained with Hoechst 33258. For B16, we show only the 20× magnification confocal photomicrographs; no staining of p53 was visible at higher magnification. For C6, the appearance of p19Arf at 20× magnification was comparable to that shown for B16; however, endogenous p53 staining was observed in C6. We show C6 at 20× magnification plus 4× zoom in order to emphasize p53 staining. All photos for either B16 or C6 were captured using identical settings. Merge refers to the combined images of the green, red and blue channels. Panels: B16 and C6; p19Arf, p53 and Merge; pCLPG and pCLPGp19.

Figure 3. Northern blot analysis reveals that the presence of p19Arf aids pCLPG vector expression. B16 (A) or C6 cells (B) were mock transduced or transduced with equal quantities of pCLPG, pCLPGeGFP or pCLPGp19 in duplicate dishes. One dish from each duplicate was treated with 100 ng/ml of doxorubicin (Dox) and the other was maintained in fresh medium for 24 hours before harvesting total RNA used to perform the Northern blot analyses. Membranes were probed sequentially using the radiolabeled cDNAs of neomycin phosphotransferase (Neo), p19Arf (p19) or, as a control for loading, β-Actin.

Combined pCLPGp19 and drug treatments induce p53 function in an additive manner

We set up a quantitative assay to measure the impact of combined gene transfer and drug treatment on p53 activity. For this, activity of p53 was measured in cells where pCLPGeGFP had been introduced to serve as a reporter and selected for G418 resistance. These cells were then transduced with a second pCLPG vector, subjected to drug treatments, and eGFP reporter activity was quantified by flow cytometry. The introduction of pCLPGp19 resulted in weak induction of p53 activity in B16 cells, an approximate 1.75-fold increase as compared to the pCLPGeGFP reporter activity in the absence of either drug or genetic alteration (Figure 6A). Pharmacologic induction of p53 activity could be achieved in B16 cells with doxorubicin, with or without pCLPGp19.
In contrast, nutlin-3 treatment alone was not sufficient to stimulate significant p53 activity. Interestingly, p53-dependent reporter activity could be induced 2.5-fold by the combined treatment with pCLPGp19 and nutlin-3. This result shows that combining genetic and pharmacologic treatments had an additive effect in activating p53 in B16 cells. In comparison, p53 activity in C6 cells was more efficiently induced by individual pCLPGp19 gene transfer or nutlin-3 treatments and yielded an additive effect when combined (Figure 6B). Cell cycle analysis was performed in parallel with the experiments described above (Figure 6C and 6D). We observed that the combined treatment with pCLPGp19 plus drugs resulted in profoundly altered cell cycle patterns. For example, the combined treatment of B16 cells with pCLPGp19 and nutlin-3 resulted in a marked G1 arrest, whereas individual treatments produced little change in these cells. Treatment with doxorubicin produced a G2 arrest, yet the introduction of p19Arf in combination with doxorubicin yielded a G1 arrest. Cell cycle alterations in C6 cells followed a similar pattern, but with some subtle differences. In C6, treatment with nutlin-3 caused a pronounced G1 arrest that was slightly enhanced in the presence of pCLPGp19. We interpret these results as an indication that B16 cells are more resistant to the activation of p53 than C6 when using either p19Arf gene transfer or nutlin-3 treatment. However, the combined treatments resulted in more highly activated p53 and marked cell cycle alterations.

pCLPGp19 gene transfer pre-sensitizes cells to drug treatment

To determine whether the functional activation of p53 by the combination of pCLPGp19 gene transfer plus drug treatment is associated with a decrease in cell viability, we used a standard MTT assay. Cells were plated and then transduced with the pCLPG viruses. The next day, cells were collected, counted and replated in 96-well dishes. For these assays, drug treatments were allowed to proceed for 48 hours before MTT staining of viable cells. B16 cells previously transduced with pCLPGp19 were rendered more sensitive to treatment with either doxorubicin or nutlin-3 (Figure 7A). Consistent with the previous assays, the combined treatment with p19Arf and nutlin-3 reduced the viability of B16 cells. In C6 cells we observed that treatment with pCLPGp19 alone reduced viability by about 50%, consistent with the growth curve assays (Figure 7B). Though C6 cells were quite sensitive to either nutlin-3 or doxorubicin, a subtle but consistent additional reduction in viability was seen by combining p19Arf gene transfer with pharmacologic therapies. So far, our results suggest that re-activating p53 by a combination of p19Arf gene transfer along with pharmacologic agents may present an interesting option for tumor cell inhibition, especially in cells that retain wild-type, but functionally inactive, p53.

Figure 4. Western blot analysis of p19Arf and p21 (Cdkn1a) expression upon gene transfer and drug treatments. B16 (left) and C6 (right) cells were mock transduced or transduced with pCLPG or pCLPGp19 followed by treatment with 100 ng/ml doxorubicin or 10 μM nutlin-3 (lanes labeled D or N, respectively) or no drug (-). Exogenous p19Arf, endogenous p21 and β-Actin were revealed using specific primary antibodies followed by secondary HRP-conjugated antibody/ECL detection.
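For clarity, the fold-induction values quoted for these reporter assays amount to dividing each condition's eGFP intensity by that of the pCLeGFP control, which is set to one. A minimal sketch with invented FACS medians, chosen only so that the treated/untreated ratio matches the 2.5-fold figure reported above:

```python
# Hypothetical median eGFP intensities from FACS (arbitrary units).
medians = {
    "pCLeGFP (control)":               540.0,
    "pCLPGeGFP, untreated":            310.0,
    "pCLPGeGFP + pCLPGp19 + nutlin-3": 775.0,  # 775/310 = 2.5x vs untreated
}
control = medians["pCLeGFP (control)"]
for condition, m in medians.items():
    print(f"{condition}: {m / control:.2f}")   # normalized, pCLeGFP = 1
```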
Combined p19Arf and nutlin-3 treatment induces death of B16 cells in vivo

In an attempt to assess the impact of gene transfer and drug treatment in an animal model, B16 cells were transduced with pCLPG or pCLPGp19 and treated with doxorubicin or nutlin-3 ex vivo. Cells were then implanted subcutaneously in C57BL/6 mice and tumors were recovered on day 15. Reduction in tumor size was conferred by drug treatment, yet gene transfer did not contribute to this result (Figure 8). As shown in Figure 9, tissue sections stained with HE revealed a statistically significant increase in areas of necrosis when the cells had been treated with the combination of pCLPGp19 gene transfer and nutlin-3, but not by either treatment alone (Student's t-test, p = 0.0012). Tumor cell proliferation was revealed by incorporation of BrdU prior to sacrifice followed by its immunohistochemical detection in the histologic sections. As shown in Figure 10, BrdU staining was greatly reduced when the cells had been treated by combined pCLPGp19 gene transfer plus doxorubicin or nutlin-3 treatments, yet application of these treatments individually did not appreciably alter BrdU staining. TUNEL staining was performed, but no difference was observed among the experimental conditions (data not shown).

Figure 6. Induction of endogenous p53 activity by combined gene and drug therapy. Cells were transduced with pCLeGFP as a control or the p53-responsive retroviral reporter pCLPGeGFP and selected for G418 resistance. For the assays, the B16 (A) or C6 (B) reporter cells were either mock transduced or transduced with pCLPGp19, replated in 6-well dishes and then treated with drug as indicated. The following day, cells were harvested for flow cytometric analysis of eGFP expression. The mean eGFP intensity was determined by the FACS software and the value for pCLeGFP used for normalization. Shown is the average and standard deviation of duplicate samples performed in three independent experiments. Cell cycle analysis (C and D) was performed in parallel with the assays shown in A and B.

Discussion

Both here and in previous studies [22,27], we saw that B16 and C6 are particularly resistant to p53 treatment. Since these cells harbor endogenous p53wt, they must possess a mechanism for maintaining p53 in an inactive state, such as the loss of p19Arf or over-expression of mdm2. Treatment approaches in this case include the reactivation of p53 by gene transfer or drug treatment. For example, introduction of p19Arf can activate p53 and complement its function.
B16 cells were relatively resistant to either genetic or nutlin-3 treatments when applied individually, but their combination led to the activation of endogenous p53 and a reduction in cell viability both in culture and in an animal model. In contrast, C6 cells were permissive to the activities of either p19Arf or nutlin-3 treatments, as shown by the functional activation of endogenous p53. The treatment of B16 cells with pCLPGp19 alone did not result in a reduction of proliferation, yet viral expression was reliable. Exogenous p19Arf was readily detected and was localized to the nucleolus. Since Northern and Western blots also confirmed viral expression, we proposed that p53 was not efficiently activated in B16 cells and that this may have been responsible for the continued proliferation of these cells. Holding p19Arf in the nucleolus due to its interaction with nucleophosmin is thought to prevent interaction of Arf with mdm2, which occurs in the nucleoplasm [28], though this is reversed when DNA damage is induced [29]. We observed that treatment with doxorubicin did not noticeably alter the localization of p19Arf (data not shown), though we used a relatively low drug dose. However, treatment with the DNA-damaging agent doxorubicin was sufficient to activate p53 in both cell lines. Gene transfer studies have shown that introduction of exogenous p14ARF could activate endogenous p53wt more efficiently when both p14ARF and p53 were introduced simultaneously by adenoviral vectors [12,13]. Similarly, we observed that introduction of p19Arf alone was not sufficient to activate p53 in B16 cells. Interestingly, B16 cells were pre-sensitized to drug treatment by prior exposure to pCLPGp19. p19Arf gene transfer followed by treatment with nutlin-3 led to markedly increased p53 activity, cell cycle alteration and reduced viability. Though C6 cells were more permissive to p19Arf or nutlin-3 treatments, their combination was beneficial in increasing the response to p53 stimulation. We propose that p53-responsive vectors may prove beneficial in functional and therapeutic studies of gene transfer in tumor models that present p53wt, especially when combined with drug treatment. Multiple pathways are involved in regulating p53 activity. Since C6 was sensitive to nutlin-3 treatment, we expect that p53 activity was squelched primarily through mdm2. In contrast, B16 cells probably possess additional mechanisms of abrogating p53 activity, since these cells were quite resistant to nutlin-3. Reports in the literature point out that mdmx (mdm4) can render cells resistant to nutlin-3 treatment [17,18] and that the interaction of p19Arf with mdmx is thought to inhibit mdmx activity [19,20]. Western blot analysis of mdmx in the presence or absence of pCLPGp19, doxorubicin or nutlin-3 did not reveal any alteration in protein level or mobility (data not shown). Therefore, the mechanism for the resistance of B16 cells to nutlin-3, as well as the reason for p53 re-activation by the combination of p19Arf and nutlin-3, remains to be determined. Treatment with either p19Arf or nutlin-3 should result in the neutralization of mdm2. However, each acts through a distinct mechanism. Nutlin-3 occupies the site on mdm2 where p53 would otherwise be bound. p19Arf, on the other hand, blocks the E3 ubiquitin ligase activity of mdm2. In this case, the influence of p19Arf may reach beyond p53 and include other factors with which mdm2 interacts, such as Hif1α [30] and PML [31].
Since treatment with pCLPGp19 was quite effective in C6 cells, we conclude that it is possible to use endogenous p53 to drive viral expression of p19Arf and bring about an increase in p53 activation and a concomitant reduction in cell proliferation. The resistance of B16 to p53 activation upon pCLPGp19 or nutlin-3 treatment may be particular to this cell line. However, B16 cells are widely studied as a model for melanoma and are particularly interesting since they can be studied in a syngeneic, immunocompetent animal model. Analysis of tumor formation after ex vivo gene transfer and/or drug treatment revealed findings consistent with those described for the in vitro assays. Here too, inhibition of B16 proliferation and viability was revealed upon combined p19Arf gene transfer and nutlin-3 treatments. At the end of the 15 day observation period, tumor size was not a reliable indicator of the impact of treatment. However, a significant increase in necrotic tissue was revealed in tumors derived from B16 cells treated with both pCLPGp19 and nutlin-3. We did not find increased TUNEL staining upon gene transfer or drug treatments, possibly due to the kinetics of the treatment or due to the advanced stage of necrosis. A concomitant reduction in proliferation, as revealed by BrdU staining, was also seen in tumors arising from the doubly treated cells.

Figure 8 Volume of B16 tumors. B16 cells were transduced with either pCLPG (black bars) or pCLPGp19 (gray bars) and then treated with 25 ng/ml doxorubicin (Dox) or 10 μM nutlin-3, as indicated, before subcutaneous injection of 1 × 10^6 cells in C57BL/6 mice (n = 4/condition). After 15 days, the animals were sacrificed, tumors removed and measured. Tumor volume was determined using the formula V = π/6 × D1 × D2 × D3 (where D is the dimension in mm). Presented is the mean and standard deviation of the observed tumor volumes.

Figure 9 Animal model of B16 tumor formation reveals increased necrosis when cells were treated with both pCLPGp19 and nutlin-3. B16 cells were transduced ex vivo with pCLPG or pCLPGp19 followed by treatment with 25 ng/ml doxorubicin (Dox) or 10 μM nutlin-3 (Nutlin) before subcutaneous implantation of 1 × 10^6 cells in C57BL/6 mice (n = 4/condition). After 15 days, the animals were sacrificed and tumors were analyzed histologically upon staining with HE. The ratio between necrotic and non-necrotic areas in photomicrographs of the HE sections was determined using ImageJ software and is presented as the mean and standard deviation of 15 to 20 fields from each of 3 sections from each tumor (*, Student's t-test, p = 0.0012).

Figure 10 Decreased proliferation, as revealed by BrdU incorporation, was associated with double treatment conditions. Prior to sacrifice, animals were injected i.p. with a solution of BrdU. Paraffin embedded sections of tumors were probed with an anti-BrdU antibody conjugated to alkaline phosphatase and revealed through a peroxidase reaction.

Conclusions

In this study, we have explored the functional activation of endogenous p53 when p19Arf was introduced by a p53-responsive vector. In the absence of drug treatment, we observed reliable expression of exogenous p19Arf driven by endogenous p53. C6 cells were quite sensitive to either p19Arf gene transfer or nutlin-3 treatment alone, and when combined these treatments yielded an additive effect.
B16 cells were generally quite resistant to p19Arf or nutlin-3 treatments, though the combination of these resulted in markedly increased p53 activity as well as reduced viability. To the best of our knowledge, this work represents the first attempt to unite p19Arf gene transfer and nutlin-3 treatments. Here we have shown that combined treatments activated p53 and reduced the viability of B16 cells.
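For reference, the ellipsoid-style volume formula quoted in the Figure 8 caption (V = π/6 × D1 × D2 × D3) can be evaluated as in the minimal sketch below; the dimensions used are hypothetical and are not measurements from the study.

```python
import math

def tumor_volume_mm3(d1: float, d2: float, d3: float) -> float:
    """Ellipsoid-style tumor volume V = pi/6 * D1 * D2 * D3 (dimensions in mm)."""
    return math.pi / 6.0 * d1 * d2 * d3

# Hypothetical dimensions for illustration only:
print(round(tumor_volume_mm3(10.0, 8.0, 6.0), 1))  # ~251.3 mm^3
```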
Timing between age at first sexual intercourse and age at first use of contraception among adolescents and young adults in Niger: What role do education and place of residence play?

Background: Low contraceptive use among women in Niger is one of the main causes of early childbearing and unwanted pregnancies, which affect maternal and child health. Education and place of residence have been cited as factors affecting modern contraceptive use.

Methods: We investigated the separate and joint effects of place of residence and education on the time to modern contraceptive uptake among women aged 15-24 in Niger. The study used data from the second round of the 2016 Niger Performance Monitoring and Accountability 2020 (PMA2020) project. Survival analysis was applied to 830 women.

Results: Nelson-Aalen curves show that urban women had higher hazards of (and shorter delays in) modern contraceptive uptake compared with their rural counterparts. Also, the higher the level of education, the higher the hazards of (and the shorter the delays in) modern contraceptive uptake. Findings from the multivariate (survival) analysis confirm these figures and provide the net effect of the place of residence on modern contraceptive uptake. Whether a woman lives in an urban or rural area of Niger, what matters more is her level of education.

Conclusions: Family planning programmes for adolescent and young women should focus more on women with no education and those who are illiterate.

Introduction

Niger presents worrying characteristics for youth sexual and reproductive health. The country has the highest fertility rate as well as the lowest ages at marriage and childbearing in Africa and the world (Barroy et al., 2015). With a total fertility rate of 7.6 children per woman and a population growth rate of 3.9%, Niger's population increased from 7,292,000 in 1989 to 21,311,000 in 2018 (United Nations, 2019). Early marriage and, especially, early childbearing are the main drivers of the high population growth and fertility in Niger (Nouhou, 2016). In general, early marriage and childbearing are also associated with an increased risk of maternal and newborn death (Gibbs et al., 2012). The 2018 Niger Performance Monitoring and Accountability 2020 (PMA2020) family planning brief reports that more than 70.9% of women aged 18-24 were married by age 18 (INS-Niger, 2018). The brief also reports that one-third of Nigerien women aged 18-24 had their first birth by age 18. Moreover, estimates from UNICEF (2008) show that a woman's lifetime risk of dying due to complications caused by pregnancy or childbirth was one in seven. This represents 14,000 deaths among Nigerien mothers from pregnancy-related causes. The maternal mortality rate in Niger was 553 per 100,000 live births in 2015 (Roser & Ritchie, 2019b) and the under-five mortality rate was 84.5 deaths per 1,000 live births in 2017 (Roser, 2019a). These figures reflect the poor health of both mother and child in Niger. One of the major causes of maternal and childhood mortality remains the early age at first birth, which typically depends on the age at first sexual intercourse. Evidence has shown that there is an inverse association between age at first sexual intercourse and early childbearing (Gebreselassie & Govindasamy, 2013; Khatiwada et al., 2013; Meekers, 1993; Westoff, 2003).
Data from the 2012 Niger Multiple Indicator Cluster Survey (MICS) and Demographic and Health Survey (DHS) show that 28% of women had already had their first sexual intercourse by age 15, and 45% had given birth by age 18 (INS-Niger et al., 2013). These figures are among the highest in sub-Saharan Africa (ICF International, 2018). This is an age at which adolescents are most susceptible to sexually transmitted infections, including HIV/AIDS and human papillomavirus, and to other health complications (Fagbamigbe et al., 2015). Modern contraception is one of the means of promoting good reproductive health, especially among adolescents and young women. Sub-Saharan Africa has one of the highest rates of teenage pregnancy (Yakubu & Salisu, 2018) and the lowest contraceptive prevalence rates (Radovich et al., 2018) in the world. The low contraceptive use among Nigerien adolescents (4%) and youths (14%) (ICF International, 2018) is one of the main causes of early childbearing and unwanted pregnancies, which have many consequences, including unsafe abortion, high maternal and child mortality, and reduced earning potential and educational achievement. This calls for an increasing interest in expanding contraceptive use among adolescents and youths, as advocated by the Family Planning 2020 global partnership (Family Planning 2020, 2015). The research literature has shown that the sooner contraceptive methods are used, especially modern methods (pill, intrauterine device, injectables, female condom, male condom, female sterilization, male sterilization, implants and lactational amenorrhea), the higher the odds of preventing early childbirth (Abma & Martinez, 2017; Guleria et al., 2017; Shu et al., 2016). Many factors affecting the use of contraceptive methods in the sub-Saharan African context have been identified, and women's level of education and place of residence are repeatedly cited as major determinants of contraceptive use (Fawole & Adeoye, 2015; Nair & Devi, 2017; Ochako et al., 2017). Most of these studies have examined the separate effects of place of residence and education on contraceptive use. Moreover, few studies have been carried out on factors affecting the timing between first sexual intercourse and first use of contraceptive methods. Furthermore, the third Sustainable Development Goal (SDG) aims to reduce maternal mortality to less than 70 deaths per 100,000 live births and under-five mortality to less than 25 deaths per 1,000 live births (Gostin & Friedman, 2015). There is an expectation that through the use of modern contraceptive methods (from the very first sexual initiation), there will be an improvement in maternal and child health in Niger. This study aims to investigate both the separate and joint effects of place of residence and education on the timing between first sexual intercourse and first use of modern methods of contraception.

Data

The study draws on data from the Performance Monitoring and Accountability 2020 (PMA2020) project. PMA2020 supports low-cost, rapid-turnaround surveys monitoring key indicators for family planning, water, sanitation and hygiene, and other health and development indicators in 11 low- to middle-income countries (for more details concerning the methods and objectives of PMA2020, see Johns Hopkins University, 2013). The PMA2020 project received approval to collect data from the Institutional Review Boards at the Johns Hopkins Bloomberg School of Public Health and the Niger Institute of Statistics.
We use data from the second round of the Niger PMA2020 survey, conducted from February to April 2016 (Performance Monitoring and Accountability 2020 Project, 2016). The 2016 Niger PMA2020 survey used a two-stage cluster sample design with Niamey (the capital city of Niger), urban areas outside of Niamey, and rural areas as strata. This sample design enabled national and sub-national estimates to be produced. The study selected 84 enumeration areas (EAs) based on the 2012 Niger census sample frame, and randomly selected 35 households within each EA. Then, all resident women of reproductive age within the sampled households were interviewed. The final dataset included 2,785 households (96.2%) and 3,048 women of reproductive age (95.5%). Our study population consisted of 1,238 women aged 15-24 at the time of the survey. Several studies in the areas of sexual and reproductive health have defined young women as those falling within the ages of 15-24 years (Ali & Cleland, 2005; Dellar et al., 2015; Gouws et al., 2008; Lauby & Stark, 1988). Furthermore, young women undergo significant transitions in lifestyle, maturity, and legal rights between these ages, which places them at different vulnerabilities at different time points (Dellar et al., 2015). Of the 1,238 women aged 15-24, 749 (60.5%) had already had their first sexual intercourse. Among these 749 women, we excluded 31 who reported having their first sexual intercourse before age 4, 26 who did not know when they first had sex, and 18 with missing values. This resulted in a sample of 674 young women. Of these, 671 were able to provide information on whether and, if so, when they had used a modern method of contraception for the first time. The corresponding weighted number of women aged 15-24 who had ever had sex is 830 (69.6% of women aged 15-24). None of the remaining 567 women (who had never had sex or did not give a consistent answer about the experience of first sexual intercourse) used contraception.

Measures

The 2016 Niger PMA2020 survey questionnaire was extensive and included sections on reproduction, pregnancy and fertility preferences, contraception, sexual activity, menstrual hygiene, and location. In particular, the questionnaire provided data on family planning use of the type gathered in the Demographic and Health Surveys. Throughout this study, the pill, intrauterine device, injectables, female condom, male condom, female sterilization, male sterilization, implants and lactational amenorrhea are referred to as modern methods of contraception. The survey also collected information on the respondent's socioeconomic characteristics, including educational attainment, marital status, and location. The dependent variable of the study is the time to modern contraceptive uptake since first sexual intercourse. This time is calculated in years and is equal to the age at first modern contraceptive use minus the age at first sexual intercourse for modern contraceptive users. For non-users, it is equal to the current age minus the age at first sexual intercourse. The PMA2020 survey asked four questions to determine the time to first modern contraceptive use: "How old were you when you first had sexual intercourse?", "Have you ever done anything or tried in any way to delay or avoid getting pregnant?", "How old were you when you first used a method to delay or avoid getting pregnant?", and "Which method did you first use to delay or avoid getting pregnant?".
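A minimal sketch of how the censored outcome just described could be constructed from individual records follows; the column names are hypothetical, not the actual PMA2020 variable names.

```python
import numpy as np
import pandas as pd

# Hypothetical records; the real PMA2020 variable names differ.
df = pd.DataFrame({
    "age": [19, 22, 24],
    "age_first_sex": [16, 18, 15],
    "age_first_modern_fp": [18, np.nan, 20],  # NaN = never used a modern method
})

used = df["age_first_modern_fp"].notna()
df["event"] = used.astype(int)  # 1 = uptake observed, 0 = right-censored
df["time"] = np.where(
    used,
    df["age_first_modern_fp"] - df["age_first_sex"],  # users: uptake age minus sexual-debut age
    df["age"] - df["age_first_sex"],                  # non-users: censored at survey age
)
print(df[["time", "event"]])
```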
The main independent variables are the place of residence and education. During the 2016 Niger PMA2020 survey, women were asked the following question about their educational attainment: "What is the highest level of school you attended?". The possible response categories to this question were: 'Never attended', 'Primary', 'Secondary', and 'Higher'. Only a very small proportion (0.4%) of young women had a higher level of education. To avoid inaccurate p-values in the test analyses, we combined the 'Secondary' and 'Higher' categories into one ('Secondary or higher'). The place of residence is divided into two categories: urban and rural areas. As mentioned above, the odds of modern contraceptive use usually depend (non-exhaustively) on availability by location (urban/rural residence) and knowledge (education) (Kandala et al., 2015; Kravdal, 2002; Rutenberg et al., 1991). In addition to education and place of residence, we included age and household wealth tertile as control variables. Empirical studies point to wealth as an affordability factor, and to age as a confidence factor, affecting differences in modern contraceptive use across social groups (O'Regan & Thompson, 2017; Wambui et al., 2009).

Analytic methods

We used survival analysis, since the main outcome of this study (time to modern contraceptive uptake) is a time-to-event variable. The survival time is assumed to begin when a woman has her first sexual intercourse and to end when she starts using modern contraceptives. Young women who had never used any modern contraceptive at the time of the interview are right-censored as of the date of the survey. Let T be a non-negative random variable representing the duration from first sexual intercourse to modern contraceptive uptake. The subjects at risk are the 671 (830 weighted) women aged 15-24, followed up since their first sexual intercourse. The observation continues until time t. If a woman has taken up a modern contraceptive, the time t is the time of modern contraceptive uptake; otherwise, the time t is the time of the survey (in 2016). The observation ends for a woman at time T = t if she has started using any modern contraceptive method. For example, T = 0 if a woman started using modern contraception within one year of her first sexual intercourse. Assuming that T is a continuous random variable with probability density function f(t), the cumulative distribution function (c.d.f.), giving the probability that a woman has taken up a modern contraceptive by duration t, is:

F(t) = \Pr(T \le t) = \int_0^t f(u)\,du

The complement of the c.d.f. (S(t) = 1 - F(t)), which represents the probability that a woman is yet to take up a modern contraceptive by duration t, is the survival function. In this study, we defined the survival function in terms of the hazard function, which is the instantaneous rate of occurrence of the event of interest (here, the uptake of a modern contraceptive since first sexual intercourse). The hazard function is defined as:

h(t) = \lim_{dt \to 0} \frac{\Pr(t \le T < t + dt \mid T \ge t)}{dt}

The numerator of this expression is the conditional probability that a woman takes up a modern contraceptive in the interval [t, t + dt) given that she has not used a modern contraceptive before. The denominator is the width of the interval. Dividing one by the other, we obtain a rate of uptake of modern contraceptive per unit of time (i.e., per year). Taking the limit as the width of the interval goes to zero, we obtain an instantaneous rate of uptake of modern contraceptive. The survival analysis for this paper proceeded in two steps.
First, we used Nelson-Aalen nonparametric estimates of the cumulative hazard rate function for the descriptive analysis. In particular, we drew cumulative hazard rate curves of time to modern contraceptive uptake for all women aged 15-24, and by groups (stratifying by education and place of residence). We used log-rank tests to test the equality of the cumulative hazard rate functions across groups. Second, we employed (multivariate) Cox semi-parametric proportional-hazards models to predict how the place of residence and education affect the time to uptake of modern contraceptive. The hazard rate in a Cox proportional-hazards model is defined as:

h(t \mid x) = h_0(t)\,\exp(x'\beta)

where x is the vector of independent variables (education, place of residence, age, and wealth tertile), h_0(t) is the baseline hazard function, and β is the vector of coefficients. The multivariate analysis consisted of three different models. All models were controlled for age and wealth tertile. The first model (Model 1) presents the effect of the place of residence on the time to uptake of modern contraceptive. The second model (Model 2) is equal to Model 1 plus the education variable. The third model (Model 3) shows how the interaction between the place of residence and education influences the timing of modern contraceptive uptake. We applied sampling weights for the analysis. Stata 13 was used to analyse the data.

Results

We first report descriptive statistics of the variables used in the analysis in Table 1. The mean age at first sexual intercourse among women aged 15-24 was 19.7 years. Just under one-quarter (197 out of 830; 23.7%) of the respondents (ever-sexually-active women aged 15-24) had used contraception. Over three-quarters of (first-time) contraceptive users (156 out of 197) had used modern methods. The pill is the most widely used modern method, accounting for more than three-fifths of modern contraceptive use and 11.8% of the respondents. Of all women aged 15-24, very few (0.6%) had used condoms for their first contraceptive experience. Table 1 also indicates that the vast majority of the respondents (86.4%) were living in rural areas. At the time of the survey, only 12.2% of ever-sexually-active women aged 15-24 had attained a secondary or higher level of education, whereas more than two-thirds (68.3%) had never attended school. The mean time between first sexual intercourse and first use of a modern contraceptive was 3 years, with a standard deviation of 2.1 years. We then compared the incidence rate of modern contraceptive uptake among ever-sexually-active young women by place of residence (Figure 1B) and education (Figure 1C). Figure 1B reveals two important findings. First, the incidence rate of modern contraceptive uptake increases very fast in the first six years after sexual initiation. Between six and eight years post sexual initiation, the incidence rate continues to rise steadily, though more slowly than in the first six years. Second, the incidence rate of modern contraceptive uptake since sexual initiation was higher among urban women than among rural women. In addition, the cumulative hazard curve of modern contraceptive uptake reached its peak faster among urban women (61.8% at the tenth year) than among rural women (37.8% at the fourteenth year). As expected, the incidence rate of modern contraceptive uptake is higher among women with higher education than among those with no education (Figure 1C). Also, the higher the educational level, the faster the modern contraceptive uptake.
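The two-step workflow just described (Nelson-Aalen curves by group with log-rank tests, then weighted Cox models) could be reproduced along these lines with the Python lifelines library. This is an illustrative sketch, not the authors' Stata 13 code: it reuses the df from the earlier sketch, assumed to be extended with hypothetical residence, education, wealth, age and weight columns, and it needs a recent lifelines version with formula support.

```python
from lifelines import NelsonAalenFitter, CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

# Step 1: Nelson-Aalen cumulative hazard curves by place of residence,
# plus a log-rank test for equality across groups.
ax = None
naf = NelsonAalenFitter()
for group, sub in df.groupby("residence"):
    naf.fit(sub["time"], event_observed=sub["event"], label=str(group))
    ax = naf.plot(ax=ax)  # Nelson-Aalen estimates cumulative hazards, not probabilities
print(multivariate_logrank_test(df["time"], df["residence"], df["event"]).p_value)

# Step 2: weighted Cox proportional-hazards model. The interaction-only term
# mirrors Model 3, which omits the separate main effects of the interacted variables.
cph = CoxPHFitter()
cph.fit(
    df,
    duration_col="time",
    event_col="event",
    weights_col="weight",  # sampling weights
    formula="age + C(wealth) + C(residence):C(education)",
    robust=True,           # robust variance is advisable with sampling weights
)
cph.print_summary()        # hazard ratios with 95% CIs
```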
Figure 1D shows how the interaction between place of residence and education is associated with the chances of modern contraceptive uptake. Urban women with primary or higher education and rural women with secondary or higher education had the highest incidence of (and shortest delays in) modern contraceptive uptake. The log-rank test for equality of survival functions shows significant differences in the cumulative hazard functions across place-of-residence and education groups. Table 2 presents the results of Cox semi-parametric proportional-hazards models of the effects of the place of residence and education on time to modern contraceptive uptake. We fitted three models for the survival data. Model 1 was restricted to place of residence (with wealth and age as control variables). Significant differences are observed between urban and rural women. The hazard of modern contraceptive uptake was 1.73 (95% CI: 1.09-2.73) times higher for urban women than for rural women. According to wealth status, the hazard of modern contraceptive uptake for women in the richest wealth tertile was more than twice that for women in the poorest wealth tertile. The hazard of modern contraceptive uptake increased by 11% per one-year increase in age, all other factors being equal. In Model 2, we introduced education, which was not included in Model 1. There was no significant difference in modern contraceptive uptake between urban and rural women after including education. Findings from Model 2 in Table 2 show that the higher the education, the higher the hazard of modern contraceptive initiation. The hazard ratio (HR) of modern contraceptive uptake was higher among women with primary education (HR = 1.80, p < 0.01, 95% CI: 1.20-2.72) and women with at least secondary education (HR = 2.30, p < 0.01, 95% CI: 1.43-3.70) compared with their peers with no education. Model 3 included the interaction terms between place of residence and education without the separate main-effect variables (education and place of residence). Urban women with primary education had the highest hazard of modern contraceptive uptake (HR = 3.17, p < 0.01, 95% CI: 1.54-6.53). They were followed by their peers with secondary or higher education (HR = 2.87, p < 0.01, 95% CI: 1.42-5.79) and rural women with secondary or higher education (HR = 2.82, p < 0.01, 95% CI: 1.60-4.96). Also, women with no education had similar hazard rates of modern contraceptive uptake whether living in urban or rural areas. In other words, educational status is the most discriminating factor for modern contraceptive uptake.

Discussion

Identifying the investments needed to improve family planning programmes in developing countries remains a source of concern for health advocates and professionals, policy makers and funding agencies (Sedgh et al., 2016). Improvement in modern contraception use should enable women (irrespective of their place of residence and socioeconomic status) to prevent reproductive health complications. This study used data from a nationally representative sample of young Nigerien women (aged 15-24 years) to examine how the place of residence and education, as well as their interaction, affect the time to uptake of modern contraceptives. Findings from the analysis show that residing in urban areas, having a high educational level and belonging to the richest wealth tertile are associated with higher hazard rates of (and shorter delays in) modern contraceptive uptake. Moreover, the hazard of using modern contraceptives among young women also increases with age.
Findings from the multivariate (survival) analysis showed a minor effect of the place of residence on the time to modern contraceptive uptake when adjusted for education (and the main control variables: age and wealth tertile). Indeed, the difference between urban and rural women in modern contraceptive uptake became non-significant after adjusting for the level of education. Thus, the level of education seems to counterbalance the longer time to uptake of modern contraceptives among rural women (aged 15-24) in Niger. Moreover, the positive relationship between education and the hazard rates of modern contraceptive uptake confirms previous work on the association between education and contraceptive use (Khan & Mishra, 2008; O'Regan & Thompson, 2017; Selassie, 2017). Better education is not only positively associated with early use of contraception, but also contributes to the empowerment of women (Family Planning 2020, 2014; Tadesse et al., 2013; Tuladhar et al., 2013). Also, women who attend at least a primary level of education (and those who are literate) are more likely to know about family planning services through the media (Fagbamigbe et al., 2015; Shapiro & Tambashe, 1994). Finally, we extended our analysis by introducing interaction terms between the place of residence and education. In so doing, statistically significant differences emerged. Urban women with at least primary education and rural women with secondary or higher education have approximately the same hazards of modern contraceptive uptake. This confirms the key role played by education in modern contraceptive uptake among young women in Niger.

Conclusion

Our results suggest that the delay in modern contraceptive uptake is relatively long. This delay is longer among rural and less educated women than among urban and more educated women. Therefore, adolescents and youths in Niger, especially those living in rural areas and those with a low educational level, are at risk of early childbirth and its consequences. These consequences include, among others, child and maternal mortality, school dropout and low participation in the labour market. In consequence, there is a need for policies and programmes that empower adolescents and youths by improving information about, access to, and utilization of reproductive health services. It is necessary to reduce all barriers to access and to improve the contraceptive distribution system by integrating family planning activities into national policies on reproductive health, with more focus on women with no education.

Data availability

Underlying data used in this study were obtained from PMA2020 (Niger Round 2, 2016) (Performance Monitoring and Accountability 2020 Project, 2016). All PMA2020 datasets are free to download and use, although users are required to register for a PMA2020 dataset account. The request form must include a brief description of the research or analysis that the user would like to conduct using the requested data. Information about the data and terms of use are available here.

Grant information

The PMA2020

Reviewer comments

- The information regarding exclusion criteria would have been easier to understand if presented using a flow chart rather than narratives.
- Why only focus on age and residence as the adjusting variables? What about other independent variables, such as partner information (age and education levels), family planning media exposure, marital status, religion, etc.? All these have also been shown to affect modern contraceptive use.
- I am concerned with the aspect of residual confounding caused by leaving these variables out of the analysis.
- The dependent variable was measured in years. Is it complete years? What if a woman started using a modern method 3 months after her first sexual encounter? How would she be recorded?
- Do you mean multivariate or multivariable analysis? Please check these, as they are totally different.
- I have some concerns with the model building process. Generally, model 1 should be an empty model containing the crude estimates of an exposure variable on the outcome, in this case a model with education alone or a model with residence alone. Model 2 continues to adjust for other exposure variables, estimating a confounding effect when an exposure variable results in a 10%-15% change in the estimates of the primary predictor of interest. An interaction is looked at based on the plausibility of its effect; for instance, do we have reason to believe that modern contraceptive use differs among urban women based on their levels of education? Similarly with women residing in rural areas.
- Finally, were model diagnostics performed for the Cox PH models? Was the PH assumption holding or was it violated? Was the data both stset and svyset?

Results

- In the narratives you mention the frequencies of the variables; these are not shown in Table 1. It would be better to present them as well, since they are present in your narratives.
- It would also have been interesting to show how many women utilized modern contraceptives during the first sexual encounter, and who these women are (if this information is available in the dataset).
- Why was it important to show in Table 1 the time between first sexual encounter and the date of interview? What was this information telling the readers?
- The information on median time to contraceptive use should appear earlier in the narratives since it is of much interest.
- Please correct the y-axis in Figure 1 to say cumulative hazards and not probabilities. Nelson-Aalen curves are not probabilities.
- Please indicate the log-rank p-values on the graphs in order to show the significant differences in modern contraceptive uptake.
- Most of the interpretations are incorrect. For survival-analysis Cox PH models we interpret the "hazards", not probabilities/chances/likelihoods. Please correct this (major comment).
- For Model 3: Why did you leave out the main effects and only include the interaction term alone? This information is necessary and important.
- Why focus on a significance level of <0.1 rather than the traditional <0.05 for Table 2? Please provide
The Impact of Material Selection on Durability of Exhaust Valve Faces of a Ship Engine – A Case Study

Two alloys were used in order to extend the service life of a marine engine exhaust valve head. Layers of cobalt-base alloys were made of powders with the following chemical compositions: the layer marked L12: C-1.55%; Si-1.21%; Cr-29.7%; W-9%; Ni-2%; Mo<0.01%; Fe-1.7%; Co-54.83%; and the layer marked N: C-1.45%; Co-38.9%; Cr-24.13%; Ni-10.43%; W-8.75%; Fe-7.64%; Mo-7.56%; Si-2.59%. The base metal was valve steel after heat treatment. It consisted of: C-0.374%; Cr-9.34%; Mn-0.402%; Ni-0.344%; Si-2.46%; Mo-0.822%; P-0.0162%; S-0.001%. Layers on the valve faces were produced by laser cladding using the HPDL ROFIN DL020 laser. Grinding is a very popular method of regenerating the mating surfaces of the valve seat and valve head. A properly performed grinding operation ensures dimensional and shape accuracy of the surface from accuracy class 7 to 5 and a surface roughness Ra not less than 0.16 μm, depending on the object and method of grinding. The 75H and 150S types are a significantly simplified form of valve face grinders. Finishing treatment was carried out with a Chris-Marine AB75H grinder on a grinding stand equipped with a compressed air system; the stand was designed by the author. The grinder was set up relative to the surface of the valve stem so that the grinding angle of the valve face was 30°+10°. A flat grinding wheel T1CRA54-K was used for machining. The plunge feed was 0.01 mm/rev. The thickness of the welded layer after grinding was 1.2 mm. Both valves were installed in a ship's engine and operated under real service conditions. After 2000 hours of operation, the valve marked N was damaged. The valve marked L12 showed no damage and remained in operation for the next 1000 hours.

INTRODUCTION

Proper selection of materials for specific applications is a key issue in ensuring the correct and long-term use of machine parts and installations. Special difficulties arise when a structural element is subjected to complex conditions, such as:
• variable mechanical loads, especially fatigue,
• friction wear processes,
• high temperatures,
• an aggressive working environment.
Such a set of material requirements is found in the case of marine engine exhaust valves operating under heavy load. The operating factors having the greatest impact on the technical condition of the exhaust valves of marine engines are primarily [10,11,12]:
• the combustion process, which (due to incorrect injection timing or poor fuel quality) accelerates the wear of valve components exposed to exhaust gases;
• valve clearance: too low a clearance value can lead, in a short time, to burning of the valve seat, while too high a clearance leads, in turn, to long-term mechanical pounding of the seat;
• the cooling quality of the valve seat; its deterioration shortens the service life of both the seat and the head of the exhaust valve.
It has been confirmed that plain steel valves do not meet these requirements. Therefore, the face area subjected to the most severe conditions is clad with special alloys [1,2,3,5,8]. These materials are required to have, above all, high corrosion resistance in the exhaust environment and resistance to wear (high hardness under operating conditions) [4,6,7]. Despite the use of such solutions, various types of damage listed below may occur:
• loss of tightness caused by corrosion pits resulting from erosive exhaust fumes (Fig. 1b),
• overburning of the valve head edges caused by the flow of extremely hot gases through the leaks (Fig. 1a),
• cracks, scratches and spalling caused by fatigue of the face layer (Fig. 1c).
Typical materials used for hardfaced surfaces are nickel-based alloys or cobalt-based alloys of the Stellite, Deloro and Tribaloy type. Cobalt-base alloys have been widely used because of their good combination of high rupture strength and excellent hot corrosion resistance at high temperatures. Clad layers made of cobalt-base alloys consist of a continuous fcc matrix and a variety of carbides, mainly primary ones such as M23C6, M7C3 and M6C, which form as the alloys solidify. Subsequent aging or service at elevated temperatures causes a large amount of secondary carbide precipitation, commonly M23C6. The fine secondary carbide precipitates are more effective in strengthening the alloy matrix [15,26]. In these alloys, chromium forms carbides, which strengthen the cobalt matrix. Other elements are added to improve various properties: tungsten and molybdenum have large atomic radii, which distort the lattice, providing strength [13].

EXPERIMENTAL

Usually, such tests are carried out using a simulation of the engine chamber [11]. This paper presents the behaviour of clad layers made of two alloys on the faces of exhaust valves operating in real conditions, during normal work on a ship. In both cases, laser welding was performed using the HPDL ROFIN DL020 laser. The process parameters are specified in Table 1. Laser welding of the valve face was carried out with a high-power diode laser (HPDL) from ROFIN SINAR, type DL 020, with a maximum beam power of 2.3 kW. The laser has a head equipped with two diode stacks, a laser beam power control system and a cooling system for the diode stacks. The objects of the study were two materials, marked L12 and N. Both were cobalt-base alloys. These materials are characterized by high resistance to abrasion and to corrosive conditions [3,11,17,28]. The alloy marked L12 was of the Stellite 12 type, and it was made from a powder whose chemical composition was (in % by mass): C-1.55%; Si-1.21%; Cr-29.7%; W-9.0%; Ni-2.0%; Mo<0.01%; Fe-1.7%; Co-54.83%. The second one, marked N in this article, was similar to Stellite 21; however, it was enriched with nickel and tungsten. The enrichment in tungsten was made in order to improve wear resistance [18]. Nickel alloys are widely used in marine valve technology; these alloys may be cheaper than cobalt alloys and might be a better option. However, ship engine valves clad with nickel-alloyed materials are particularly exposed to high-temperature corrosion. In this case it is absolutely necessary to operate the engine on fuel with minimal sulphur and vanadium content. Nickel alloys have been used in plasma cladding, but the fuel had to be low in sulphur with a small amount of vanadium, because nickel leads to high-temperature corrosion. The alloys Nimonic 105, Nimonic 80A and the maraging grades C250, C300 and C350 contain nickel, cobalt and molybdenum as their main components. They are used for ship casings, hulls, rotor blades and steam turbines (Table 2). After 2000 hours of service, valve L12 was still intact, without any traces of serious failure, but valve N was completely destroyed. Figure 2 presents the state of valve N after operation. On the right side of the figure, the broken part (A) of the valve is visible, and the clad layer (B) has practically disappeared. The damage also affects the valve plug (C). The L12 valve, after 3000 hours of operation, did not show any significant signs of wear or damage (Fig. 3).
The L12 valve shows only slight signs of degradation after the longer operating time. Some regions of scale are visible; however, part of the face is still shiny and smooth. Detailed investigations were carried out on the face surfaces of both valves and on the microstructures on cross-sections of the analyzed valves. In addition, hardness measurements were carried out to assess the degree of material degradation. During operation and heat exposure, diffusion processes occur in the clad material, leading to changes in the structure. Mainly, the formation of oxides on the surface causes the decomposition of carbides and a decrease in the hardness of the material. Microhardness tests are very sensitive to microstructural changes.

FAILURE ANALYSIS

A scanning electron microscope (SEM JSM7800F with EDS Octane Elite) was used for the metallographic examinations, which allowed the analysis of the valve face surfaces and of the microstructures on cross-sections perpendicular to the face surface. Figure 4 shows the surface condition of the N valve after service. Significant differences in the appearance of the valve face surface were observed. This also indicates different material properties in different areas of the face, which is not acceptable. The observed destruction of the face can be caused by various factors. It may be the result of incorrect selection of the cladding materials or improper parameters of the cladding process, but mechanical causes of damage, i.e., overloading or improper valve grinding, should also be taken into account. In this work, only the chemical composition of the clad and the changes in microstructure were analyzed. The valve was cut into samples, mounted, and a specimen was prepared, which was observed with the scanning electron microscope (SEM). Figure 5 presents the microstructure of the destroyed valve N. The microstructure is typical of cladding technology [3,4,10,26]: a dendritic structure (dark) and interdendritic regions (light at low magnification) are visible. The dendritic regions are a solution of alloying additions in cobalt, mainly nickel, iron, chromium, tungsten and molybdenum. The interdendritic regions consist of a mixture of carbides and eutectics (Fig. 5a). In previous tests of clads made of the same powder as that designated L12, XRD analysis identified the presence of carbides, mainly M12C (Co6W6C), M7C3 and M23C6 (Cr23C6). EDS analysis shows a significant presence of tungsten, chromium and molybdenum. The L12 valve, undamaged after 3000 hours of operation, was tested in the same way. Throughout the operation the valve functioned without any problems. It was dismounted and inspected. Despite the proper operation of the engine throughout the entire test period, the contact surface of the outlet valve face was slightly degraded. Keyence equipment was used for surface analysis. This equipment is a microscope coupled with a profilometer and allows the surface profile of targets to be measured in the X and Z directions. The height, width or gap of a surface profile can be measured. Observation of the face surface showed the presence of crumbling scale and areas of exposed metal surface. In Figure 6, a multilayer scale with a tendency to crack and delaminate is visible. However, the largest layer thickness did not exceed 60 μm. SEM analysis of the valve face marked L12 showed slight plastic deformation of the surface, which can be interpreted as the effect of plastic deformation during work; this is illustrated in Figure 7.
SEM observation of the cladding layer cross-section does not show any significant changes in relation to the typical microstructure of the clad layer. However, the presence of the scale on the face surface indicates changes in chemical composition resulting from diffusion processes. The EDS analysis results for the clad area just below the scale were as follows: Si-2.49%; Cr-23.12%; Fe-31.83%; Co-35.30%; Ni-0.60%; W-5.79%; Mo-0.86%. This may cause changes in material hardness, because the scale is mainly composed of chromium oxide, which has protective properties. This is also the reason for using alloys with such a high chromium content. Chromium is mainly contained in carbide precipitates, and these are also the main source of chromium diffusing to the surface. Degradation of the carbides decreases the hardness of the material [10,24,25].

HARDNESS MEASUREMENT

After the operation period, the hardness profiles across the thickness of the clad and substrate were measured on an FM800 hardness tester at a 49 N load using the Vickers method. All hardness data reported here are statistical averages of at least four measurements. Due to the technology of applying the layer, the hardness measurement was carried out according to the developed scheme shown in Figure 9. Due to the heterogeneous structure of the material, four measurement series were made, and the mean hardness value and the confidence interval were determined. The non-uniform hardness across the coating was a result of the multilayer production process and the structure of this layer. A heat-affected zone (HAZ) was observed in the steel under the clad layer. From the morphological point of view, the measured differences in hardness were affected by the dendrite structure and can also be attributed to the morphology of the carbides [6,12,16]. This phenomenon is related to the multiple heating and cooling cycles during the multilayer cladding process. Exposure to high temperature and the exhaust gas atmosphere may lead to changes in hardness, especially in the upper part of the clad. The results are shown in Figures 10-12. In each of the examined cases, the hardness of the clad layer was significantly higher than the hardness of the substrate. In the case of the L12 sample, the hardness was found to be in the range 473 to 581 HV5.0 at a distance of 3 mm from the steel-clad interface (close to the clad top surface) and remained at a similar value down to a distance of 1 mm from the interface. At the fusion line, the hardness fell rapidly to 231-251 HV5.0. The obtained results indicate a uniform hardness of the entire clad. In the case of the N sample, the hardness was found to be in the range 480 to 581 HV5.0 at a distance of 3 mm from the steel-clad interface (close to the clad top surface), similar to L12, but deeper in the clad the hardness decreased to 195-447 HV5.0. This means large changes in hardness across the cross-section. In both cases no significant decrease in hardness was observed close to the top surface, which indicates that, despite the formation of the oxide layer, no degradation of the material should occur.

CONCLUSIONS

1. The cladding process provided both clad layers on the valve faces with a correct structure, typical of the cladding process, i.e., a dendritic structure. The dendritic regions are a solution of alloying additions in cobalt, mainly nickel, iron, chromium, tungsten and molybdenum. The interdendritic regions consist of a mixture of eutectics and carbides, identified as mainly M12C (Co6W6C), M7C3 and M23C6 (Cr23C6).
2. The analyses confirmed the usefulness of the Stellite 12 type alloy (in this case, the clad layer marked L12) for operation in a marine engine. The experimental composition marked N was prematurely damaged; however, microstructure analysis showed no abnormalities, and the obtained images of the L12 and N microstructures showed significant similarities.
3. Hardness measurements showed a stable hardness of the L12 clad, higher than the hardness of the substrate, whereas a constant decrease in the hardness of the N clad across its thickness was observed, which could affect its durability.
4. It is also important to consider a possible mechanical factor; this would be consistent with the irregular wear of the seat in the undamaged area of the N valve. Improper valve fitting may lead to premature valve failure. Incorrect grinding-in could cause the valve to leak and, as a consequence, allow hot exhaust gases to blow through; this would cause a local increase in temperature and intense destruction.
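As a side note on the hardness statistics quoted above (averages of at least four Vickers measurements with a confidence interval), a minimal sketch of that computation follows; the HV5.0 readings are made up for illustration and are not the paper's data.

```python
import numpy as np
from scipy import stats

# Hypothetical HV5.0 readings for one measurement series (not the paper's data).
readings = np.array([512.0, 498.0, 527.0, 505.0])

n = readings.size
mean = readings.mean()
# Sample standard deviation (ddof=1) and standard error of the mean.
sem = readings.std(ddof=1) / np.sqrt(n)
# Two-sided 95% confidence interval using Student's t with n-1 degrees of freedom.
t_crit = stats.t.ppf(0.975, df=n - 1)
print(f"mean = {mean:.1f} HV5.0, "
      f"95% CI = ({mean - t_crit * sem:.1f}, {mean + t_crit * sem:.1f})")
```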
Cathodic generation of reactive (phenylthio)difluoromethyl species and its reactions: mechanistic aspects and synthetic applications

The cathodic reduction of bromodifluoromethyl phenyl sulfide (1) using o-phthalonitrile as a mediator generated the (phenylthio)difluoromethyl radical, which reacted with α-methylstyrene and 1,1-diphenylethylene to provide the corresponding adducts in moderate and high yields, respectively. In contrast, chemical reduction of 1 with SmI2 resulted in much lower product yields. The detailed reaction mechanism was clarified based on the cathodic reduction of 1 in the presence of deuterated acetonitrile, CD3CN.

Introduction

Organofluorine compounds containing a difluoromethylene group have been of much interest from a biological standpoint, since the difluoromethylene group is isopolar and isosteric with an ether oxygen [1,2]. In particular, organic molecules bearing an (arylthio)difluoromethyl group (ArSCF2) have potential biological applications, such as anti-HIV-1 reverse transcriptase inhibitors, as well as agrochemical applications [3,4]. Reutrakul and Pohmakotr et al. carried out the reaction of PhSCF2Br with SmI2 in THF/iPrOH to generate PhSCF2 radicals, followed by trapping with various olefins in moderate yields [5]. Prakash et al. also achieved fluoride-induced nucleophilic (phenylthio)difluoromethylation of carbonyl compounds using PhSCF2SiMe3 [6]. Quite recently, Shen et al. developed various nucleophilic, electrophilic, and radical difluoromethylthiolating reagents [1]. However, these methods require various metal and organometallic reagents. On the other hand, electrochemical organic synthesis is a metal-free process, does not require any hazardous reagents, and produces less waste than conventional chemical syntheses. Therefore, electrochemical synthesis is desirable from the standpoint of green chemistry [7-10]. In this context, we have developed various electrochemical methodologies for efficient selective fluorination [11,12] and for the molecular conversion of organofluorine compounds [13-18]. We have also achieved the gem-difluorination of sulfides bearing various electron-withdrawing groups at the α-position (Scheme 1) [19-21]. Furthermore, we also succeeded in the electrochemical gem-difluorodesulfurization of dithioacetals and a dithiocarbonate (Scheme 2 and Scheme 3) [22,23]. In this work, we have studied the electrochemical generation of (phenylthio)difluoromethyl reactive species from bromodifluoromethyl phenyl sulfide and their synthetic application, as well as the mechanistic aspects involved.

Results and Discussion

Cathodic reduction of bromodifluoromethyl phenyl sulfide (1)

At first, the reduction potential (Ep,red) of bromodifluoromethyl phenyl sulfide (1) was determined; by comparison with reported values [24,25], the reduction potential of 1 was found to be similar to that of PhCF2Cl. Next, we carried out the constant-potential cathodic reduction of 1 at a platinum cathode in Bu4NClO4/MeCN. Notably, when 1.3 F/mol were passed, starting compound 1 was consumed completely. As shown in Scheme 4, difluoromethyl phenyl sulfide (2) was formed as the main product, along with bis(phenylthio)difluoromethane (3) as a minor product. From these results, one-electron and two-electron reductions of 1 seem to take place simultaneously, generating radical and anionic intermediates. In order to trap the radical intermediate, the constant-potential cathodic reduction of 1 was performed in the presence of various olefins, such as α-methylstyrene, cyclohexene, and dihydrofuran.
The results are summarized in Table 1. Regardless of the trapping reagent, 1.3-1.4 F/mol of electricity was required to consume the starting material 1. The required electricity was similar to that of the electrolysis in the absence of a trapping reagent. Only when α-methylstyrene was used as the radical trapping reagent was the expected radical adduct 4 formed, in a reasonable yield of ca. 30% (Table 1, run 1). A platinum cathode is more suitable for the formation of adduct 4 than a glassy carbon cathode (Table 1, run 2). Dolbier et al. reported that electron-poor perfluoroalkyl radicals, such as the n-perfluoropropyl radical, have high reactivity toward electron-rich olefins such as α-methylstyrene and styrene [26]. In fact, our cathodically generated reactive species also reacted with α-methylstyrene. However, electron-rich dihydrofuran did not provide any radical adduct at all (Table 1, run 4). The reason is not clear at present. Thus, the obtained results indicate that the cathodically generated reactive species would be the (phenylthio)difluoromethyl radical. In order to increase the yield of adduct 4, the cathodic reduction of 1 was performed in other solvents, such as DMF and CH2Cl2, using 20 equiv of α-methylstyrene. However, the yield of 4 did not increase. The cathodic reduction of a perfluoroalkyl halide generates radical and/or anionic species in general [24]. In order to generate radical species selectively, indirect cathodic reduction using various mediators has often been employed. Médebielle et al. successfully carried out the cathodic reduction of ArCF2X and RCOCF2X with nitrobenzene as a mediator to generate the corresponding difluoromethyl radicals selectively, and they applied this electrocatalytic system to the synthesis of various heterocyclic compounds bearing a perfluoroalkyl or perfluoroacyl group [27-30]. Furthermore, they extended this methodology to a tandem cyclization providing fused difluoromethylene-containing heterocycles [31]. In consideration of these facts, we studied the cathodic reduction of 1 using a mediator.

Indirect cathodic reduction of 1 using o-phthalonitrile as mediator

At first, cyclic voltammetry was carried out to investigate the electrocatalytic reduction of bromodifluoromethyl phenyl sulfide (1) with o-phthalonitrile. As shown in Figure 1b, a typical reversible redox couple (E1/2,red = −1.69 V vs. SSCE) of o-phthalonitrile was clearly observed. A significantly enhanced cathodic peak current was observed after the addition of compound 1 to the solution containing o-phthalonitrile, while the anodic peak current disappeared completely, as shown in Figure 1c. The reduction peak potential of 1 is −2.4 V vs. SSCE, which excludes the direct reduction of 1 at the potential of the mediator. Therefore, the enhanced cathodic current of o-phthalonitrile clearly suggests that a typical electrocatalytic reduction reaction takes place. Thus, it was found that o-phthalonitrile should work as an electron transfer catalyst, i.e., a redox mediator. On the basis of the cyclic voltammetric measurements, the cathodic reduction of 1 was carried out at a constant potential using o-phthalonitrile as mediator. As shown in Scheme 5, the total yield of products 2 and 3 increased appreciably, to ca. 80%, compared with the direct cathodic reduction of 1 (70% yield in Scheme 4). Next, the indirect cathodic reduction of compound 1 was carried out similarly in the presence of α-methylstyrene, and the results are summarized in Table 2.
When 0.2 equiv of the mediator was used, the yields of both products 2 and 4 decreased compared to the direct cathodic reduction (Table 2, run 1). Increasing the amount of the mediator to 0.5 equiv resulted in an increase of the yield of 4 to 35% (Table 2, run 3), while the yield of 2 decreased significantly. In this case, the required electricity increased to 1.8 F/mol. From these results, we anticipate that a one-electron reduction of compound 1 takes place to generate the PhSCF2 radical, which is further reduced to the PhSCF2 anion when a trapping reagent is absent. The resulting anion seems to undergo elimination of difluorocarbene to generate a phenylthiolate anion, which reacts with compound 1 to form product 3, as shown in Scheme 6. In order to confirm the proposed reaction pathway to product 3, the reaction of bromodifluoromethyl phenyl sulfide (1) with the phenylthiolate anion was performed at room temperature. As expected, product 3 was formed in a moderate yield of 67%, as shown in Scheme 7. It is known that difluorocarbene generally has low reactivity towards olefins; however, it can be trapped with electron-rich olefins [32]. In order to trap difluorocarbene with an olefin, we tried to increase the amount of generated difluorocarbene by increasing the current density for the cathodic reduction of compound 1. Thus, the cathodic reduction of 1 was carried out to completion at a high current density of 16 mA/cm2 in the presence of α-methylstyrene. As shown in Scheme 8, the expected difluorocarbene adduct 5 was detected by high-resolution mass spectrometry in addition to products 2, 3, and 4. As already mentioned, SmI2 is a well-known one-electron reducing agent and has been used to generate PhSCF2 radicals and perfluoroalkyl radicals from PhSCF2Br and perfluoroalkyl halides, respectively. The generated radicals undergo addition to olefins and acetylenes [5,33]. Pohmakotr et al. and Yoshida et al. reported that the reaction of PhSCF2Br and PhCF2Cl with SmI2 generated PhSCF2 and PhCF2 radicals, which were trapped with styrene [5,34,35]. Therefore, we carried out the reaction of compound 1 with SmI2 in the absence and presence of α-methylstyrene, which is more electron-rich than styrene. The results are summarized in Table 3. As shown in Table 3, even when two equivalents of SmI2 were used in the absence of α-methylstyrene, the conversion of compound 1 was low and a large amount of starting material 1 was recovered (Table 3, run 1). In this case, the simple reduction product 2 was formed together with trace amounts of product 3. Since HMPA is known to enhance the reducing ability of SmI2 [36], we performed the reaction of compound 1 in THF containing 7.5 equiv of HMPA. As expected, the conversion of 1 increased from 34% to 66%, and product 3 was formed in 32% yield (Table 3, run 2). However, the yield of product 2 decreased from 12% to 5%. Then, the reaction of 1 with SmI2 was carried out similarly in the presence of α-methylstyrene (Table 3, run 3). However, the result was almost the same as that in the absence of α-methylstyrene: the yields of products 2 and 3 remained unchanged and the expected adduct 4 was not formed at all, although the conversion of compound 1 increased.
In both cases (Table 3, runs 2 and 3), unidentified products were formed. Thus, it was found that the chemical reduction of compound 1 with SmI2 is quite different from the electrochemical reduction. Notably, however, when THF containing MeOH (10 equiv relative to compound 1) was used, adduct 4 was formed in 14% yield (Table 3, run 4). In this case, product 3 was not formed. In order to determine the hydrogen source of products 2 and 4, the indirect cathodic reduction of 1 was carried out in deuterated acetonitrile, CD3CN (Scheme 9). As shown in Scheme 9, deuterated products 2 and 4 were formed. In the case of product 2, almost complete deuteration was observed, which clearly indicates that product 2 should be formed via a PhSCF2 radical intermediate. Thus, the main hydrogen source for the formation of product 2 was determined to be MeCN. On the other hand, in the case of adduct 4, deuterated and protonated 4 were formed in similar yields, which suggests that 4 would be formed via both radical and anionic intermediates. In order to further clarify the reaction mechanism, the indirect cathodic reduction of compound 1 was performed in the presence of α-methylstyrene in MeCN containing cumene (iPrC6H5) or isopropyl alcohol (iPrOH). The former works as a hydrogen radical source, while the latter works as both a hydrogen radical and a proton source. The results are summarized in Table 4. Although it was expected that the yield of product 4 would increase in the presence of cumene as a hydrogen radical source, the yield decreased (Table 4, run 2) compared to the electrolysis in the absence of cumene (Table 4, run 1). On the other hand, the yield of product 4 increased in the presence of iPrOH (Table 4, run 3), and the yield further increased to 60% at a higher iPrOH content of 50% (Table 4, run 4). In the latter case, the required electricity increased to 2.7 F/mol. Reutrakul and Pohmakotr et al. also reported that iPrOH is an effective additive for the addition of the PhSCF2 radical to olefins [5]. Since the presence of a large amount of a proton source such as iPrOH increased the yield of adduct 4 significantly, the electrolysis of compound 1 was carried out similarly in the presence of 1,1-diphenylethylene, a more electron-rich olefin than α-methylstyrene. As expected, the adduct 6 was formed in a high yield of 90%, as shown in Scheme 10. Isopropanol can serve as both a proton and a hydrogen radical source, while cumene serves only as a hydrogen radical source. The indirect cathodic reduction of compound 1 in the presence of cumene decreased the yield of adduct 4, while the use of iPrOH instead of cumene increased the yield markedly. As already mentioned, in the chemical reduction of compound 1 with SmI2, even 10 equiv of MeOH relative to 1 enhanced the formation of adduct 4 to some extent (from 0% to 14% yield), as shown in Table 3. Therefore, iPrOH seems to promote the radical addition rather than the reduction, although the reason has not been clarified yet. Reaction mechanism Although the cathodic reduction of perfluoroalkyl halides usually involves one- and two-electron transfer, their indirect cathodic reduction using mediators undergoes one-electron reduction selectively, as reported by Savéant et al. [24]. In this study, we also confirmed that the o-phthalonitrile-mediated reduction of PhSCF2Br (1) in the absence of radical trapping reagents consumed much less than 2 F/mol of electricity.
Furthermore, the indirect cathodic reduction of compound 1 in CD3CN formed the deuterated product PhSCF2D (2D) as the major product. On the other hand, the indirect cathodic reduction of compound 1 in CD3CN containing a radical trapping reagent such as α-methylstyrene consumed less than 2 F/mol of electricity to provide the protonated and deuterated adducts 4/4D in almost the same yields. Similar indirect electrolysis of compound 1 in iPrOH/MeCN in the presence of 1,1-diphenylethylene consumed much more than 2 F/mol of electricity to afford adduct 6 in high yield. Moreover, the indirect cathodic reduction of compound 1 at high current density in the presence of α-methylstyrene formed a trace amount of the 1,1-difluorocyclopropane derivative 5, which is evidence of the generation of difluorocarbene from 1. In consideration of these facts, we propose the reaction mechanism shown in Scheme 11. The one-electron reduction of 1 generates the PhSCF2 radical A, which abstracts a hydrogen radical from MeCN to give product 2 (path b). The radical A undergoes further reduction to generate anion B (path a). Elimination of difluorocarbene from anion B forms a phenylthiolate anion, which reacts with the starting material 1 to form product 3. In the presence of radical trapping reagents such as styrene derivatives, radical A reacts with the styrenes to form the radical adduct C. The radical C abstracts a hydrogen radical to form products 4 and 6. Alternatively, the radical intermediate C is further reduced to generate anion D, followed by protonation to give products 4 and 6. Conclusion We have successfully carried out the catalytic electrochemical reduction of bromodifluoromethyl phenyl sulfide using o-phthalonitrile as mediator to generate (phenylthio)difluoromethyl radicals selectively. The generated radicals were efficiently trapped with electron-rich olefins such as α-methylstyrene and 1,1-diphenylethylene. The reaction mechanism was also elucidated by using the deuterated solvent CD3CN. Supporting Information Supporting Information File 1 Experimental section: general information, materials, and general procedure for the cathodic reduction of compound 1.
Liver damage during infections with coronavirus The pathogen of the new 2019 coronavirus disease (COVID-19), the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), presents a significant risk to health care. The WHO has described the SARS-CoV-2 outbreak as an international public health emergency. The main damage caused by SARS-CoV-2 infection is known to occur in the lungs. Previous research revealed that liver damage is prevalent in patients infected with the other widely known zoonotic coronaviruses, those causing Severe Acute Respiratory Syndrome (SARS) and Middle East Respiratory Syndrome (MERS), and it has been reviewed in relation to the severity of MERS, SARS, and COVID-19. Likewise, the mechanisms and features of liver damage and liver injury, which can become severe during some phases of the disease, are outlined in this review. Introduction Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is a new pathogenic coronavirus, causing the disease known as COVID-19, that produces severe respiratory illness in humans [1]. COVID-19 is a public health issue not just because of its rapid horizontal transmission, but also because of its consequences for public infrastructure and the economy [2]. The available results show that different degrees of liver damage can occur in patients infected with SARS-CoV-2 and SARS-CoV. In COVID-19 patients, abnormal liver function may be observed, whether in the form of cholestasis, hepatitis, or both [3]. In an early study in China, elevated serum alanine aminotransferase (ALT) was observed in 28% of 99 COVID-19 cases, and elevated total bilirubin was documented in 18% of the same patients [4]. The prevalence of liver dysfunction increases with the severity of COVID-19. Liver function tests are normal or only slightly elevated in early SARS-CoV-2 infection. Liver histology has shown moderate microvesicular steatosis, mild lobular and portal activity, and syncytial multinucleated hepatocytes [5]. Electron microscopy showed endoplasmic reticulum dilatation, mitochondrial swelling, damaged cell membranes, and glycogen depletion [6]. Data indicate that SARS-CoV-2 also replicates within hepatocytes [7]. The slight rise in aminotransferases and the lack of apparent necrosis suggest mild liver damage caused by SARS-CoV-2 [8]. The ACE2 receptor required for cell entry is expressed by hepatocytes and cholangiocytes [8]. While severe liver damage has been documented in patients with acute COVID-19, it is likely triggered by several factors such as hypoxemia, sepsis-associated cholestasis, drug-induced liver injury, and hypotensive ischemic hepatitis. Several of the medications utilized in serious cases, such as those used to treat SARS-CoV-2 infection, including lopinavir, ritonavir, and remdesivir, have been associated with hepatotoxicity. This brief review discusses the mechanisms and features of liver damage produced by SARS-CoV-2 and SARS-CoV infections, which may inform further research on liver damage caused by COVID-19. Medical Characteristics Early investigations from China indicated that the incubation period of SARS-CoV-2 was three to seven days, and sometimes up to two weeks; the longest mean incubation period reported was 12.5 days [9].
The most important clinical characteristics of COVID-19 reported in studies of Chinese patients were fever (>38 °C), fatigue, dry cough, leukopenia, myalgia, and elevated liver enzymes. Vomiting, diarrhea, and nausea were observed in 2% to 10% of COVID-19 patients, and similar findings were reported in a recent case series from Wuhan by Wang and coworkers [10,11]. COVID-19 and Liver Damage COVID-19 can lead to aggravation of untreated chronic liver disease, leading to hepatic decompensation, liver failure, and a higher death rate. Table 1 offers a list of recently available research. Generally, 2 to 11 percent of COVID-19 cases were recognized to have underlying liver disease, especially individuals with severe COVID-19 [12]. These data come from early reports from China during the first months of the pandemic and from some US data. Zhang et al. analyzed the effects of the virus on the liver and found that 14% to 53% of patients had abnormal AST and ALT levels [13]. Acute liver injury was found in 1.5% of 5,700 hospitalized patients in New York, and chronic liver failure cases were observed in a recent New York study by Safiya Z et al. Other early data from Chinese studies showed that abnormal liver tests were detected in between 12% and 78% of patients [14]. In a recent Chinese study, 23 patients (2.1%) were positive for HBsAg, and only one of them had severe COVID-19 [15]. Another Wuhan study, conducted by Wang and associates, found that four patients with COVID-19 (2.9%) had pre-existing liver disease [16]. Interestingly, a comparative study of 161 survivors and 113 non-survivors found that only 4% had chronic hepatitis B [17]. In another study, Xu and coworkers from outside Wuhan identified 26 COVID-19 patients, of whom 2.9% had liver disease [18]. Thirteen of 274 patients were observed to have acute liver damage, and 10 of them died [17]. The existing data clearly show that increased liver enzymes are found mainly in patients with critical COVID-19. Increased AST was observed in 8 (62%) of 13 patients in the ICU compared to 7 (25%) of 28 patients outside the ICU [19]. In severe COVID-19, peak AST and ALT levels of 1,445 U/L and 7,590 U/L, respectively, have been observed [20]. Intriguingly, an elevated rate of abnormal liver enzymes was found in subjects treated with ritonavir/lopinavir (56.1% vs. 25%) [21]. Surprisingly, more patients developed elevated transaminases despite the predominant expression of ACE2 in cholangiocytes [22]. Data collected by Xu et al. in Wuhan, China, showed elevated levels of gamma-glutamyltransferase in severe COVID-19 cases [23]. It is unknown whether the elevation of liver enzymes in this population is caused by the disease itself or by drug-induced liver injury. In severe COVID-19 cases, liver damage is potentially caused by overflowing cytokine production [22]. Hepatic dysfunction is more likely to result from this cytokine overflow than from direct cytopathic effects of the virus. Further information is needed to determine the pattern and grade of liver damage in COVID-19 cases. Pathophysiology of Liver Infection The study by Xu et al. of a patient who died with COVID-19 reported mild microvesicular steatosis and inflammation in the portal tract and hepatic lobules on liver histology. At this point, it is uncertain whether these effects are due to the viral infection itself or to medications.
In particular, peripheral blood testing showed substantially diminished but hyperactivated proinflammatory CD8 and CD4 T cells, with an elevated concentration of highly proinflammatory CCR6+ Th17 CD4 T cells and cytotoxic granules in CD8 T cells, which might further contribute to hepatocyte injury [24]. In another paper, post-mortem liver biopsies from four COVID-19 cases demonstrated focal macrovesicular steatosis and moderate sinusoidal dilatation. Moderate lobular lymphocytic infiltration occurred in portal areas but was not significant. In one of the patients, SARS-CoV-2 RNA was detected by RT-PCR in liver tissue [25]. In spite of the higher levels of ACE2 receptor expressed by the bile duct epithelium, there was no real evidence pointing to bile duct injury [25]. Several mechanisms of liver injury can be considered. The first could be direct damage to hepatocytes and cholangiocytes, as both carry the ACE2 receptor targeted by the virus; however, not all patients with COVID-19, whether severe or mild, showed abnormal liver tests or liver damage. The second mechanism could be translocation of the virus from the intestine to the portal blood and subsequently to the liver; this is supported by the facts that 2% to 10% of patients present with diarrhea and that the virus was found in the stool of affected patients. The third mechanism is drug hepatotoxicity: since there is no established treatment, these patients were on multiple medications, including antibiotics, antimalarials, and remdesivir, all of which can cause abnormal liver tests; an observational study by Zhang and coworkers showed AST/ALT increases even after control of the infection. Lastly, and most importantly, there is immune-mediated inflammation, evidenced by the cytokine storm that is particularly found during severe disease. No variation in the frequency of liver damage was found between non-survivors (28%) and survivors (30%). SARS and Liver Damage Severe acute respiratory syndrome (SARS) is an infectious disease induced by SARS-CoV [40]; it was first detected in November 2002 in the Chinese province of Guangdong and in Hong Kong and spread rapidly to 29 countries around the world. About 10% of SARS patients suffered from chronic liver disease (CLD), especially chronic hepatitis B, possibly because of the geographical location of the SARS outbreak. More than 50 percent of patients developed abnormal (mainly moderate) liver function tests, which later improved. In various studies, elevated liver function tests, particularly elevated ALT, were associated with severe disease, ICU admission, and overall mortality; this raises the probability that SARS caused liver dysfunction rather than merely being associated with it [40-47]. A variety of studies observed that SARS patients suffered liver damage, with a slight to mild increase of AST or ALT levels, or both, in the early stages of infection. Most patients had reduced levels of serum albumin and raised levels of serum bilirubin [48-58]. Also, numerous studies were conducted to identify the mechanism of liver damage due to SARS-CoV [59-61]. In SARS patients, massive numbers of virus particles were found in the parenchymal cells and vascular endothelium of the liver, and SARS-CoV is known to use angiotensin-converting enzyme 2 (ACE2), which is broadly expressed in endothelial cells of the liver, as its receptor to enter cells, making the liver a potential target for SARS-CoV [59-63].
Liver biopsies from SARS cases showed a substantial increase in eosinophilic bodies, together with balloon-like hepatocytes and mitotic cells, suggesting that SARS may stimulate apoptosis of liver cells and thereby initiate liver damage [6]. Several papers showed that the SARS-CoV-specific protein 7a is able to induce apoptosis, via a caspase-dependent pathway, in cell lines derived from various organs such as the kidney, lung, and liver, thereby targeting liver tissue and causing liver damage [64]. In SARS-CoV patients with early infection, abnormal serum levels of chemokines and cytokines have been observed. Duan et al. revealed that in cases with abnormal liver function, serum levels of IL-6, IL-10, and IL-1 were higher than in cases with normal liver function, indicating a potential relationship between liver damage and the inflammatory responses stimulated by SARS-CoV infection [45]. Additionally, due to the increased replication of hepatitis viruses during infection with SARS, SARS patients co-infected with hepatitis B or hepatitis C virus were more susceptible to liver damage and acute hepatitis; moreover, steroids, quinolones, macrolides, ribavirin, and other medications used for SARS patients can also damage the liver [43,46,49]. MERS and Liver Damage The first infection with the MERS virus was identified in Saudi Arabia in 2012 [65], and low albumin levels were considered a distinct indicator of severe MERS disease [66]. In MERS patients, liver biopsy indicated moderate hydropic degeneration of hepatocytes and lobular lymphocytic infiltration [67,68]. Recovering MERS patients had less liver injury than non-recovering patients (77.9% versus 91.3%), and mortality was higher in patients with comorbidities [69,70]. Conclusion The mechanisms of liver damage occurring during infection with SARS-CoV-2 remain obscure. Our limited knowledge indicates that pathogenic human coronavirus infections can generally cause liver damage through immunopathology driven by an excessive inflammatory response and/or through direct cytopathic effects of the pathogen. Meanwhile, SARS-CoV may exacerbate liver damage in patients with viral hepatitis, although there is no such evidence yet for SARS-CoV-2 and MERS-CoV. Clearly, drug-induced liver damage must not be neglected when treating coronavirus infections and must be carefully monitored. In addition to the active treatment of the primary coronavirus infection, from a therapeutic point of view, consideration must also be given to controlling the incidence of liver damage and to the use of medicines that may trigger liver damage, including steroids, quinolone antibiotics, macrolides, etc. It is advisable to treat patients with liver damage with drugs that can both inhibit inflammatory responses and protect liver function. Severe acute liver injury cases with a higher mortality rate have been recorded. To characterize the degree and triggers of liver damage in COVID-19, future studies with thorough follow-up are necessary. A detailed evaluation of the influence of COVID-19 on chronic liver disease is also expected, with further investigation needed in this field.
Development of Acoustically Active Nanocones Using the Host–Guest Interaction as a New Histotripsy Agent Histotripsy is a noninvasive and nonthermal ultrasound ablation technique, which mechanically ablates tissue using very short, focused, high-pressure ultrasound pulses to generate a dense cavitating bubble cloud. Histotripsy requires large negative pressures (≥28 MPa) to generate cavitation in the target tissue and relies on real-time ultrasound image guidance. The high cavitation threshold and the reliance on real-time image guidance are potential limitations of histotripsy, particularly for the treatment of multifocal or metastatic cancers. To address these potential limitations, we have recently developed nanoparticle-mediated histotripsy (NMH), where perfluorocarbon (PFC)-filled nanodroplets (NDs) with a size of ∼200 nm were used as cavitation nuclei for histotripsy, as they are able to significantly lower the cavitation threshold. However, although NDs were shown to be an effective histotripsy agent, they pose several issues: their generation requires multistep synthesis, they lack long-term stability, and determination of the PFC concentration in the treatment dose is not possible. In this study, PFC-filled nanocones (NCs) were developed as a new generation of histotripsy agents to address the mentioned limitations of NDs. The developed NCs represent an inclusion complex of methylated β-cyclodextrin, a water-soluble analog of β-cyclodextrin, and perfluorohexane (PFH), a PFC derivative that is particularly effective for histotripsy. Results showed that NCs are easy to produce, biocompatible, have a size <50 nm, and exhibit quantitative complexation, which allows us to directly calculate the PFH amount in the used NC dose. Results further demonstrated that NCs embedded into tissue-mimicking phantoms generated histotripsy cavitation "bubble clouds" at a significantly lower transducer amplitude compared to control phantoms, demonstrating the ability of NCs to function as effective histotripsy agents for NMH. ■ INTRODUCTION Histotripsy is a noninvasive ultrasound (US) ablation technique, which mechanically ablates soft tissue (e.g., tumors) through an acoustic cavitation mechanism. As a noninvasive and nonthermal technique, histotripsy uses very short, focused, high-pressure US pulses to fractionate the tissue. 1−4 When these US pulses interact with the water inside the tissue, bubble clusters are nucleated, forming highly energetic microbubbles that expand to a size of >50 μm and rapidly collapse, releasing energy into neighboring cells to mechanically fragment them into subcellular debris. 5,6 To generate this rapid bubble expansion and collapse in histotripsy, a high negative pressure threshold (≥28 MPa) is required to initiate cavitation. 7,8 Although histotripsy has shown its potential for many clinical applications including tumor ablation, 9−12 the efficacy of this technique is limited to situations in which a single target tumor can be identified and imaged before the treatment, 13 which is not possible when treating micrometastases or multiple tumor nodules. In addition, even with the image-guided system, the operator (clinician) needs to be extremely careful to accurately deliver US pulses to the focused area in order to avoid cavitation in healthy neighboring cells. To deal with these limitations of targetability, selectivity, and high cavitation pressure, our group has recently developed nanoparticle-mediated histotripsy (NMH) as a novel ablation method to achieve selective tumor ablation.
We developed NMH in order to significantly reduce the pressure required to selectively deliver histotripsy into the target tissue, enhance treatment efficiency, and allow the selective ablation of tumor cells with no effect on the neighboring healthy cells. NMH feasibility was demonstrated for the first time using perfluorocarbon (PFC)-filled nanodroplets (NDs) as cavitation nuclei capable of reducing the histotripsy cavitation threshold. 14,15 These NDs were synthesized using a triblock amphiphilic copolymer, which encapsulates low-boiling-point PFCs, either perfluoropentane (PFP) or perfluorohexane (PFH), as a result of its self-assembly into nanosized (∼200 nm) droplets. 15 Phase transition of the PFC in the core of the NDs from liquid to gas through acoustic droplet vaporization forms nanobubbles under the US pulses, and these nanobubbles act as cavitation sites and initiate cavitation at a lower threshold pressure. 13 Detailed studies of NMH showed 1,8,13,14,16,17 that using the NDs as cavitation nuclei significantly lowered the cavitation threshold pressure (∼7 MPa vs 28 MPa without droplets) and allowed cavitation to occur selectively, at the lowered cavitation pressure, only in the regions where the NDs localize. 14 However, although NDs successfully addressed the selectivity and high-cavitation-threshold limitations of histotripsy, these first-generation histotripsy agents were associated with a few shortcomings that called for the development of new histotripsy agents. First of all, the synthesis of the NDs is complex, needs multiple steps, and requires expertise in the field of polymer chemistry. Second, the size of the NDs (∼200 nm) is adequate for accumulation in tumor tissue through the enhanced permeability and retention (EPR) effect, as supported by previous examples in the literature. 18−23 However, recent research has shown that a smaller size is better for EPR tumor accumulation, especially in early-stage tumor formation. 24,25 Third, determining the PFC concentration is important for NDs to be effective and consistent between treatments. For prior NMH experiments, ND concentration was determined using secondary concentration detection methods such as a nanoparticle tracking analysis system. This method provides the concentration as a number of NDs/mL; however, the specific PFC amount in the NDs depends not only on the solution concentration but also on the size and size distribution of the NDs. This means that every ND sample had a different amount of PFC, which makes it impossible to determine the exact concentration of PFC in the used solution. Lastly, the stability of the NDs was limited and required cold storage conditions. Based on these limitations, there is a significant need for a new, practical, stable, and biocompatible histotripsy agent, which allows molecular-level isolation of PFC into nanosized (<50 nm) structures and provides the PFC concentration without using secondary detection techniques. Cyclodextrins (CDs) are Food and Drug Administration (FDA)-approved, biocompatible, nanosized molecules composed of glucopyranose units. 26,27 They are also known as cyclic oligosaccharides consisting of different numbers of cyclic sugar units. The CDs used in the field of biomedicine are α-cyclodextrins, β-cyclodextrins (BCD), and γ-cyclodextrins, consisting of six, seven, and eight glucopyranose units, respectively. 28 CDs have a conical shape with a hydrophilic exterior surface and a hydrophobic internal cavity.
29 They are known for their efficient role as host molecules in inclusion complexes, and they have been reported to improve the solubility, bioactivity, and stability of many hydrophobic molecules. 30,31 Research has shown that, among the above-mentioned CDs, BCD is the most effective in forming inclusion complexes with PFCs because of the size compatibility of the PFC chain and the BCD cavity. 32 An early example by Hogen-Esch et al. presents the characterization of an inclusion complex formed between BCD and the PFC alkyl chains of water-soluble polymers. 33 Later, more work was published, for environmental purposes, showing the pollutant-removal potential of such inclusion complexes, because some perfluorinated compounds, especially perfluoroalkyl carboxylic acids and perfluoroalkyl sulfonates, are considered a new class of persistent organic pollutants. In this regard, inclusion complex formation between BCD and different perfluoroalkyl carboxylic acids, and its physicochemical characterization, has been studied in depth. 34−36 The only example using a BCD/PFC inclusion complex for a US-related application was published recently. 37 In that work, a PFC used for US imaging (FC-77) was complexed with BCD at different PFC/BCD ratios. Even though the US imaging potential of the complex was shown, the size of the complex reached the micron level depending on the PFC/BCD ratio, with one PFC chain binding two BCD molecules in the complex. To the best of our knowledge, there are no prior studies that use CD−PFC inclusion complexes for the histotripsy application. The intramolecular hydrogen bonding of the secondary hydroxyl groups of BCD leads to low aqueous solubility, which can be improved by increasing the temperature. However, this can cause problems such as evaporation of the PFC from the system when BCD and volatile PFC derivatives are made to form the inclusion complex. To increase the water solubility of BCD at room temperature, it is necessary to substitute the hydrogens of the hydroxyl groups, converting them to methoxy functions. 38 Among the methods to increase the water solubility of BCD, methylation stands out in the literature as an economical and efficient method. 38 Moreover, methylated β-cyclodextrin (MCD) is used as an approved pharmaceutical component of certain drug delivery products. 39,40 Therefore, it appeared to be a good choice for our study: after methylation, the solubility increases and provides a more suitable environment for PFC complexation with higher efficiency, while the host remains biocompatible, practical, and small. We hypothesized that hydrophobic PFH, the most effective PFC for histotripsy, 13 can enter the hydrophobic cavity of the biocompatible, cone-shaped BCD derivative in an aqueous environment to form an inclusion complex through the host−guest interaction, resulting in solid "nanocones" (NCs) as a new-generation histotripsy agent. Preparation of NCs is as easy as mixing two commercial and widely available compounds in an aqueous environment at a certain ratio and then collecting the precipitated NCs by simple filtration. These NCs are expected to be of smaller size (<50 nm), and, through the complexed PFH, they can lower the cavitation threshold pressure for histotripsy. Moreover, the PFH amount in these NCs can be easily calculated, as it is directly proportional to the concentration of the NCs.
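To make that proportionality concrete, here is a rough back-of-the-envelope sketch (not taken from the paper) of how an NC dose converts to a PFH volume, assuming 1:1 MCD:PFH complexation. The MCD molar mass is an assumption based on a low degree of methylation (~4 methyl groups on a 1135 g/mol BCD core), and the PFH density is a literature room-temperature value.

```python
# Back-of-the-envelope conversion from an NC dose to the PFH volume it
# carries, assuming 1:1 MCD:PFH complexation. Molar masses are assumptions.
M_MCD = 1135.0 + 4 * 14.0   # g/mol, assumed average methylated BCD
M_PFH = 338.0               # g/mol, perfluorohexane C6F14
RHO_PFH = 1.68              # g/mL, PFH density at room temperature

def pfh_volume_per_ml(nc_ug_per_ml: float) -> float:
    """Return uL of PFH per mL of solution for a given NC dose (ug/mL)."""
    pfh_mass_ug = nc_ug_per_ml * M_PFH / (M_MCD + M_PFH)  # 1:1 mass fraction
    return pfh_mass_ug * 1e-6 / RHO_PFH * 1e3             # g -> mL -> uL

print(f"{pfh_volume_per_ml(0.7):.1e} uL PFH/mL")  # ~1e-4 for a 0.7 ug/mL dose
```

With these assumptions, the 0.7 and 7 μg NCs/mL doses used later in the paper come out near the quoted 1 × 10−4 and 1 × 10−3 μL PFH/mL.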
In this study, NCs were synthesized as the new histotripsy agent, with a size of less than 50 nm, through the host−guest interaction of an FDA-approved, water-soluble derivative of BCD (methylated BCD) with PFH, using a practical and economical method. The efficiency of inclusion complex formation was investigated, and quantitative complexation was achieved using the more water-soluble derivative of BCD, giving a biocompatible and stable product. The amount of PFC in the obtained NCs was calculated, an important feature, as the dose can then be tuned for different treatments. Once all the physicochemical characterizations were performed, the histotripsy parameters of the NCs were studied to determine whether NCs could effectively lower the cavitation threshold and act as an efficient histotripsy agent. ■ RESULTS AND DISCUSSION Preparation of NCs through Host−Guest Interactions. Initially, NC formation was examined using BCD and PFH, because BCD is the most commonly used CD derivative for forming inclusion complexes with PFC derivatives. PFH was chosen as the PFC derivative because it was shown that PFH provides sustainable cavitation nuclei and allows the generation of well-defined lesions at different frequencies. 13 Even though PFP (bp ≈ 29°C, surface tension ≈ 9.5 mN/m) lowers the cavitation threshold pressure by ∼1−3 MPa more than PFH (bp ≈ 56°C, surface tension ≈ 11.9 mN/m), obtaining sustainable cavitation nuclei and well-defined lesions was assessed as more important in order to have control over NMH treatments. The host−guest interaction between BCD and PFH was investigated at BCD/PFH molar ratios of 1:1, 1:5, 1:20, and 1:50, by dissolving BCD in water at 80°C to ensure complete solubility and cooling it to a temperature (45°C) below the boiling point of PFH before its addition. The complexation efficiency of the NCs was calculated using gas chromatography (GC). A calibration curve was constructed over a concentration range covering the minimum and maximum PFH amounts that can be loaded into the BCD cavity, and the peak area of each known PFH concentration was compared to the area of anisole, which was used as an internal standard at constant concentration for each measurement (Figure S1); a minimal sketch of this internal-standard quantitation is given at the end of this passage. The complexation efficiency differed significantly between samples. For 1:1, 1:5, 1:20, and 1:50, the complexation efficiency turned out to be 68 ± 9, 31 ± 7, 27 ± 2, and 89 ± 21%, respectively. Increasing the ratio to 1:50 resulted in a higher complexation efficiency, but the variation in the results was high as well. It was observed that, because of its low solubility in water at room temperature (18.5 mg/mL), BCD started precipitating once the temperature dropped below 60°C. This heterogeneity might cause the inconsistency of the complexation efficiency, because most of the BCD dissolved in water had already precipitated at the time of PFH addition (Figure 1a). Thus, increasing the water solubility of BCD would help us verify the above statement. To address this point and increase the complexation efficiency, the solubility of BCD was increased by a simple process of random methylation, a common and practical method in the literature. Methylation of BCD was performed using anhydrous dimethyl carbonate (DMC) and K2CO3, with BCD dissolved in anhydrous dimethylformamide (DMF). 41−43
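As flagged above, the internal-standard GC quantitation amounts to fitting a calibration line of (PFH area)/(anisole area) against PFH concentration and reading unknowns off it. The sketch below assumes numpy is available; all numeric values are illustrative placeholders, not the paper's calibration data, and the efficiency is computed as measured PFH over the amount expected for complete complexation.

```python
# Sketch of internal-standard GC quantitation: fit a line of
# (PFH area / anisole area) vs. PFH concentration, then read unknowns off it.
import numpy as np

# Calibration standards: PFH concentration (mM) vs. area ratio to anisole
conc = np.array([1.0, 2.0, 5.0, 10.0])
ratio = np.array([0.21, 0.40, 1.02, 2.05])
slope, intercept = np.polyfit(conc, ratio, 1)

def pfh_conc(area_pfh: float, area_anisole: float) -> float:
    """PFH concentration (mM) from peak areas via the calibration line."""
    return (area_pfh / area_anisole - intercept) / slope

measured = pfh_conc(180.0, 200.0)  # unknown sample's peak areas
expected = 5.0                     # mM expected for 100% complexation
print(f"complexation efficiency: {100 * measured / expected:.0f}%")  # ~88%
```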
The structural confirmation and the degree of methylation of BCD were analyzed using nuclear magnetic resonance (NMR) and mass spectrometry, respectively. There was a clear peak shift, with the −OH peaks diminished, in the 1H NMR spectrum of MCD (Figure S2), and the carbon signal of the incorporated methyl group can be clearly seen in the 13C NMR spectrum (Figure S3) as well. The mass spectrum of MCD indicated a high population of low degrees of methylation, such as 2, 4, 6, or 8 methyl groups per molecule (Figure S4), and this was enough to significantly increase the solubility in aqueous medium. Mixing MCD and PFH in water at room temperature gave a completely clear solution, followed by a fine precipitate of their complex as NCs, which could be collected by simple filtration and dried to a solid powder (Figure 1b). Using low-degree methylated BCD allowed us to establish an efficient synthesis and a practical purification route for NC preparation. Because the host molecule is completely soluble at room temperature and thus more available for the guest molecule, the amount of PFH used for complexation was lowered. For the molar ratios of 1:2 and 1:5 (MCD/PFH), the complexation efficiencies were 97 ± 2 and 98 ± 3%, respectively, which corresponds to approximately complete complexation, with reasonable gravimetric yields of 40 ± 3% for 1:2 and 71 ± 7% for 1:5. This is sufficient evidence to argue for practical and economical production of a new histotripsy agent. However, to see the effect of the MCD/PFH ratio on the complexation, the amount of PFH was increased to 1:10 (MCD/PFH), and the complexation efficiency was calculated as 130 ± 4% with a gravimetric yield of 68 ± 4%, indicating the possibility of having more than one PFH molecule in the cavity of MCD when the concentration of PFH is further increased. An MCD/PFH ratio of 1:5 was chosen as the most effective ratio because it provides a uniform sample of 1:1 complexation with a higher gravimetric yield and a high complexation efficiency, which allows the PFH amount to be calculated more accurately. Characterization of the NCs. The NCs, i.e., the inclusion complex of methylated BCD and PFH, were first characterized using NMR spectroscopy. Even though the H3 and H5 protons of the glucopyranose units of MCD, which lie inside the cavity, and H6, which lies at the narrow rim of the cavity, did not show a significant peak shift in 1H NMR (Figure S5), 19F NMR clearly confirmed the presence of the fluorines of PFH in the NCs (Figure 2a). The spectrum of the NCs showed fluorine peaks, which confirms the formation of the complex, as MCD contains no fluorine atoms. Moreover, a clear peak shift can be seen in the spectrum of the NCs compared to the spectrum of free PFH (Figure 2b). Examining the Δδ (ppm) values for the CF3 and CF2 peaks labeled a, b, and c, listed in Table S1, all the peaks showed a downfield shift; peak a, which represents CF3, has the highest Δδ value (1.485), followed by peak b (1.297) and then peak c (1.074). This is consistent with the data of Gou et al. confirming the formation of the complex. 32 Fourier transform infrared (FTIR) spectroscopy was performed to confirm the methylation of BCD and the formation of the NCs. The FTIR spectra of BCD, MCD, PFH, and NCs are presented in Figure S6. The spectrum of BCD showed a characteristic peak at 3200−3400 cm−1 due to O−H stretching.
The characteristic peak at 2854 cm−1 arises from C−H asymmetric/symmetric stretching. Additionally, a peak at 1643 cm−1 represents the H−O−H deformation bands of water present in the BCD cavity. Peaks at 1152 and 1022 cm−1 indicate C−H overtone stretching. 44 The spectrum of MCD shows all the above characteristic peaks of BCD. According to the literature, for PFH the characteristic peaks of CF2 appear at 1230 and 1150 cm−1, whereas the characteristic peak of CF3 appears at 1210 cm−1. 34,45−47 The same characteristic peaks were seen in the FTIR spectrum of PFH at 1235, 1143, and 1199 cm−1 (Figure S6). Although the peak at 1143 cm−1 overlapped with the peaks of BCD and MCD, a clear peak can be seen at 1242 cm−1 in the spectrum of the NCs, indicating the presence of PFH in the cavity and hence providing evidence for the formation of the complex. Thermogravimetric analysis (TGA) of the NCs was performed to investigate the thermal behavior of the complex and to evaluate the evaporation of the low-boiling-point PFH from the system. Figure 3 shows a comparison of the TGA thermograms of BCD, MCD, and NCs. The first stage of BCD's thermogram was around 25−110°C, with a midpoint of 73.0°C, indicating a weight loss due to water evaporation, which was calculated as 9.55%, whereas the thermogram of MCD showed a weight loss of 7.68% in the same region, with a midpoint of 62.2°C. In contrast, the thermogram of the NCs showed a higher weight loss of around 19.25% in this region, and the temperature of the weight loss increased from 62.2 to 72.1°C, indicating the presence of PFH in the structure. This increase in weight loss reflects the evaporation of the low-boiling-point (56°C) PFH from the cavity, which was also confirmed by differential scanning calorimetry (DSC) analysis, presented in Figure S7. When the contribution of the water evaporation of MCD was subtracted from the weight loss of the NCs, the remaining weight loss was attributed to PFH evaporation. This information was used to calculate a complexation efficiency of 93%, which supported the data obtained from GC (the bookkeeping is sketched at the end of this passage). After the initial weight loss due to either water or PFH evaporation, BCD, MCD, and NCs exhibited their main degradation at around 315, 300, and 260°C, respectively. To obtain further proof of inclusion complex formation, the change in morphology and the elemental composition of the samples were examined via scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX). Figure 4 shows images of BCD, MCD, and NCs at high and low magnifications, including elemental analysis data and elemental mapping of the NCs. As can be seen in the figure, methylation of CD caused a morphology change, supported by the EDX data, which showed an increase in the atomic ratio of C as well. While BCD showed an atomic ratio of C of 43.87% and an atomic ratio of O of 56.13%, MCD showed 63.65 and 34.19%, respectively (Figure 4a,c). An elemental analysis of the NCs made of MCD and PFH showed an atomic ratio of C of 61.25%, an atomic ratio of O of 34.14%, and an atomic ratio of F of 4.35%, indicating the presence of PFH in the structure through complex formation (Figure 4e). Moreover, a clear difference was seen in the micrograph of the NCs, which showed a small plate-like structure compared to MCD and BCD (Figure 4f). EDX is a useful technique for identifying nanostructures a few atoms thick and visualizing their elemental distribution.
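As flagged above, the TGA-based efficiency estimate is a weight-loss subtraction: the host's own low-temperature loss (water) is subtracted from the complex's, and the remainder, attributed to PFH, is compared against the PFH weight fraction expected for full loading. The sketch below keeps that expected fraction as an explicit input, since it depends on the actual degree of methylation and any retained water, which the paper does not pin down; the ~12.4 wt% in the example is simply the value that reproduces the quoted 93%, not a measured number.

```python
def tga_complexation_efficiency(wl_complex_pct: float,
                                wl_host_pct: float,
                                expected_pfh_wt_pct: float) -> float:
    """Complexation efficiency (%) from low-temperature TGA weight losses.

    wl_complex_pct: % weight loss of the NC complex below ~110 C
    wl_host_pct:    % weight loss of the bare host (MCD) in the same range
    expected_pfh_wt_pct: theoretical PFH weight % for full 1:1 loading
    """
    pfh_loss = wl_complex_pct - wl_host_pct  # loss attributable to PFH
    return 100.0 * pfh_loss / expected_pfh_wt_pct

# Quoted losses: 19.25% for the NCs, 7.68% for MCD alone
print(f"{tga_complexation_efficiency(19.25, 7.68, 12.4):.0f}%")  # ~93%
```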
Figure 4g−i shows the elemental mapping of the NCs, with a continuous distribution of the fluorine atoms throughout the sample. NCs prepared with MCD/PFH ratios of 1:5 and 1:10 were also analyzed via EDX (Figure S8) to compare the fluorine percentage, and the results revealed that the NCs prepared at 1:10 contained almost 25% more fluorine than the NCs prepared at 1:5, which correlates with the data obtained from GC. As stated in the hypothesis of this work, the NCs are expected to be smaller than the NDs. The size of the NCs was examined using a 0.1 mg/mL aqueous solution of NCs via two different methods, dynamic light scattering (DLS) (Figure 5a) and transmission electron microscopy (TEM) (Figure 5b). According to the DLS results, the sizes of BCD and MCD were measured as 119.94 ± 47 nm and 85.64 ± 13 nm, respectively. The low water solubility of BCD resulted in a larger size with a large error bar, while the improved solubility of MCD caused a decrease in the size. The size of the NCs was 37.75 ± 7 nm, indicating better dispersion in water, in agreement with published data showing that inclusion complexes may be smaller than their parent CD derivatives 48 (Figure 5a). In addition to the DLS measurements, the size of the NCs was confirmed through an imaging technique. A well-dispersed NC solution at a given concentration was dropped onto TEM grids and dried before imaging. The NCs appear as very small spots on a 100 nm scale, and it can be concluded that the size of the NCs is certainly below 50 nm (Figure 5b). Biocompatibility of the NCs. Biocompatibility is one of the most important properties of any agent designed to rely on systemic blood circulation. An ideal agent is expected not to hemolyze red blood cells (RBCs) and not to affect the viability of healthy cells. Even though BCD is an FDA-approved compound and PFCs are not metabolized in the body but are simply cleared via exhalation, 18 the NCs were still tested for their hemolytic activity and effect on cell viability, to provide evidence that methylation and complex formation do not change the biocompatibility of the compounds. First, the interaction between NCs and RBCs was investigated to see whether NCs cause hemolysis of RBCs. To eliminate any additional absorbance apart from that of free hemoglobin, the plasma layer was removed from the blood sample and replaced with sodium chloride solution. The hemolytic activity of the NCs was examined by comparing the RBC interactions of BCD, MCD, and NCs at 0.1, 0.5, and 1.0 mg/mL, and the results are presented in Figure 6. For concentrations of 0.1, 0.5, and 1.0 mg/mL, BCD showed hemolysis of 10.9 ± 0.45, 9.5 ± 0.42, and 4.8 ± 0.37%, respectively. The decrease in hemolysis with increasing BCD concentration is associated with the aqueous solubility of BCD, because a high concentration of BCD results in larger aggregates in phosphate buffer solution (PBS) (data not shown), decreasing the overall particle concentration. Hence, a smaller number of BCD particles interacts with the RBCs, resulting in a lower hemolysis percentage. Increasing the concentration of MCD, on the other hand, gave hemolysis percentages of 11.9 ± 0.46, 9.6 ± 0.41, and 16.1 ± 0.32% at the same concentrations, respectively; the increase is attributed to the methyl groups on the outer surface of the CD, which slightly increase the hemolytic activity of MCD.
Furthermore, MCD has better water solubility than BCD, so the increase in hemolysis with increasing concentration reflects the true behavior of the compound. The lowest hemolysis was caused by the NCs at each of these concentrations. At a concentration of 0.1 mg/mL, a hemolysis of only 9.6 ± 0.43% was recorded, and at 0.5 and 1.0 mg/mL, hemolysis of 5.2 ± 0.34 and 3.1 ± 0.34%, respectively, was recorded. The decrease in hemolytic activity with increasing NC concentration was slightly more noticeable than the corresponding decrease for BCD. This is consistent with literature data showing that inclusion complexes are less hemolytic than their host molecules. 49 Even though it is not statistically significant, the decrease in hemolytic activity at increased concentration can be attributed to the slight increase in size (79.14 nm, see Figure S9), which lowers the NC−cell interaction. Second, the cellular cytotoxicity of the NCs was investigated on healthy human embryonic kidney cells, HEK-293T, cells that almost any reagent injected into the body eventually passes by, as these cells will be responsible for clearing any trace of the NCs. For this purpose, the same concentrations of 0.1, 0.5, and 1.0 mg/mL were tested for BCD, MCD, and NCs. The selected concentrations were high enough to reveal any dose-dependent cytotoxicity of the NCs. Thus, if necessary, the concentration of the NCs, which is directly proportional to the concentration of PFH, can be increased up to 1.0 mg/mL to further reduce the cavitation threshold. Figure 7 displays the cell viability results for each material at different concentrations. Upon incubation, the cells were more viable with the NCs than with any other compound. At a concentration of 0.1 mg/mL, the NCs showed 94.5% cell viability, whereas BCD and MCD showed 92.6 and 89.6%, respectively. The results were similar at 0.5 mg/mL: 89.3% cell viability for BCD, 85.4% for MCD, and 97.9% for the NCs. At 1.0 mg/mL, the cell viabilities of BCD and the NCs were comparable, with 87.4 and 86.8% of cells viable. MCD showed a decrease in cell viability at 1.0 mg/mL, calculated as 69.4%, which was consistent with the hemolysis data. Overall, the NCs show better cell viability than their precursors. Histotripsy Efficiency and the Effect on the Cavitation Pressure Threshold. It was hypothesized that NCs would cause cavitation at a lower cavitation pressure threshold, similar to the NDs used in previous NMH experiments. This hypothesis was tested by applying single-cycle 500 kHz histotripsy pulses to agarose gel phantoms at a pulse repetition frequency of 1 Hz and peak negative pressures of 16.0, 20.5, and 28.5 MPa. The corresponding peak positive pressure values were 18.8, 25.0, and 34.4 MPa, respectively. A 500 kHz transducer was used with 5 cycles per pulse. Tissue-mimicking phantoms were used to provide a well-controlled viscoelastic medium, which is very important for this study, as the damage induced in tissue by histotripsy strongly depends on the mechanical properties of the tissue. 14 Three different tissue-mimicking phantoms were prepared, containing 0.7 or 7 μg NCs/mL as the samples, plus an empty phantom as the negative control (Figure 8). Results showed that the histotripsy pulses did not create significant bubbles in the empty phantom until 28.5 MPa, the pressure that corresponds to the threshold cavitation pressure of normal histotripsy without an agent.
Similar histotripsy exposures of the phantoms containing 0.7 and 7 μg NCs/mL, which correspond to 1 × 10−4 and 1 × 10−3 μL PFH/mL, caused bubble formation at a transducer driving voltage as low as 45 V (16.0 MPa), showing that even a small amount of PFH inside the NCs can initiate histotripsy cavitation at lower pressures (Figure 8). Moreover, a phantom containing only MCD (empty NCs) showed no cavitation at the tested pressure of 27 MPa, indicating the necessity of PFH for cavitation (Figure S10). More theoretical and experimental work needs to be done to investigate the cavitation mechanism of NCs, as well as a full comparison of the effects of NC concentration on the histotripsy cavitation threshold, including a comparison of the thresholds for NCs and our previously used NDs. These studies are currently in progress. ■ CONCLUSIONS Overall, the results of this study demonstrate the feasibility of PFH-filled NCs as new-generation histotripsy agents for NMH. These NCs were prepared through the host−guest interaction between a water-soluble derivative of CD (MCD) and the US-active PFH molecule, resulting in NMH nanoparticles that are easier to produce and more cost- and time-effective than the NDs used in prior NMH studies. The NCs are also smaller in size (<50 nm), which is expected to give a better chance of accumulation in tumor tissue through the EPR effect. This is necessary for NMH to selectively ablate target tumors at lower pressures than those used in conventional histotripsy. In addition, unlike for the NDs, the PFH amount in the NCs can be directly determined in order to tune the PFH dose in planned NMH treatments, which further supports the superiority of the NCs. Additionally, the synthesis of the NCs is easy and cost-effective, and the NCs are stable and biocompatible. The NCs did not show hemolytic activity against RBCs and did not significantly affect the viability of healthy human embryonic kidney cells (HEK-293T), even at concentrations as high as 1 mg/mL. Finally, the NCs were able to lower the cavitation pressure threshold of histotripsy below 16.0 MPa using only 1 × 10−4 μL PFH/mL, a significantly smaller total volume of PFH than what was previously used in ND-mediated histotripsy. Together, the results from this initial study demonstrate the potential of NCs for NMH therapy and pave the way for future studies optimizing the NC design, as well as for the continued development of NMH as a controllable method for targeted tumor ablation. Methylation of BCD. BCD was randomly methylated using the procedure described by Gan et al. 41 Briefly, BCD (2.64 mmol) was dissolved in anhydrous DMF (60 mL) in a two-necked round-bottom flask fitted with a condenser. After complete dissolution of the BCD, K2CO3 (14.5 mmol) was added to the solution. DMC (66 mmol) was then added dropwise, and the mixture was stirred for 24 h at 45°C under an N2 atmosphere. Subsequently, the K2CO3 was removed by centrifuging the solution at 2000 rpm for 5 min. The solvent was removed by vacuum distillation, during which the residue took on a syrup-like consistency. MCD was then precipitated using acetone and rinsed three times with diethyl ether. After filtration and drying, the obtained solid was crystallized from water, filtered, and washed with acetone. The solid product was dried under vacuum and stored at room temperature before use. Preparation of NCs Using MCD and PFH.
MCD (4.4 × 10−2 mmol) was dissolved in double-distilled water (1 mL) at room temperature, and PFH at different molar ratios (1-, 5-, and 10-fold) was added to the vial of MCD, followed by gradual precipitation of the NCs as the product. After overnight stirring, the solutions were placed in the fridge at 4°C for 1 h and then centrifuged at 5000 rpm for 10 min. The supernatant was decanted, and the precipitate was dried under vacuum. Determination of Hemocompatibility. The plasma supernatant of human blood (8 mL) was discarded after centrifugation at 3500 rpm for 5 min. The RBCs were washed three times using a 150 mM saline solution. After the third wash, the RBC pellet was resuspended in 100 mM PBS (pH 7.4), followed by 10-fold dilution with PBS. Solutions of the desired concentrations of the NCs (1.0, 0.5, and 0.1 mg/mL) were prepared in PBS (pH 7.4) and mixed with 400 μL of RBC solution and PBS to reach a 2 mL final volume. Each solution, as well as the control solutions, was prepared in triplicate. The solutions were incubated at 37°C for 1 h. After incubation, the RBC solutions were centrifuged at 13 000 rpm for 5 min, causing the intact and ruptured RBCs to pellet out and leaving the released hemoglobin in the supernatant. The supernatant (200 μL) from each tube was transferred to a 96-well plate, and the absorbance was measured at 541 nm as an indication of hemolysis. Untreated RBCs in PBS solution and RBCs in 0.1% v/v Triton X-100 solution were used as negative and positive controls, respectively. The observed hemolytic activity was normalized with respect to the control groups (a minimal sketch of this normalization is given after the phantom preparation below). Figure 8: Bubble clouds generated using NCs as the histotripsy agent in agarose tissue phantoms exposed to histotripsy pulses at peak negative pressures of 16.0, 20.5, and 28.5 MPa; the empty phantom was used as the negative control. Cell Toxicity Studies. For the cell toxicity and cell viability studies, human embryonic kidney cells (HEK-293T) were selected because of their reliable growth and propensity for transfection. Also, once the NCs are injected into the blood circulation, they will eventually interact with these cells inside the body. Concentrations of 1.0, 0.5, and 0.1 mg/mL were tested on the HEK-293T cells. Briefly, HEK-293T cells were seeded in 96-well plates at a density of 20 000 cells/well and allowed to adhere overnight before replacing the culture medium with FBS-free culture medium containing different concentrations of BCD, MCD, or NCs (1.0, 0.5, and 0.1 mg/mL) and incubating for 24 h under normal culture conditions. The medium was then replaced with 110 μL of medium containing 10 μL of CellTiter 96 AQueous One Solution Cell Proliferation Assay (MTS) reagent and incubated for 2 h, followed by measuring the absorbance of this solution at 490 nm using a SpectraMax i3 microplate reader. The contribution of cell-free culture medium was eliminated by subtracting the absorbance of an equal volume of culture medium at this wavelength, and cell viability was calculated with respect to untreated controls. Concentrations that resulted in a statistically significant reduction in cell viability were considered cytotoxic. Ablation of the Agarose Tissue Phantom Using NCs as a Histotripsy Agent. The phantom was made using 1% w/v agarose by slowly mixing agarose powder (Agarose type VII, Sigma-Aldrich) into saline solution. The temperature of the solution was raised above 70°C until the solution became completely transparent.
The solution was then degassed under a partial vacuum of 2.7 kPa for 30 min and cooled down to 37 °C. To obtain phantoms containing NCs, the NCs were slowly added to the gel solutions at around 40 °C while the solution was still stirring and cooling down. The agarose mixture was then poured into rectangular polycarbonate molds and placed at 4 °C. The solutions were allowed to solidify, yielding tissue phantoms embedded with NCs (test) and without NCs (negative control). A 500 kHz transducer was used to apply single-cycle histotripsy pulses at a pulse repetition frequency of 1.0 Hz. The results were recorded using a high-speed camera.
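Both assays described above reduce to simple control-based normalizations: percent hemolysis is the A541 signal expressed between the PBS (negative) and Triton X-100 (positive) controls, and percent viability is the blank-corrected A490 of treated wells relative to untreated controls. A minimal sketch of how such plate readings might be processed is given below; the paper does not publish its analysis script, so the function names and the triplicate absorbance values are illustrative assumptions only.

```python
import numpy as np

def percent_hemolysis(a_sample, a_pbs, a_triton):
    # Normalize A541 readings between the PBS (negative) and
    # Triton X-100 (positive) controls; inputs are replicate lists.
    neg, pos = np.mean(a_pbs), np.mean(a_triton)
    return 100.0 * (np.mean(a_sample) - neg) / (pos - neg)

def percent_viability(a_treated, a_untreated, a_medium_blank):
    # MTS assay: subtract the cell-free medium blank at 490 nm,
    # then express viability relative to untreated controls.
    blank = np.mean(a_medium_blank)
    return 100.0 * (np.mean(a_treated) - blank) / (np.mean(a_untreated) - blank)

# Illustrative (hypothetical) triplicate readings:
print(percent_hemolysis([0.09, 0.10, 0.11], [0.08, 0.09, 0.08], [1.10, 1.05, 1.08]))
print(percent_viability([0.82, 0.85, 0.80], [0.88, 0.90, 0.86], [0.10, 0.11, 0.10]))
```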
Synchrotron X-ray radiography studies of pitting corrosion of stainless steel: Extraction of pit propagation parameters: In situ synchrotron X-ray radiography was used to observe the evolution of 2D pits growing at the edge of stainless steel foils in chloride solutions of varying concentrations under current and potential control. A method was developed for measuring the local anodic current density along the perimeter of pits from the rate of advance of the pit into the metal. Pit depth tends to increase with time with kinetics consistent with diffusion control (under a salt layer), whereas lateral development (on film-free surfaces) is influenced by solution conductivity. Perforated covers formed on pits control their growth and stability. Introduction Stainless steels are employed in many industrial and architectural applications owing to their good corrosion resistance and other desirable properties. It is well known, however, that localised corrosion (and in particular pitting corrosion) can occur in the presence of halides (particularly chlorides). This form of damage can often lead to the degradation of both the functional and cosmetic properties of components. In the presence of mechanical stresses, this form of damage can also lead to the development of stress corrosion cracks (SCC), thus degrading the structural properties of components (e.g. [1]). The work described in this paper aims at further elucidating the nature of pit propagation mechanisms in order to support the development of mechanistic models able to predict the likely extent and kinetics of damage in relevant conditions. Whilst relevant to many applications, the work was carried out with the specific aim of supporting the development of safety cases and long-term strategies for the storage and disposal of radioactive waste in the UK (in particular intermediate level waste, ILW [2], which is currently packaged in stainless steel containers of grades 304L and 316L). It is well established that corrosion pits in stainless steel grow by an undercutting mechanism in which the pit maintains an overall hemispherical or dish shape covered by a perforated metal cover [3-13]. The fine structure is formed by local inhomogeneity in the current density in different regions of the pit, which has been simulated in a model developed by Laycock and co-workers for austenitic stainless steel (300 series) [14-17]. The model assumes that there are three different regions of a growing pit: the passive region near the mouth or lacy cover where the concentration of metal ions is low, a diffusion-limited region at the bottom where the pit is covered in a salt layer, and an actively dissolving region at the sides where there is neither passive film nor salt layer to limit dissolution. In this paper, radiographic measurements of 2D pits on
stainless steel grades 304 and, to a lesser degree, 316 are used to test the assumptions made in the model and provide quantitative data on the inhomogeneous local current densities within growing pits. It has long been assumed that pit stability is controlled by diffusion: in order to maintain an aggressive environment within the pit that prevents repassivation, the rate at which metal ions escape from the mouth of pits must be no greater than the rate at which they are produced by anodic dissolution [18]. Pit stability is thus usually described in terms of the pit stability product i·x, the product of the current density at the bottom of the pit, i, and the pit depth, x. The critical value of i·x, which must be exceeded for a pit to remain stable, has been experimentally determined to be 3 [8,19] or 4 [9] mA/cm for stainless steel in chloride solutions. However, these parameters are generally determined from electrochemical measurements in which pits are assumed to grow in a hemispherical shape at a uniform current density. In order to develop models that more accurately reflect pit propagation, it is necessary to make direct measurements of the evolution of pit shape and to determine how the local current density varies within pits. Artificial one-dimensional (1D) pit electrodes have previously been used to develop an understanding of pit electrochemistry and dissolution kinetics [20-23]. These electrodes are usually pseudo-one-dimensional cavities with a small corroding surface area in which concentrated solutions of metal ions similar to a real pit are developed. These studies provide a basis for interpreting diffusion-limited current densities in 2- and 3-dimensional pits, which are the focus of this study. According to these studies, for pits growing under electrochemical conditions in which growth occurs under diffusion control, there is a relationship between (pit depth)² and time (using Faraday's second law in conjunction with Fick's first law for diffusion). In this approach, Fick's first law can be written as [23-25]

i_lim = z F D_eff ΔC / h,    (1)

where i_lim is the diffusion-controlled current density, z is the transferred charge (2.2 [26,27]), F is the Faraday constant, D_eff is the effective diffusion coefficient, ΔC is the difference in metal ion concentration between the pit bottom (or pit surface) and the mouth (or bulk solution), and h is the pit depth. The relationship between pit depth and time is then

h² = (2 M D_eff ΔC / ρ) t,    (2)

where ρ is the metal density, M is the atomic mass and t is time. Therefore, for pits growing under diffusion control, the value of D_eff ΔC can be extracted from the slope of a plot of h² vs. growth time. The values of D_eff ΔC determined experimentally by a number of researchers are shown in Table 1. Determination of the effective diffusion coefficient D_eff from D_eff ΔC requires knowledge of the concentration difference ΔC. It is generally assumed that the pit mouth has a metal ion concentration of zero. It is also assumed that pits growing in stainless steel under diffusion control are covered by a salt layer of FeCl₂·4H₂O [29], in equilibrium with a saturated metal-chloride solution. The saturated concentration of metal ions adjacent to the salt layer has been determined to be 3.5 M Fe²⁺, 1.1 M Cr³⁺ and 0.5 M Ni²⁺ [30]. However, a saturation concentration of 4.25 ± 0.05 M for FeCl₂ reported by Kuo and Landolt [20] is commonly used as the concentration of metal ions at the pit bottom in the interpretation of 1D artificial pit measurements [8,21,26,31,32].
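Equation (2) implies that, for diffusion-controlled growth, D_eff ΔC follows directly from the slope of an h² vs. t plot. A minimal sketch of this extraction is given below, using the values of M and ρ quoted later in the paper for grade 304; the function name and the synthetic depth trace are assumptions for illustration, not the authors' analysis code.

```python
import numpy as np

M, rho = 57.6, 7.82          # g/mol and g/cm^3 (values used in the paper)

def deff_dc_from_depth(t_s, h_cm):
    # Eq. (2): h^2 = (2*M*Deff*dC/rho)*t, so Deff*dC = slope*rho/(2*M).
    slope, _ = np.polyfit(np.asarray(t_s), np.asarray(h_cm) ** 2, 1)
    return slope * rho / (2.0 * M)    # mol cm^-1 s^-1

# Synthetic depth trace consistent with diffusion control:
t = np.linspace(10.0, 300.0, 30)                  # s
h = np.sqrt(2.0 * M * 4.36e-8 / rho * t)          # cm
print(f"Deff*dC = {deff_dc_from_depth(t, h):.2e} mol cm^-1 s^-1")
```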
There have been relatively few attempts to observe, and in some cases extract, the average current density from video images taken of growing 2D pits. Frankel presented a method to directly measure the average anodic current density from the velocity of the growing pit boundary in Al [33], an Al alloy [34] and Ni-Fe [35] thin films. Subsequently, Ryan et al. [27,36] determined the anodic current density in pits propagating as 2D disks in stainless steel thin films by measuring the velocity of pit edge movement. Ernst and Newman [11,12,37] studied the stability of pit growth in detail, measured the kinetics of 2D pit propagation in depth and width, and compared the results with kinetics in 1D pencil electrodes. They developed a semi-quantitative model for pit propagation which explained the formation of the lacy pit cover during pit growth, although they did not measure the current density within the pit. More recently, Tang and Davenport [38] tracked the pit boundary movement and computed the instantaneous but average current density in Fe-Co thin films. However, there have been no previous attempts to quantify the local current density during inhomogeneous growth of pits, although such local variation in current density has long been recognised [7]. In this paper, we present the use of synchrotron X-ray radiography to characterise the growth of 2D pits in a geometry similar to that previously used by Ernst and Newman [11,12], and extract the local current density around the perimeter of pits grown under both potentiostatic and galvanostatic conditions. The pit growth kinetics and stability are also studied. Further work [39,40] is being carried out to extend this information to the case of atmospheric conditions (particularly relevant to the management and disposal of radioactive wastes), in which concentrated solutions, cathodic processes and resistive (e.g. IR-drop) effects are likely to play an important role in the amount of damage associated with stainless steel pits over very long timescales (many decades). Measurements Pits were grown at the top edge of stainless steel foils (20 and 25 μm thick, Goodfellow Cambridge Ltd) cut to a width of ∼0.7 mm, embedded in epoxy resin (Araldite) and attached to an electrochemical cell as described in reference [41]. Grades 304 and, to a lesser degree, 316 were used in the experiments. Approximately 30 min prior to each measurement, the top edge of the foil was abraded with 4000 grit paper, washed, dried, and small drops of lacquer were applied to the two ends of the exposed surface to prevent pit initiation at the ends of the foil. This arrangement was generally effective in preventing the onset of crevice corrosion. In some experiments, pits growing underneath the lacquer were observed; the results of these measurements are described in a later section of this paper. The electrochemical cell setup is described in detail in an earlier paper [41]. An Ivium (CompactStat) potentiostat was used to provide electrochemical control, and all potentials were measured relative to an Ag/AgCl reference electrode ([Cl⁻] = 3 M). Electrolytes were 0.005, 0.01, 0.1 and 1 M NaCl prepared from laboratory-grade chemicals and deionised water supplied from an Elix water purification system. The growth of pits was recorded through high-resolution, high-speed X-ray radiography, carried out at 15 keV at the TOMCAT beamline at the Swiss Light Source (SLS).
The TOMCAT detector, used with a 20× objective and 1 × 1 binning, covered a maximum field of view of 0.75 mm × 0.75 mm, providing a minimum pixel size of 0.37 μm × 0.37 μm. All radiographs were flat-field corrected before analysis. Experiments typically lasted between 10 and 40 min. At high chloride concentrations, multiple pits typically initiated within seconds of applying the potential and grew simultaneously. In these cases, experiments were short and were terminated after two or more pits had merged. However, at lower chloride concentrations, normally only one or two pits initiated or continued to grow, and the induction time for pit initiation was longer [42]; these experiments were therefore performed for longer periods. Pit parameters were automatically extracted from radiographs using a customised filter plug-in implemented in the ImageJ software [43], which is described elsewhere [41,44]. Details of the extraction of the local current density are described in reference [44]. Pit edge detection and definition of pit depth and width In order to quantify the growth of pits during successive radiographs, it is necessary to extract the co-ordinates of the pit boundary as well as parameters associated with the pit morphology. Fig. 1 illustrates the definitions used in this work for the pit "depth", "width" and "mouth". The maximum distance from the pit bottom up to the foil/solution interface is defined as the pit "depth", the maximum lateral extent of the pit is defined as the pit "width", and the horizontal distance between the two points where the pit boundary meets the top surface of the foil (i.e. the distance between the junction points of the pit's internal perimeter with the foil/solution interface) is defined as the pit "mouth". Current density extraction Once the pit boundary has been defined for several successive radiograph frames of a growing pit, the local current densities at the pit boundary can be calculated from the boundary velocity. Fig. 2 shows the position of the pit boundary 20 s after its earlier position (yellow boundary). The velocity of the pit boundary can be calculated at each point from the displacement along the local normal from one frame to a subsequent one at a later time dt; the displacement is measured as the normal distance from the centre of two adjacent points on the boundary at time t to the boundary at time t + dt. The velocity is then converted, using Faraday's second law, into a local current density:

i = (z F ρ / M) dx/dt,    (3)

where i is the local current density and dx/dt is the locally measured pit boundary velocity. Characteristic pit growth behaviour The typical variation with time of the pit current measured in a potentiostatic experiment on 304 (650 mV (Ag/AgCl) in 0.005 M NaCl), together with typical radiographs collected at different times, is shown in Fig. 3. It can be seen that, after a period of initiation, the current generally increases approximately according to t^(1/2), but strong fluctuations are imposed upon this trend. The radiographs also show that a dish-shaped pit with a very thin perforated metal cover gradually grows through periods of relatively fast local growth followed by a decrease in growth rate and eventual passivation of some of the (previously active) surfaces, accompanied by the development of fast-growing regions ('lobes'), generally growing sideways. As the pit grows, the supplied current increases owing to the increase in pit surface area.
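Equation (3) turns a measured boundary displacement between frames into a local anodic current density. A minimal numerical sketch is given below, using the constants quoted elsewhere in the paper (z = 2.2, M = 57.6 g mol⁻¹, ρ = 7.82 g cm⁻³); the function name and the example displacement are illustrative assumptions, not the authors' code.

```python
import numpy as np

F, z = 96485.0, 2.2          # Faraday constant (C/mol), transferred charge
M, rho = 57.6, 7.82          # g/mol, g/cm^3

def local_current_density(displacement_cm, dt_s):
    # Eq. (3): i = (z*F*rho/M) * dx/dt, with dx the displacement of a
    # boundary point along its local normal between two frames.
    v = np.asarray(displacement_cm) / dt_s        # boundary velocity, cm/s
    return (z * F * rho / M) * v                  # A/cm^2

# A boundary point advancing 1 um over a 5 s frame interval:
print(local_current_density(1.0e-4, 5.0))         # ~0.58 A/cm^2
```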
However, at ∼70 s, there is a sudden drop in current. This is likely to be associated with the development of a new perforation that allows escape of metal ions from the pit, diluting the pit solution and leading to local passivation. At ∼75 s, the current starts to increase again: this is associated with lateral growth of a new region of attack at the bottom of the pit. The continuing current fluctuations are a consequence of ongoing perforation and development of new regions of lateral pit growth, leading to the characteristic "lacy" perforated pit cover shown in Fig. 4. Pit covers of this type have previously been observed for pits in stainless steels [3,4,6-9,45]. Fig. 4 also shows an optical micrograph indicating the location of the lacquer drops applied at the two ends of the foil to prevent pit initiation. Pit growth under potentiostatic and galvanostatic control In this study, under potentiostatic control and in concentrations of 0.1 M NaCl and above, multiple adjacent pits of similar size and shape formed on grade 304 and grew together until they merged (Fig. 5). This simultaneous initiation and propagation of multiple pits appears to be a characteristic feature of potentiostatic control, since in these conditions there is no limit to the current that can be supplied to the system. A video of this process (speeded up 10×) is provided in the online version of this article (Video 1). Supplementary material related to this article can be found, in the online version, at doi:10.1016/j.corsci.2015.06.023. Conversely, it was found to be difficult to initiate pits reproducibly under galvanostatic conditions owing to competing crevice corrosion at the edge of the foil in contact with the epoxy. Therefore, in this work, galvanostatic measurements were generally preceded by a brief period under potentiostatic control to initiate the pits. Fig. 6 shows typical growth of a pit grown under galvanostatic conditions at a current of 10 μA, following pit initiation under potentiostatic control at +650 mV (Ag/AgCl) for a period of 10 s. Further information on the current signal during growth of pits of this type is provided elsewhere [41,44], and a video of this process (speeded up 10×) is provided in the online version of this article (Video 2). It is evident that a number of pits were initiated in the potentiostatic regime, but once the sample was switched to galvanostatic control, the smaller pits repassivated and only one pit survived and continued to grow. In the galvanostatic measurements carried out in this way, it was always the case that only one or two pits survived after switching from potential control to current control. Following a switch to galvanostatic control, if two pits survived, they usually grew at the same rate and to the same size, as illustrated in Fig. 7. A video of this process (speeded up 10×) is provided in the online version of this article (Video 3). Further evidence illustrating pit survival following switching from potential control to current control is available in reference [44]. Pit morphology A difference in morphology was frequently observed between pits grown under potentiostatic vs. galvanostatic control. Fig. 8 compares the morphology of pits grown potentiostatically and galvanostatically after a charge of ∼2.6 mC had passed. The potentiostatically grown pit (Fig.
8(a)) is shallow and smooth with a clearly defined, smooth perimeter. In contrast, the galvanostatically grown pit (Fig. 8(b)) is deeper but has a rougher surface and an etched perimeter. It should be noted that in this work the potentials that were applied under "potentiostatic" control and observed under "galvanostatic" control were significantly different. For example, the measured potential decays from 650 mV to 150-200 mV (Ag/AgCl) within a period of 150 s following the switch from potential control to an applied current of 10 μA. It is therefore likely that the observed differences can be attributed to the difference in interfacial potential. Fig. 9 shows the boundaries of pits at different stages of growth for potentiostatic and galvanostatic pits. In the pit grown potentiostatically (at higher potential), sideways growth via propagation of lateral lobes can be observed, whereas more uniform growth in all directions towards a circular shape can be observed for the galvanostatically grown (low potential) pit. Effect of lacquer on pit shape In these experiments, lacquer was applied to the ends of the metal foil to prevent pit initiation and increase the probability of growth of a single pit in the centre of the foil (Fig. 10(a)). However, a pit that initiates in the centre of the foil may sometimes grow under the lacquer (Fig. 10(b-d)). The presence of the lacquer means that perforation of the pit cover does not lead to local dilution of the pit solution. Instead, the pit is able to continue to grow horizontally, as shown on the left side of Fig. 10(b) and the right side of Fig. 10(c and d). The other side of each pit grows by the mechanism of successive perforation. This indicates how partially covered pits may lead to the development of crevice corrosion. Fig. 11 shows a growing pit with a plot of the local current density along the pit boundary, measured from the velocity of boundary movement using frames that are 5 s apart. In this graph (as in the following ones), the abscissa of the current density plots associated with the different images represents the distance along the pit contour (i.e. the pit boundary) from the point on the left-hand side of the image at which the pit intersects the original surface; this enables different points on the pit boundary to be located by a single coordinate. The developing fronts within the pits and their corresponding current densities in the plot are marked X and Y. At all of the times illustrated in Fig. 11, it can be seen that towards the outermost points on the perimeter of the pit the current density drops to zero, as expected for a passive region of the electrode. The transition from passive to active surface is abrupt, leading to undercutting and lobe formation. In the centre of the dissolving surface (i.e. at the bottom of the pit) the current is constant, generating a local minimum between two regions of maximum dissolution (located on the sides). Local current densities within pits Figs. 12 and 13 show the time-dependence of the local current densities within pits. At each frame time, the maximum current density along the pit perimeter was extracted. The current density at the mid-point of the pit was also extracted in order to give an indication of the current density at the pit bottom, assuming that the symmetry of the pit cavity is retained during growth. The fluctuation in the maximum current density illustrates the dynamic nature of growth at the developing lobes.
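The per-frame reduction just described (the maximum current density along the perimeter, plus the mid-point value as a proxy for the pit bottom of a symmetric pit) is straightforward to express in code. The sketch below is a plausible reconstruction under those stated assumptions; the function name and the profile values are invented for illustration.

```python
import numpy as np

def max_and_mid_current(i_boundary):
    # i_boundary: local current densities sampled along the pit boundary
    # for one frame, ordered from one mouth junction to the other.
    # Returns the frame maximum and the mid-point value (a proxy for the
    # pit-bottom current density if the cavity remains symmetric).
    i = np.asarray(i_boundary)
    return i.max(), i[len(i) // 2]

# Hypothetical profile (A/cm^2): passive near the mouth, active lobes at
# the sides, diffusion-limited plateau at the bottom.
profile = [0.0, 0.2, 3.8, 4.5, 1.6, 1.5, 1.6, 4.2, 3.6, 0.1, 0.0]
print(max_and_mid_current(profile))               # (4.5, 1.5)
```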
It is evident that the current density values remain fairly constant for the pit growing under potentiostatic control (Fig. 12), whereas the current density decays during growth of the pit under galvanostatic control, following the gradual decrease in potential (Fig. 13). Fig. 14 compares the maximum current density along the boundary of pits grown at 650 mV (Ag/AgCl) in bulk solutions of different chloride concentrations. A slight increase is observed in the maximum current density as the bulk chloride concentration increases; this is likely to be associated with the lower IR drop in solution at higher concentrations, leading to a higher interfacial potential. Pit growth rate In order to extract the pit growth parameter D_eff ΔC, pits were grown for an extended period under potentiostatic control until they merged into a 1D pit. Fig. 15 shows a series of images at different times for a pit grown in 1 M NaCl at 650 mV (Ag/AgCl). It can be seen that the coalescence of individual pits is relatively rapid, and by 300 s there is a uniform dissolution front and no pit cover is evident. It should be noted that it was not possible to grow individual pits of any size in 304 in 1 M NaCl under these experimental conditions. [Fig. 11 caption: Sequential growth of a pit and the corresponding local current density along the pit boundary, measured from the velocity of boundary movement using frames that are 5 s apart. X and Y are developing fronts within the pits and their corresponding current densities in the plot.] Fig. 16 shows a graph of pit depth squared against time for the pit shown in Fig. 15. The depth values are taken from the deepest part of the pit up to the pit mouth (the original interface between foil and solution). It can be seen that the plot is linear to a good level of approximation. Taking into account Eq. (2) and considering M = 57.6 g mol⁻¹ and ρ = 7.82 g cm⁻³, the value of D_eff ΔC estimated from the gradient of the plot is 4.36 × 10⁻⁸ mol cm⁻¹ s⁻¹. The same approach was used in other measurements leading to the propagation of isolated '2D' pits; in this case it was assumed that if the resulting plot of (pit depth)² against time was linear, then dissolution is diffusion controlled and the diffusion length is equal to the pit depth. An example of this methodology is reported in Fig. 17, which shows an isolated (2D) pit grown on a 316L stainless steel foil in 1 M NaCl at 750 mV (Ag/AgCl). In general, in the electrochemical conditions tested, pit growth on 316L was more difficult and less reproducible than on 304, but this experiment provides a good example of the growth of a pit with relatively little cover. A plot of pit depth squared as a function of time is shown in Fig. 18. The linear correlation between the square of the depth and time suggests that the pit is growing under diffusion control. In this measurement, the value of D_eff ΔC was estimated to be 2.5 × 10⁻⁸ mol cm⁻¹ s⁻¹. Table 2 summarises the D_eff ΔC values obtained for pits under different conditions. It may be seen that the D_eff ΔC values obtained for all 2D pits are lower than the value measured for the 1D pit. Aside from this, the values are relatively similar across measurements performed in potentiostatic conditions at different chloride concentrations, with higher variability in measurements carried out at lower chloride concentrations (pits No. 19-23). Slightly lower values are obtained for galvanostatically grown pits (No. 24-30), with relatively little variation.
This variation can be attributed to variation in the degree of perforation of the pit cover. Therefore, a "perforation factor" was estimated for 2D pits by taking the ratio of the value of D_eff ΔC for a 2D pit to that of the 1D pit. Lateral development is illustrated in Fig. 20, which shows the evolution of pit width with depth for different chloride concentrations. These plots are approximately linear for pits less than ca. 60 μm deep, from which it may be deduced that the ratio of pit width to pit depth is approximately constant in the early stages of growth. At later stages, the slope of the curve increases, indicating a relative increase in the rate of propagation in width with respect to depth. The overall ratio of width to depth increased with chloride concentration, from ca. 1-2 in 0.005 M NaCl to ca. 4 in 0.1 M NaCl, indicating faster growth sideways than in depth. Fig. 21 compares the width and depth of pits grown under a constant potential of 650 mV (Ag/AgCl) or a constant current of 10 μA. In both conditions, the pit width increases with increasing bulk chloride concentration. For a given concentration, pits are wider under potential control than under current control (Fig. 21(a)). However, as mentioned above, this is likely to be the result of the significant difference between the potentials of the two electrochemical regimes; e.g., 200 s after the start of growth, the measured potentials of galvanostatically grown pits were ∼140 and 208 mV (Ag/AgCl) at 0.1 and 0.01 M, respectively. The growth of pit depth does not show a systematic dependence on applied current, potential or chloride concentration (Fig. 21(b)). Pit stability product A value for the pit stability product can be calculated from the product of the local current density (i_a) and the local depth at each point along the pit surface, where the local depth is defined as the vertical distance from the pit surface up to the pit rim. Fig. 22 shows the stability product along the boundary of a pit grown in 0.1 M NaCl at 10 μA for 22 and 47 s, following initiation at 650 mV (Ag/AgCl) for 10 s. At the initial stages (a), the stability product is less than 2.5 mA/cm all along the pit boundary. As the pit grows, the stability product exceeds 3 mA/cm only in the pit bottom areas and fluctuates around this value during the rest of the growth time. This value is broadly consistent with previous work [8]. [Table 2 caption: D_eff ΔC calculated for pits grown in different conditions. The pit reported in the first row is a '1D pit' (general dissolution of the entire exposed surface), while the other experiments all refer to '2D pits' (grown in isolation from each other and leading to damage only locally). All pits were grown in 304 foil, unless otherwise stated. Columns: No.; [NaCl] (M); E (mV); I (μA); D_eff ΔC × 10⁸ (mol cm⁻¹ s⁻¹); Perforation factor (%); Notes.] Fig. 23 shows the maximum stability product as a function of time for pits grown galvanostatically at 10 μA in 0.1 M NaCl solutions. The stability products tend to fluctuate between 3 and 4 mA/cm, with some sudden increases to higher values. Pit growth shape It is evident from this work (Figs. 5 and 6) that both the shape and the number of pits are influenced by whether the sample is under potentiostatic or galvanostatic control. Under potentiostatic growth conditions, multiple pits initiate and continue to grow at the same rate (Fig. 5). This is consistent with the observation of the sudden initiation of pit sites and stable growth of pits above E_pit and the critical pitting temperature (CPT) [46,47].
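The stability product defined above is simply the pointwise product i_a·x along the boundary, with x taken as the vertical distance from the pit surface to the pit rim. A minimal sketch follows; the function name and the sample values are invented, but the numbers are chosen so that the product approaches ~3 mA/cm only near the pit bottom, as reported.

```python
import numpy as np

def stability_product(i_a, depth_cm):
    # Pointwise pit stability product i*x (A/cm): local anodic current
    # density times local depth (vertical distance from the pit surface
    # up to the pit rim) at each boundary point.
    return np.asarray(i_a) * np.asarray(depth_cm)

# Hypothetical boundary samples for a pit ~50 um deep at its centre:
i_a   = np.array([0.2, 0.6, 0.7, 0.7, 0.5, 0.2])   # A/cm^2
depth = np.array([10, 30, 50, 50, 28, 8]) * 1e-4   # cm
print(stability_product(i_a, depth) * 1e3)         # mA/cm; ~3.5 at the bottom
```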
In the experiments shown here, it was difficult to initiate pits reliably under galvanostatic control, so pits were initiated under potentiostatic control for 10 s and then grown under galvanostatic conditions. In similar conditions, it was previously found that all of the pits rapidly die except one "champion pit" [48,49]. The difference is that for galvanostatic growth the available current is limited while the pit, and thus the dissolving surface area, grows larger, leading to a gradual decrease in current density. Under such conditions, the current flowing is insufficient to maintain a concentrated solution within the pits, so they gradually repassivate. If multiple pits are present initially, once the electrochemical control is switched to galvanostatic, the applied current will flow into the pits that provide the least electrical resistance (i.e. resistors in parallel, according to the conventional rules of electrical circuits). Therefore, only pits with lower electrical resistance can continue to grow. The electrical resistance is likely to be affected mainly by the extent of the active pit surface area and the perforation of the lacy cover. Fig. 9 compares the pit development mechanism under potential and current control. Under potential control, a pit propagates by successive development of laterally expanding lobes, consistent with the schematic model proposed by Ernst and Newman [11]. Pits growing under galvanostatic conditions initially propagate in a similar shape to potentiostatically grown pits; small lobes on both sides of the pit undercut the metal and perforate the cover, giving a sharp pit perimeter. However, the pit shape gradually adjusts to accommodate the limited applied current with the lowest electrical resistance, resulting in a relatively uniform dissolution rate in all directions and thus an approximately circular shape. The ratio of pit width to pit depth is ca. 2-4 in the potentiostatically grown pits and ca. 1.4-2 in the galvanostatically grown pits, suggesting that pits grown under potentiostatic control tend to be less penetrating (more dish-shaped) than those grown under galvanostatic control (this may be related to the different potentials involved). As the pit grows (galvanostatically) and its perimeter increases, the average current density, and accordingly the interfacial potential, decreases, and the pit perimeter at the bottom transforms into a rough, etched surface. This transition agrees with Sato's idea [50,51] and with the observations of Ryan et al. [27] that pits initiated at high potential often grow with a polished surface, but if the potential is decreased, pits either repassivate or propagate in a salt-layer-free active state with a convoluted structure and the lowest metal ion concentration at which propagation can continue. In contrast, in pits grown under potentiostatic control, the pit perimeter looks sharp and well-defined throughout the whole growth period (see e.g. Figs. 4 and 5); this is consistent with the observations of Sato [50,51] that pits grown at higher potentials have polished and bright internal surfaces, suggesting that they are covered by a salt layer over the majority of their internal surface, since polished surfaces are characteristic of electropolishing dissolution beneath a salt layer [3,8,15,52]. Evidence for the presence of a crystalline salt layer at the bottom of pits has also been observed in X-ray diffraction measurements of 2D pits of the type shown in the present work [53].
It should be noted that in the present work the potentiostatic measurements used a relatively high potential (+550 to +750 mV (Ag/AgCl)), whereas the galvanostatically grown pits developed at lower potentials (typically ranging from +600 down to <100 mV (Ag/AgCl)) during growth. Thus, the significant difference in the evolution of pit morphology between potentiostatically and galvanostatically grown pits is likely to be a result of the decrease in interfacial potential during the course of galvanostatic pit growth. Current density around the pit perimeter and its variation with time The approach presented here and in a previous publication [41] shows how the local current density around the perimeter of a pit can be quantified. Previous work by Ernst and Newman [11,12] showed the overall change of pit shape with time, but without quantification of the local current density. Other researchers have measured average values from circular pits growing in thin films, and so have not captured the local differences in current density that arise from the escape of metal ions from the pit mouth. There have been a number of purely electrochemical measurements of pit current density made on the assumption that pits grow homogeneously as hemispheres: the current density values vary in the range 0.1-10 A/cm² for metastable pits [8,9,54] and stable pits [52,55]. Similar average values are found in the present work, but the key novel contribution of this paper is the quantification of the variation of the local current density around the perimeter of the pit. For pits under potentiostatic control, the pit current densities can be divided into three regions, consistent with those used in the model of Laycock and co-workers [14]. Near the mouth of the pit, where the concentration of metal ions in the solution is relatively low, the metal repassivates, leading to a negligible current density. At the bottom of the pit, where the presence of a salt layer is expected, typical current densities are in the range 1-2 A/cm². Higher current densities, in the range 3-5 A/cm², are found between the passive and salt-layer regions, where pit lobes grow laterally with active dissolution in the absence of any salt layer. Close to the pit mouth, there is a remarkably sharp transition between high active dissolution and the adjacent passive region. The high current density characteristic of this transition may relate to the critical current density for passivation, i_crit, proposed by Salinas-Bravo and Newman [56], a concept that has recently been developed further [57-60]. The observation of an extremely sharp transition between active and passive regions may also shed light on mechanisms of stress corrosion cracking (SCC) that require passive walls and an active tip to maintain the conditions necessary for transgranular SCC. The current density trend in all of the potentiostatically grown pits shown in Fig. 12 indicates that the maximum i_a fluctuates around a value which decreases slightly as the pits grow. Less fluctuation is seen in the mid-point current density, which can be considered as the current density at the pit bottom (assuming a symmetrical pit shape), and which decreases smoothly during pit growth. Comparison of the maximum current densities associated with lateral growth (i_a), shown in Fig. 14, indicates that an increase in the chloride concentration of the bulk solution results in a slightly higher current density.
This result suggests a dependence of the growth rate at laterally developing lobes on the bulk chloride concentration, and in particular on the resulting IR drop in the solution. In other words, the growth rate at developing lobes (where i_a is at its maximum) depends on the interfacial potential; an increase in the chloride concentration of the bulk solution causes a smaller IR drop and thus a higher interfacial potential, which leads to a higher dissolution rate and higher i_a. The dependence of lateral growth on potential supports the idea that the laterally corroding surface is not covered with a salt layer, which is consistent with the observations of Ryan et al. [27,36] and Ernst and Newman [11]. The current density profiles shown in Fig. 13 illustrate a clear decrease with growth time, which is characteristic of galvanostatic growth: as the pit propagates and the corroding surface area increases, the local current density decreases owing to the limited availability of applied current. Additionally, as the pit propagates, a more uniform distribution of current within the pit can be deduced from the smaller difference between the maximum and pit mid-point current densities, indicating a stabilisation of the characteristic aspect ratio (i.e. the ratio between width and depth) after an initial transient. The decrease in current density with time is consistent with the observations of Alkire and Wong [52], although in our work a linear relationship between current density and the square root of time was not observed. Growth of pit depth In this work, a linear relationship has been observed between the square of the pit depth (h²) and time (t), consistent with diffusion-controlled growth [20,23,53,61-65]. The gradient of the h² vs. t plots is proportional to the product D_eff ΔC, which is given in Table 2. This suggests that pits grow in depth under diffusion control. Although the slopes (D_eff ΔC values) showed some variability (generally within a factor of 2), no systematic change was found with chloride concentration, or with whether the pits were grown under potentiostatic or galvanostatic control. This supports the idea that stable pits grow under diffusion control with a salt layer at the bottom, which adjusts in thickness so that the interfacial potential between the metal and the salt layer gives a current density equal to the rate of diffusion of metal ions [26,61]. As a result, the growth of pit depth depends only marginally on the external conditions (both polarisation and chloride concentration). While D_eff ΔC might be expected to depend to some extent upon the pit geometry, the most likely cause of the variation is the (unknown) variation in the extent of pit cover perforation developed in different conditions. If no cover exists (a condition achieved in the case of the '1D' pit in this study), a maximum D_eff ΔC is likely to be observed. In this study the 1D pit value of D_eff ΔC is in broad agreement with the values reported in the literature (Table 1). The decrease in D_eff ΔC obtained for pits with a (perforated) cover (typically 50% of the maximum value) can be attributed to the decrease in the effective rate of diffusion of metal ions away from the pit caused by this physical barrier. The variation in the perforation factor during pit growth is a major source of uncertainty in the prediction of pit growth.
Its role in pit stability is important, acting either as a resistive barrier [9] or a diffusion barrier [8], thus protecting metastable pits, and even stable pits [4,6], from repassivation. In order to provide useful input parameters for pit growth models, the simplest approach suggested in this work is to estimate an empirically determined "perforation factor" by taking the ratio of the value for a 'covered' pit to the value obtained for a pit without a cover. Lateral growth of pits As shown in Figs. 19 and 20, the rate of growth in pit width increases with chloride concentration. These changes are consistent with the results of Ernst and Newman [11]. The most likely reason for the change in width with chloride concentration is that an increase in the bulk chloride concentration leads to a decrease in the IR drop in the solution and, therefore, an increase in the interfacial potential and dissolution rate at the laterally developing fronts, which grow under activation/ohmic-drop control. At the pit bottom, however, dissolution is diffusion-limited as described above, so that the change in pit depth with time is independent of the salt concentration. Comparison of pits grown under potentiostatic or galvanostatic control in solutions of the same chloride concentration shows that while the pit depths are similar (growing under diffusion control), the pit widths are greater for the pits grown under potentiostatic control. This can be attributed to the lower potential measured during the galvanostatic experiments carried out in this work. Fig. 22(a) shows that the pit at the initial stages grows below a stability product of 2.5 mA/cm. This is in agreement with the work of Pistorius and Burstein [8], which showed that metastable, and even stable, pits initially grew with stability product values below 3 mA/cm. This is due to the diffusion barrier provided by the pit cover, visible in the figure, which maintains the concentrated solution inside the pit cavity. Fig. 22(b) shows a slight increase in the stability product as the pit enlarges. The stability products as a function of time for galvanostatically grown pits, shown in Fig. 23, illustrate an initial increase, but tend to fluctuate between 3 and 4 mA/cm for the rest of the growth time. It is seen that pits initially grow below the critical stability product with the support of their cover. Even after that, the stability product exceeds 3 mA/cm only at the pit bottom; the rest of the boundary grows below the critical value because of the diffusion barrier provided by the cover. This emphasises the importance of the lacy cover for the transport of metal ions from the pit bottom into the bulk solution and supports the proposed "perforation factor". Conclusions 1. Radiography observations have confirmed that pits grown under potentiostatic control at relatively high potentials develop via lobes through an undercutting process that perforates the metal surface and gradually changes the pit shape from semicircular at the start to dish-shaped as growth proceeds. In these conditions the pit perimeter is smooth, consistent with the presence of stable chemical conditions (and hence propagation rates) within the pit (the salt layer present at the pit bottom is likely to provide a chemical buffer against any externally driven change). 2. In the early stages of pit growth under galvanostatic control following initiation under potentiostatic control, pits propagate by lobes undercutting the metal in a similar way to potentiostatic growth.
As the pits propagate under constant current, however, the potential decreases and they tend to approach a circular shape with a rough, etched surface at the bottom, which is likely to grow close to the critical concentration required for propagation, without a salt layer. The change in growth mode may also be associated with the gradual decrease in potential as the pit grows. 3. The local current density along the pit perimeter can be directly measured from the movement of the pit boundary using suitable imaging techniques (e.g. X-ray radiography). The active local current density inferred on the basis of these techniques varied (locally) between ∼1 and 5 A/cm². 4. The maximum current density along the pit perimeter is observed at the transition point from the passive to the active region of the pit wall and is between ∼3 and 5 A/cm². It is suggested that this is the critical passivation current density, i_crit, which increases slightly with increasing bulk chloride concentration. The lowest active current density is seen at the pit bottom, and is a diffusion-limited current density associated with a metal-chloride salt layer. 5. The current density within a pit under potentiostatic control remains almost constant during growth, whereas it decreases significantly during galvanostatic pit growth. 6. For both potentiostatically and galvanostatically grown pits, a linear relation exists between the square of the pit depth and the growth time, independent of the bulk chloride concentration, which suggests that the increase in pit depth is under diffusion control and that the pit bottom is covered with a salt layer. 7. The diffusion-related parameter D_eff ΔC can be extracted from the radiographic 2D pit growth data and was found to be around 50% of the value for a 1D pit, reflecting the effectiveness of lacy pit covers in hindering the diffusion of metal ions out of the pit. The value increases with increasing chloride concentration of the bulk solution, reflecting the increase in porosity of the cover. 8. Lateral growth of pits is controlled by the conductivity (and therefore the chloride concentration) of the solution for both potential- and current-controlled regimes.
Human iPSC-derived photoreceptor transplantation in the cone dominant 13-lined ground squirrel Summary Several retinal degenerations affect the human central retina, which is primarily comprised of cones and is essential for high-acuity and color vision. Transplanting cone photoreceptors is a promising strategy to replace degenerated cones in this region. Although this approach has been investigated in a handful of animal models, commonly used rodent models lack a cone-rich region, and larger models can be expensive and inaccessible, impeding the translation of therapies. Here, we transplanted dissociated GFP-expressing photoreceptors from retinal organoids differentiated from human induced pluripotent stem cells into the subretinal space of damaged and undamaged cone-dominant 13-lined ground squirrel eyes. Transplanted cell survival was documented via noninvasive high-resolution imaging and immunohistochemistry to confirm the presence of human donor photoreceptors for up to 4 months posttransplantation. These results demonstrate the utility of a cone-dominant rodent model for advancing the clinical translation of cell replacement therapies. INTRODUCTION The evolution of stem cell and retinal organoid technologies has generated remarkable opportunities for therapeutic development, including precision medicine and cell replacement therapies. In vitro-derived three-dimensional (3D) retinal organoids are "mini-retinas" that faithfully mimic the in vivo retinogenesis timeline, making each of the seven major cell types of retinal tissue sequentially in a self-assembled fashion. These retinal organoids are used as a mainstream tool for advancing numerous studies, including organoid development, disease modeling, drug discovery, gene therapy, regenerative medicine, and precision medicine (Kandoi and Lamba, 2023). Stem cell-derived photoreceptors have been shown to successfully survive and integrate into host retinas of numerous animal models following transplantation, including mice and rats (Lamba et al., 2009, 2010; Mandai et al., 2017; McLelland et al., 2018; Zhu et al., 2017) and, more recently, canines (Ripolles-Garcia et al., 2020). However, all of the models used to date have experimental limitations. For example, traditional rodent models lack a cone-rich central region resembling the human macula, which makes it difficult to test therapies for many macular or cone dystrophies. Although larger animal models such as canine, swine, and nonhuman primates (NHPs) have a retinal architecture that mimics the cellular landscape of the human macula (Aboualizadeh et al., 2020; Chandler et al., 1999; Mowat et al., 2008), these models can be expensive, have limited availability, and lack the expansive transgenic tools that are available in rodents.
Despite these limitations, the ability to use noninvasive imaging to monitor cell survival and integration increases the utility of all animal models. For example, scanning light ophthalmoscopy (SLO) and optical coherence tomography (OCT) have been used in several studies to monitor transplanted cells in vivo in mice and NHPs (Liu et al., 2020; Takagi et al., 2018; Uyama et al., 2022). While the noninvasive nature of these imaging approaches enables longitudinal study, they have limited lateral resolution. Adaptive optics techniques have recently been applied to numerous animal models (Geng et al., 2012; Huckenpahler et al., 2019; Hunter et al., 2010; Joseph et al., 2020; Sajdak et al., 2016), enabling cellular-resolution imaging of retinal cells. Noninvasive fluorescence adaptive optics SLO (AOSLO) has also been used to track the survival and migration of individual transplanted photoreceptors in cynomolgus monkey eyes (Aboualizadeh et al., 2020). Coupling these advances in imaging with novel animal models offers an opportunity to move retinal therapeutics forward. Here, we sought to advance cone replacement therapy efforts by exploring nonconfocal AOSLO imaging in the 13-lined ground squirrel (13-LGS). Advantages of the 13-LGS include its unique "cone-rich" retina (Ahnelt, 1985; Jacobs and Yolton, 1969), low cost and ease of availability, and amenability to noninvasive retinal imaging. In the present study, we have assessed the survival and integration of fluorescently labeled photoreceptors derived from human retinal organoids in 13-LGS with healthy, undamaged control retinas as well as mechanically or chemically damaged retinas. Retinal damage was induced chemically by intravitreal injection or mechanically by retinal detachment in the inferior retina. Following transplantation, transplanted cells were tracked longitudinally to examine survival. Using nonconfocal AOSLO, we were able to resolve structures resembling cone inner segments in the transplanted region, with a concomitant fluorescent signal captured through SLO. The in vivo data were corroborated with immunohistochemical studies that demonstrated the presence of human and retinal cell-type-specific markers for up to 4 months posttransplantation in these retinas. Our results demonstrate the utility of this alternative cone-rich preclinical model for advancing cone replacement therapeutic approaches.
RESULTS Long-term survival of human induced pluripotent stem cell (hiPSC)-derived retinal cells up to 4 months posttransplantation An hiPSC line with GFP knocked in at the AAVS1 safe harbor locus was differentiated into cone-rich 3D retinal organoids (Figures S1A-S1C) (Pei et al., 2015). The 60-day-old organoids were briefly pulsed (24 h) with a Notch inhibitor (PF-03084014) to drive the fate toward cone differentiation and reduce the proliferative stem cell population, as previously described (Chew et al., 2022). 13-LGS retinas were damaged at least 2 weeks pretransplantation through either chemical insult or mechanical detachment (Figures S2A-S2C). The horizontal optic nerve head (ONH) in 13-LGS eyes was used as a landmark to distinguish the degenerated inferior from the nondegenerated superior retinal region. Upon receipt of the organoids from the University of California, San Francisco (UCSF) at the Medical College of Wisconsin (MCW), the organoids were dissociated into single cells and their viability was assessed by flow cytometry to be 61.2%-93.5%. The average viability obtained was 77.18% ± 8.08% from a total of 11 shipments of organoids for transplantation conducted over a period of 3 years (Figures S1D and S1E). Approximately 0.7-1 million cells dissociated from the retinal organoids were injected into the subretinal space (SRS) of 13-LGS eyes. The animals were immunosuppressed by adding 210 mg/L of cyclosporine A (AC457970050, Thermo Fisher Scientific, Waltham, MA) to their drinking water for 2 weeks (1 week before and 1 week after transplantation) to prevent perioperative cell rejection. Posttransplantation, the presence and location of the transplanted bolus of cells were visualized immediately by fluorescent SLO (fSLO) and spectral-domain OCT and compared against the pretransplantation baseline images. Longitudinal retinal imaging collected monthly indicated the presence of GFP signal from transplanted cells up to 16 weeks in both the ATP-induced and retinal detachment damage models (Figure S2D). The fluorescence signal persisted for up to 16 weeks in 11/16 eyes of the ATP-damaged model and in 11/18 eyes of the retinal detachment model. In the undamaged controls, 8/20 eyes showed survival of transplanted cells up to 12 weeks. Although we primarily delivered the cells into the SRS, reflux of injected cells was observed in the non-SRS space along the needle track in some of the transplanted retinas (Figure 1). These transplanted cells settled in different retinal layers within the 13-LGS retinas (Figure 1A) or in the vitreous space/epiretinal location (Figure 1B). Overall, monthly follow-up using fSLO fundus and OCT in vivo imaging confirmed the presence of cell clumps in the non-SRS (vitreous space, epiretinal location) and SRS of the degenerated and undamaged control models (Figure 1). Donor photoreceptor inner segment structure colocalized with the persistent GFP patch at 3 months postinjection At the end stage of either a 6-week or 12-week follow-up, the retina was imaged with a custom AOSLO system to visualize the transplanted cells with single-cell resolution. We confirmed the presence and identified the location of the transplanted region through fSLO imaging. We then further sorted the transplanted retinas by the location of cell clumps in different retinal layers via SD-OCT imaging (Figure 1). Through the fSLO and AOSLO montage overlay, we selectively targeted the region where the cells resided in each model: ATP, retinal detachment, and undamaged eyes (Figure 2A). We further
extracted one region of interest (ROI) from the AOSLO montage, and the results are presented in confocal and nonconfocal modalities (Figure 2A). The confocal modality collected the light reflected from the photoreceptor outer segments, whereas the nonconfocal modality collected light that was scattered from the photoreceptor inner segments (Scoles et al., 2014). Evidence of host photoreceptor degeneration was noted in the ATP and retinal detachment models via AOSLO through disruption of the host cone photoreceptor mosaic. Furthermore, the region of the transplanted cells captured on the confocal channel was observed as reflective material at 150 days postinjection. This result was not unexpected, because photoreceptor outer segments do not develop until 180-200 days in retinal organoid cultures for the hiPSC line used in this study. Further tracking of these specific cells via nonconfocal imaging indicated the presence of structures resembling photoreceptor inner segments in the transplanted region in the retinal detachment model (Figure 2B). These mature photoreceptors were not visualized within the ATP model, suggesting that the transplanted cells in the ATP model animals were not uniformly bounded in the photoreceptor layer based on in vivo imaging. Further comprehensive image analysis of the retinal detachment model was performed in which AOSLO montages were spatially aligned to the fSLO image to colocalize the GFP fluorescent signal (Figure 2B). Visualization of the transplanted cells was seen most inferiorly within the subretinal detachment, likely due to gravitational effects (Figure 2B). The three locations presented are the transition/border of the subretinal detachment, within the detachment, and at the transition zone to the injected cells (Figure 2B). The morphology of the unaffected cone mosaic with intact inner and outer segments is shown above the white dashed lines in the damaged-transition images acquired by confocal/nonconfocal imaging (Figures 2B-1 and 2B-2). In contrast, the areas below the white dashed lines primarily show the disrupted cone mosaic and some enlarged photoreceptors, likely due to edematous swelling that occurs secondarily to damage (Figures 2B-1 and 2B-2). A complete loss of photoreceptors in the ablated region was seen, as evidenced by the exposed hexagonal packing pattern of the underlying retinal pigmented epithelial cells (Figures 2B-3 and 2B-4). Lastly, examination of the confocal images in the transplanted transition zone of injected cells revealed a reflective tissue covering the ablated region below the white dashed line compared to the nontransplanted/damaged region seen above the white dashed line (Figure 2B-5). Although a resemblance of a photoreceptor mosaic was not seen, our imaging demonstrates the presence of injected material in the region below the white dashed lines in the transplanted-transition images (Figure 2B-5). In the nonconfocal modality image of the transplanted transition region, the presence of potential replacement photoreceptors is demonstrated by structures resembling the inner segments of the photoreceptors in the region below the white dashed line (Figure 2B-6).
AOSLO confocal (A-2, A-5, and A-8) and nonconfocal (A-3, A-6, and A-9) magnified images of the white boxed region on the transplanted ROI (A-1, A-4, and A-7) showing the reflective and scattering pattern of the photoreceptor outer segments and inner segments from a single photoreceptor cell, respectively.ATP and retinal detachment models show a complete loss of outer segments (A-2 and A-5).The transplanted cells in all 3 models show putative inner segments (white arrowheads, A-3, A-6, and A-9).(B) A representative fSLO image overlaid on AOSLO montage from the retinal detachment model.The confocal and nonconfocal images captured using AOSLO at the damaged transition border (B-1 and B-2), within the damaged region (B-3 and B-4), and at the transplanted transition border (B-5 and B-6) is shown here.The dotted white line marks the boundary separating the nondetached and detached retinal regions (B-1 and B-2).The image shows the intact outer segment and inner segments above the white dashed line area in the undamaged retinal region versus the loss of the cone mosaic below the white dashed line area in the detached retinal region (B-1 and B-2).Complete degeneration of cone mosaic with hexagonal packaging of the underlying RPE is shown in the damaged images (B-3 and B-4).The confocal and nonconfocal images in the transplanted transition region show reflective patterns instead of outer segment structure and putative inner segments in the transplanted region (below the white dashed line), respectively (B-5 and B-6).Scale bar, 50mm. structures resembling the inner segments of the photoreceptors in the region below the white dashed line (Figure 2B-6).To sum up, the in vivo imaging data show both the survival and maturation of donor photoreceptors following the subretinal transplantation in the 13-LGS retinal detachment model up to 16 weeks. 
Transplanted donor cells were identified as photoreceptor/ganglion/amacrine cells by histology

Following postmortem tissue collection, retinal sections were stained with the previously described human nuclear marker (HuNu) (Zhu et al., 2017), and cells expressing GFP were confirmed to colocalize with it (Figures 3A and 4A). Furthermore, we identified more human nuclear marker-expressing cells compared to GFP, likely due to the silencing of GFP promoters as cells mature. We confirmed this by assessing GFP expression in D150 organoids from this line. We observed lower or absent GFP signal in OTX2+ photoreceptors and especially in HuC/D+ (ELAVL3/ELAVL4) inner retinal neurons, while it was robustly present in VSX2+ retinal progenitors (Figures S1F and S1G). We further confirmed human origin with another human-specific marker, human LMNB2 (Figure S4A). Interestingly, we also observed that the photoreceptor loss in the host retina was complete in the ATP model relative to the retinal detachment model when comparing the DAPI channel in the outer nuclear layer (ONL) (Figures 3A, 4A, and S2B). The retinal detachment model still had a partially preserved photoreceptor layer (Figures 4A and S2C). Next, we explored the fate of the HuNu+ cells. Cells transplanted in ATP-induced damaged retinas were found to be scattered axially across the retina (Figure 3). This suggests that the cells were able to migrate across the outer limiting membrane and integrate into different retinal layers. In contrast, the transplanted cells in the retinal detachment model were mainly confined within the subretinal space, with very few cells migrating into the retina (Figure 4). Based on these observations, we conclude that the migration of transplanted photoreceptor cells into the retina can occur but depends on the severity of damage.
We next evaluated the fate of the transplanted cells. Overall, the majority of the transplanted cells were found to be OTX2+ photoreceptors, with some inner retina-migrating cells expressing the amacrine/ganglion marker HuC/D in both ATP and retinal detachment models (Figures 3B, 3C, 4B, and 4C). The HuNu+ cells in the subretinal space of both ATP and retinal detachment models, and a few that migrated into the surviving host ONL, were identified as photoreceptors by co-staining with the photoreceptor marker OTX2 (Figures 3B and 4B), with GFP+ processes extending into the plexiform layer, where photoreceptors synapse with bipolar and horizontal cells. The majority of the HuNu+/OTX2+ cells also co-stained for another phototransduction marker, recoverin (Figures 3C and 4C). Finally, we confirmed the presence of human cones in some of the transplanted GFP+ cells by the expression of retinoid X receptor gamma, cone arrestin (ARR3), and green cone opsin (GCO), although the expression of the mature markers (GCO and ARR3) was much lower than that in host photoreceptors (Figure S3). The HuNu+ cells that migrated into the inner nuclear and ganglion cell layers were HuC/D+, suggestive of ganglion and amacrine cell fate (Figures 3D and 4D). Further analysis with the ganglion cell-specific marker BRN3A (POU4F1) showed that these cells were BRN3A−, suggestive of amacrine fate (Figure S4B). We also observed some migration of SOX2+/HuNu+ cells, indicating the limited migration ability of either stem cells or glia (Figure S4C). Finally, we assessed synaptic development by staining for a human-specific synaptic marker, synaptophysin (Ludwig et al., 2023). We observed coexpression of the synaptic marker in GFP+ cells in both sets of transplant conditions (Figures 3E and 4E). Overall, the localization of the transplanted cells varied depending on the damage modality, and the fate varied depending on the destination of the cell within the retina.

DISCUSSION

This study explored the survival and integration of transplanted hiPSC-derived photoreceptors in the cone-dominant 13-LGS retina. Here, we emphasize our focus on the survival of the photoreceptor precursors in the cone-dominant environment and compare different damage models. For this study, we adopted a xenotransplantation approach to successfully demonstrate the survival of 60-day-old photoreceptors (from 3D-retinal organoids) in both chemically (ATP) and mechanically (retinal detachment) induced retinal degeneration models of 13-LGS for up to 16 weeks posttransplantation. In vivo imaging modalities were used to consistently track the presence of GFP signal expressed by the transplanted human donor cells in the host retina. Furthermore, we imaged the transplanted location with a custom AOSLO system to evaluate the structure of these cell clusters compared to undamaged retinas with high-resolution imaging. Although this is not direct proof that these are indeed developing inner segments from the donor photoreceptors, it is encouraging because the structure resembles that of photoreceptor inner segments. In a prior transplantation study within NHPs, fluorescence AOSLO was used to directly identify transplanted donor cells (Aboualizadeh et al., 2020).
Use of the 13-LGS model provides an advantage over previously reported transplantation studies using traditional rodent (e.g., mouse, rat) models due to its unique cone-dominant, rather than rod-dominant, retinal composition. This difference offers a host retina with an environment potentially better suited to recapitulate the human macula, which is cone dominant and drives some of our most important visual functions. Although retinal transplantation has been tested in NHPs (Shirai et al., 2016), which have a fovea like the human retina, our 13-LGS model is relatively less expensive and more easily accessible (Merriman et al., 2012). The main limitation of the 13-LGS is its natural cycle of hibernation from October to March (Sajdak et al., 2019). During hibernation, the 13-LGS retina undergoes extensive retinal remodeling (Sajdak et al., 2019), which can confound long-term studies because the impact of remodeling on the transplanted cells is unknown. This may limit longer-term functional studies, as retinal organoid-derived human photoreceptors completely mature and express opsins at >180-200 days (Capowski et al., 2019; Kandoi and Lamba, 2023).

In transplantation studies, ex vivo histology results have the advantage of presenting direct evidence of the survival and integration of donor cells. While this provides information for a specific time point of assessment, the progression of cell survival and integration over time is lost. Conversely, our study adopted multimodal noninvasive imaging of the transplanted cells. This allowed for the longitudinal tracking of the survival of individual cells and their integration over a 16-week period. The real-time tracking of viable cells is beneficial because it allows us to continuously monitor the presence of donor cells, their position, and the time point of their integration into the host retina. In addition, the noninvasive imaging methods used here to capture fluorescence signal and cell location, SLO and SD-OCT, are also commonly used clinical tools, highlighting the translational capability of results obtained by these methods to future human clinical trials. Although noninvasive imaging offers the ability to continually capture information on cell survival and integration progression, it remains insufficient as direct evidence for characterizing retinal changes. Hence, the in vivo phenotypic findings would ultimately need confirmatory histology data to support our outcomes. Here, we successfully combined the dual approach of in vivo imaging along with histology at 16 weeks posttransplantation to confirm the integration status and the donor cell origin. Follow-up studies may be considered to validate the phenotypic findings of in vivo imaging against ex vivo evidence at each of the defined time points.
Several previous rodent studies suggest that donor cells can transfer protein into host cells and confound transplant data analysis (Pearson et al., 2016; Santos-Ferreira et al., 2016). This is thought to be predominantly due to cytoplasmic exchange through nanotube connections (Heisterkamp et al., 2022; Ortin-Martinez et al., 2021). However, in our findings, the material transfer of cytoplasmic protein is unlikely to confound our integration data, because GFP+ cells expressed at least two different human-specific markers. Interestingly, although most of the transplanted cells were OTX2-expressing photoreceptors, some human cells that migrated further into the ganglion cell layer or inner retina expressed the amacrine cell marker. This demonstrates the migration potential of these cells into the retina from the subretinal space where they were originally transplanted (Figures 3D and 4D). However, the degree of migration seen was dependent on the degree of photoreceptor loss in the damage model used. A complete disruption of the photoreceptor layer, and possibly the outer limiting membrane, seems to be required for large-scale migration into the inner retina. Another study in a murine model showed the incorporation of hiPSC-derived cones in a retina with cone degeneration (Gasparini et al., 2022). Future studies with more detailed characterization of the microenvironmental factors contributing to migration and integration of transplanted hiPSC-derived cones in such transplantation studies can provide clues for successful therapies.

Anesthesia and eye preparation for transplantation and imaging

All of the 13-LGS were anesthetized using isoflurane mixed with oxygen. The animals were first induced with 5% isoflurane (901805, VetEquip, Livermore, CA) inhalant in a clear induction chamber. The isoflurane was maintained at 5% throughout the surgical procedures through a nose cone. During the imaging procedures, the isoflurane was maintained at 1%-4% through a nose cone. Isoflurane exposure to investigative staff was minimized by passive gas scavenging. Following isoflurane induction, the eyes of the 13-LGS were dilated and cyclopleged with 2.5% phenylephrine (17478020102, Akron, Lake Forest, IL) and 1% tropicamide (17478010212, Akron). Next, the animal was positioned on a preheated platform for transplantation. For imaging, the animal was positioned in an imaging cassette with a custom mask mount to hold the nose cone in place. Each imaging cassette had a nodal point with which the eye was aligned to minimize adjustment of the position of the animal after the initial setup.
Retinal injections in 13-LGS

Retinal degenerations were induced chemically via intravitreal injection of ATP (AAJ1058509, Thermo Fisher Scientific, Waltham, MA) or mechanically by subretinal injection of 0.9% normal saline (012007, Phoenix, Manhattan, KS). Before all of the retinal surgical procedures, the dilated and cyclopleged eyes received a drop of tetracaine (10UEF, Alcon, Fort Worth, TX) to provide temporary local anesthesia. The eye was then treated with a drop of betadine (405943, Alcon) to sterilize the region surrounding the injection site and minimize the chance of infection from injections. Next, the top of the eye was coated with Gonak solution (9050-1, Sigma Pharmaceuticals, North Liberty, IA), and a small sterile circular coverslip was used to facilitate visualization of the fundus. Then, 10-µL intravitreal injections of 0.723 M ATP were delivered with a 1-cm³ insulin syringe close to the retinal surface in the vitreous space. For retinal detachment, the needle was first placed into the subretinal layer, and 50-75 µL of 0.9% sterile normal saline was injected into the inferior retina below the ONH. Degenerations were induced at least 2 weeks before cell transplantation. Cells were transplanted via subretinal injection by using a 25G trocar to make a small 1-mm insert from the limbus, and approximately 0.7-1 million dissociated cells (in 40 µL of 3D-retinal differentiation medium [RDM] with all-trans retinoic acid [ATRA]) from the retinal organoids were injected into the degenerated inferior retina and the experimentally undamaged superior retina using a Hamilton syringe fitted with a 33G blunt needle. A Leica Microsystems M651 surgical scope (Leica Biosystems, Wetzlar, Germany) or a Leica intraoperative surgical scope Proveo 8 (Leica Biosystems) was used to visualize the retina during injections.

Human retinal organoid differentiation

All human stem cell studies were approved by the UCSF Institutional Review Board (IRB) and Human Gamete, Embryo and Stem Cell Research (GESCR) Committees. hiPSCs knocked in with pan-expressing CAG-GFP at the safe harbor locus (Pei et al., 2015) were directed toward retinal fate via the embryoid body (EB) and three-step (3D-2D-3D) differentiation approach, as per the published protocols from our previous studies (Arthur et al., 2022; Bachu et al., 2022). Briefly, hiPSCs were lifted and cultured on a suspension plate to form EBs. After 1 week, the EBs were briefly exposed to BMP4 at varying concentrations (1.5-0.375 nM) for approximately 1 week. The EBs were then plated onto a Matrigel-coated plate to form optic vesicles in RDM. At the end of 4 weeks, the optic vesicles were manually excised and cultured in 3D-RDM with 1 µM ATRA (R2625, Sigma-Aldrich, St. Louis, MO) to generate self-assembled, laminated 3D-retinal organoids. Approximately 55- to 60-day-old organoids were treated with 10 µM PF-03084014 hydrobromide (PF) (PZ0298, Sigma-Aldrich), a small-molecule inhibitor of the Notch pathway, for 24 h. The purpose of using PF was to drive the differentiation of the remaining retinal progenitor pool within the 3D-retinal organoids toward the cone fate (Chew et al., 2022). PF medium was removed, and viable cone-rich 3D-organoids expressing the parental reporter GFP were then collected in a 15-mL conical tube filled with freshly prepared 3D-RDM containing ATRA. The tube with PF-treated organoids was then shipped from UCSF to MCW overnight with warm packs (by FedEx overnight priority delivery) for next-day cell transplantation.
Organoid dissociation and cell viability assay

On the transplant day, the retinal organoids were briefly washed with 1× PBS, centrifuged at 50 × g, and treated with 5 mL of papain solution containing Earle's Balanced Salt Solution and 500 µL of deoxyribonuclease I (LK003150, Worthington Biochemical, Lakewood, NJ) for 30-45 min with gentle agitation and a couple of intermittent triturations until complete dissociation. Postdissociation, the cells were passed through a 100-µm cell strainer (Fisher Scientific, Pittsburgh, PA). The cells were quantified using 0.4% trypan blue dye (1525006, Thermo Fisher, Waltham, MA) on a hemocytometer. Lastly, the cells were pelleted at 200 × g and resuspended in 3D-RDM with ATRA at the desired cell concentration (2.8 × 10⁴ to 3.5 × 10⁴ cells/µL) for injection.

Cell viability assay using flow cytometry

Dissociated cells (1 × 10⁶) were fixed with 2% paraformaldehyde (28908, Thermo Scientific, Waltham, MA) for 5 min and washed with 1× PBS by centrifugation at 200 × g for 5 min. Cells were then incubated in the dark with 1 µM TO-PRO-3 staining solution (T3605, Thermo Fisher) in 1 mL of 1× PBS for 15 min. Following fixation and staining, cells were washed 3 times with 1× PBS by centrifugation at 1,200 rpm for 5 min. The final cell pellet was resuspended in flow cytometry buffer (Ca²⁺/Mg²⁺-free PBS, 10010023, Thermo Fisher, with 2% fetal bovine serum, 16000044, Thermo Fisher, and 0.1% sodium azide, 71448-16, Fisher Scientific) for analysis. Unstained fixed cells were used as a control for optimizing the gating strategy. Acquisition of the cells was performed using a BD LSR-II with appropriately set parameters (642 nm excitation/661 nm emission) and analyzed using flow cytometry analysis software (FlowJo version 10.7.2). Live cells were quantified after excluding the dead cell population stained by TO-PRO-3.
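As a rough consistency check, the loading concentration above, the 40-µL injection volume from the injection section, and the mean viability from flow cytometry can be tied together. This is an illustration of our own, not a calculation the authors report; the per-microliter reading of the concentration and the viability adjustment are our assumptions.

```python
# Back-of-the-envelope check tying together the reported figures; the per-µL
# reading of the concentration and the viability adjustment are assumptions.
viability = 0.7718                  # mean viability from flow cytometry
for conc in (2.8e4, 3.5e4):         # cells per µL, as read above
    loaded = conc * 40              # 40 µL injection volume
    print(f"{loaded:.2e} cells loaded, ~{loaded * viability:.2e} viable")
# Prints ~1.1e6-1.4e6 cells loaded and ~8.6e5-1.1e6 viable, consistent with
# the reported 0.7-1 million cells delivered per eye.
```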
Retinal imaging

SLO

GFP signals from the transplanted cells in the host retina of 13-LGS were assessed by in vivo fundus imaging. The Spectralis HRA with a customized multiline system (Heidelberg Engineering, Heidelberg, Germany) was used to capture the images. The near-infrared reflectance (NIR) images were captured using an 810- to 820-nm laser source. The GFP images were captured using an excitation laser (peak = 486 nm, full width at half-maximum = 4 nm) and transmission filters (500-550 nm). The NIR and GFP images were captured by registering and averaging 30-50 and 100 frames, respectively. The sensitivity was set at 36%-40% for NIR and 100% for GFP. The animals were anesthetized as described earlier for the imaging session. The images were captured starting at the ONH and then moving toward the inferior/superior retina to visualize the GFP signals at the bleb location.

SD-OCT

Cross-sectional in vivo retinal images were captured using Bioptigen Envisu R2200 or R2310 SD-OCT systems equipped with a rabbit imaging bore (Leica Microsystems). The animals were anesthetized as described earlier for the imaging session. Baseline images of the degenerated retinas were captured pretransplantation (1 week) and posttransplantation (2, 6, 12, and 16 weeks) for each animal. The R2200 and R2310 SD-OCT systems were used interchangeably, and the images were registered to the same scale during processing. For each imaging session, vertical and horizontal volume scans were acquired using 650 A-scans/B-scan and 300 B-scans per volume. In addition, vertical and horizontal line scans (1,000 A-scans/B-scan with 100 repeats of B-scans) were acquired following each volume scan. Volume scans were used to locate the regions of interest within the retina, and line scans were used to capture repeated scans at the exact location of interest. The images were taken along the ONH for baseline imaging, and the horizontal ONH was used as a landmark for aligning baseline and follow-up images. Once the images were acquired, the volume scans were processed and segmented using in-house customized processing software, OCT Volume Viewer, to extract the precise fundus image with the location of the line scan. The line scans were then processed using ImageJ (NIH, Bethesda, MD) software. Among the 100 frames, the 25 frames with the least movement were extracted, and a reference frame was selected from among them. The reference frame was used to align the 25 frames, and the best 20 aligned frames were averaged as the final processed image.
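The frame selection, registration, and averaging step described above can be sketched as follows. This is a minimal illustration assuming rigid translation between repeated B-scans; it is not the authors' ImageJ workflow, and the motion-ranking heuristic is our own stand-in.

```python
# A minimal sketch of B-scan frame selection, alignment, and averaging,
# assuming rigid translation between frames (not the authors' software).
import numpy as np
from scipy.ndimage import shift
from skimage.registration import phase_cross_correlation

def average_bscans(frames, n_select=25, n_avg=20):
    """frames: (n_frames, H, W) array of repeated B-scans of one location."""
    # Rank frames by deviation from the stack median as a crude motion proxy.
    median = np.median(frames, axis=0)
    motion = np.array([np.abs(f - median).mean() for f in frames])
    selected = frames[np.argsort(motion)[:n_select]]
    ref = selected[0]                                # reference frame
    shifts, aligned = [], []
    for f in selected:
        dyx, _, _ = phase_cross_correlation(ref, f)  # rigid shift estimate
        shifts.append(np.hypot(*dyx))
        aligned.append(shift(f, dyx))
    best = np.argsort(shifts)[:n_avg]                # keep the stillest frames
    return np.mean([aligned[i] for i in best], axis=0)
```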
AOSLO

A high-resolution image of the cone mosaic and of the transplanted cells was captured using our previously described custom AOSLO system (Gaffney et al., 2021; Sajdak et al., 2016, 2019). The animal was anesthetized and dilated as described earlier. The transplanted regions within the retinas were imaged at 12 weeks, before endpoint euthanasia. The fundus image collected with fSLO provided a road map for the AOSLO imaging (Figures 2A-1, 2A-4, and 2A-7). The protocol collected the outer retinal layer structure starting at the ONH and across the subretinal detachment region to the transplanted cells. Confocal and nonconfocal modality images were captured simultaneously to detect the outer and inner segments of the photoreceptors. During each imaging session, a segment of the optic nerve or a blood vessel was captured as a landmark. In addition, a montage of the transplanted location was captured. The lid of the eye was held open with a speculum as needed, and lubrication drops were applied with either artificial tears (TRS-05-GCP, Gericare, Brooklyn, NY) or Systane (00065143105, Alcon). The custom AOSLO system used a 790-nm superluminescent diode (SLD) laser source for imaging and an 850-nm SLD for wavefront sensing. The measured optical power at 790 nm and 850 nm was 355 µW and 48 µW, respectively. Eye wavefront aberration was measured with a Shack-Hartmann wavefront sensor and corrected with a 7.2-mm diameter, 97-actuator ALPAO deformable mirror (ALPAO, Montbonnot-Saint-Martin, France). The system was modified to image an animal eye with a 4.5-mm system pupil diameter. The raw data were collected as 100-frame video image sequences. Images were captured with 2 or 3 fields of view. The custom image registration and processing steps were completed as previously described (Sajdak et al., 2016, 2019). The collected image sequences were first desinusoided to remove the distortion due to the sinusoidal motion of the resonant scanner, using an image of a Ronchi ruling grid of 118.1 lines/mm to estimate the distortion. The image sequences were next examined, and a reference frame was selected for registration. With the selected reference frame, the image sequences were strip-registered through the custom software to produce the processed image. Finally, the processed images were semiautomatically montaged and aligned by a custom MATLAB script with a locations file for each image. The aligned images were imported into Adobe Photoshop (Adobe, San Jose, CA), and the alignment of each image was manually checked and adjusted to create a final montage.
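The desinusoiding step described above can be illustrated with a small sketch. We assume pixels are sampled uniformly in time over a half-cycle of the resonant scanner, so the true horizontal position follows a sinusoid, and each row is resampled onto a uniform grid; in practice the mapping would come from the Ronchi-grid calibration rather than this idealized model.

```python
# A sketch of desinusoiding under an idealized sinusoidal scan model; the
# real pipeline derives the pixel mapping from a Ronchi-grid calibration.
import numpy as np

def desinusoid(frame):
    """frame: (H, W) image acquired during one half-cycle of the scanner."""
    h, w = frame.shape
    t = np.linspace(-np.pi / 2, np.pi / 2, w)   # pixels uniform in time
    x_true = (np.sin(t) + 1) / 2 * (w - 1)      # actual (sinusoidal) position
    x_out = np.arange(w)                        # desired uniform spatial grid
    return np.vstack([np.interp(x_out, x_true, row) for row in frame])
```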
13-LGS eye collection and retinal section preparations

The animals were humanely euthanized by decapitation following isoflurane anesthesia. The whole eye globe with a GFP signal at the last in vivo imaging time point was extracted from the animals. The extracted eye was initially rinsed with fresh cold 1× PBS, and a slight opening was created at the limbus of the eye with a small blade. The whole eye was next immersed in freshly prepared cold 4% paraformaldehyde (157-8, Electron Microscopy Sciences, Hatfield, PA) prepared in 0.1 M sodium phosphate buffer. The fixed eye was left at 4°C overnight and was washed with cold 0.1 M sodium phosphate buffer 3 times at 15-min intervals. Then, the anterior segment, the lens, and the vitreous humor were dissected from the eye globe. The fixed eye cups were immersed again in cold 4% paraformaldehyde for 20 min to ensure that the retinal tissues were fixed completely. The fixed eye cups were washed with 1× PBS 3 times at 5-min intervals and then passed through a series of sucrose solutions (S1888-5KG, Sigma Life Science, St. Louis, MO) from 15% to 30% (made in 1× PBS) for 1 h each or until the eye cup sank to the bottom of the tube. The eye cups were finally embedded in 1 part Tissue-Tek O.C.T. Compound (4583, Sakura Finetek, Torrance, CA) mixed with 20% sucrose solution. The embedded eye cups were sectioned at 8 µm on a cryostat (Leica CM3050S) and collected on clean Superfrost Plus slides (48311-703, VWR, Radnor, PA). Sections were stored at −80°C until use.

Immunohistochemistry

The frozen retinal sections were taken out of −80°C storage and thawed at room temperature. The thawed retinal sections were rehydrated with 1× PBS and permeabilized with 0.1% Triton X-100 (0694-1L, VWR Life Science, Solon, OH) made in 10% normal donkey serum (NDS) (S30-100ML, Sigma) for 15 min. The permeabilized tissues were then saturated with 10% NDS (made in 1× PBS) for 1 h at room temperature. The sections were probed with primary antibodies (see Table S1 for antibody details) diluted in 10% NDS overnight at 4°C. After the primary antibody incubation, the slides were washed with 1× PBS 3 times at 5-min intervals at room temperature. Following the washes, the sections were incubated with Alexa Fluor-conjugated secondary antibodies (see Table S2 for antibody details) diluted in 10% NDS for 1 h in the dark at room temperature. The secondary antibodies were removed after 1 h, and a few drops of 1 mg/mL DAPI (10236 276 001, Roche, Indianapolis, IN) made in 1× PBS were added to the sections and incubated in the dark for 10 min at room temperature. Finally, the slides were washed with 1× PBS 3 times at 5-min intervals and mounted with Fluoromount-G (17984-25, Electron Microscopy Sciences) and cover glasses (16004-350, VWR). Stained sections were imaged using an LSM 700 inverted confocal microscope (Carl Zeiss, Thornwood, NY). Captured images were processed and montaged using ImageJ software (NIH).

Figure 1. Localization of transplanted cells in ATP-damaged, retinal detachment-damaged, and undamaged 13-LGS retinas
(A and B) (A) Representative fundus images of fSLO overlaid on NIR images and their corresponding OCT B-scan images in damaged (ATP, retinal detachment) and undamaged 13-LGS retinas showing the localization of transplanted cells in the SRS and (B) in the non-SRS, including the vitreous and epiretinal surface. The black line in the fundus images of (A) and (B) corresponds to the cross-sectional location of the SD-OCT B-scan. Scale bar, 200 µm.
Figure 2. Adaptive optics (confocal and nonconfocal) images of damaged and undamaged 13-LGS retinas
(A) Representative fSLO images overlaid on AOSLO montages from ATP-damaged, retinal detachment-damaged, and undamaged 13-LGS retinas exhibiting the localization of the transplanted region (A-1, A-4, and A-7). AOSLO confocal (A-2, A-5, and A-8) and nonconfocal (A-3, A-6, and A-9) magnified images of the white boxed region on the transplanted ROI (A-1, A-4, and A-7) show the reflective and scattering patterns of the photoreceptor outer segments and inner segments from single photoreceptor cells, respectively. The ATP and retinal detachment models show a complete loss of outer segments (A-2 and A-5). The transplanted cells in all 3 models show putative inner segments (white arrowheads, A-3, A-6, and A-9).
(B) A representative fSLO image overlaid on an AOSLO montage from the retinal detachment model. The confocal and nonconfocal images captured using AOSLO at the damaged transition border (B-1 and B-2), within the damaged region (B-3 and B-4), and at the transplanted transition border (B-5 and B-6) are shown here. The dotted white line marks the boundary separating the nondetached and detached retinal regions (B-1 and B-2). The images show the intact outer and inner segments above the white dashed line in the undamaged retinal region versus the loss of the cone mosaic below the white dashed line in the detached retinal region (B-1 and B-2). Complete degeneration of the cone mosaic, with hexagonal packing of the underlying RPE, is shown in the damaged images (B-3 and B-4). The confocal and nonconfocal images in the transplanted transition region show reflective patterns instead of outer segment structure, and putative inner segments, in the transplanted region (below the white dashed line), respectively (B-5 and B-6). Scale bar, 50 µm.

Figure 3. Ex vivo analysis of transplanted cells in the ATP-induced damaged retina
(A) Representative images of the transplanted regions showing colocalization of surviving and integrated GFP+ cells with the human nuclear marker HuNu (red). The human marker-positive cells were observed to be scattered across the various retinal layers.
(B and C) Cells in the outer retina colabeled with the photoreceptor markers OTX2 (B, white) and recoverin (C, white).
(D) Representative image showing HuNu+ (red) and GFP+ cells in the inner retinal layers colocalizing with the amacrine/ganglion cell marker HuC/D (white).
(E) Representative image showing expression of the synaptic marker synaptophysin (red), using a human-specific antibody, in GFP+ cells in the outer retina.
Insets for each subpanel show a magnified view, with arrowheads highlighting the coexpression. The arrows in (B) highlight GFP+ processes extending into the plexiform layer. DAPI (blue) marks nuclei. INL, inner nuclear layer; GCL, ganglion cell layer. Scale bar, 20 µm.
Figure 4. Ex vivo analysis of transplanted cells in the retinal detachment-damaged retina
(A) Representative images of the transplanted regions showing colocalization of surviving and integrated GFP+ cells with the human nuclear marker HuNu (red). The human marker-positive cells were observed to be scattered across the various retinal layers.
(B and C) Cells in the outer retina colabeled with the photoreceptor markers OTX2 (B, white) and recoverin (C, white).
(D) Representative image showing HuNu+ (red) and GFP+ cells in the inner retinal layers colocalizing with the amacrine/ganglion cell marker HuC/D (white).
(E) Representative image showing expression of the synaptic marker synaptophysin (red), using a human-specific antibody, in GFP+ cells in the outer retina.
Insets for each subpanel show a magnified view, with arrowheads highlighting the coexpression. Scale bar, 20 µm.
2024-02-11T06:18:56.130Z
2024-01-30T00:00:00.000
{ "year": 2024, "sha1": "87c962a229cb16a90d26dafc29d4d383aa3f0542", "oa_license": "CCBY", "oa_url": "http://www.cell.com/article/S2213671124000080/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "b35dfb6e048d21b3f074a2b07183eeb0874a2812", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
4662639
pes2o/s2orc
v3-fos-license
Measuring Hospital Performance Using Mortality Rates: An Alternative to the RAMR

Background: The risk-adjusted mortality rate (RAMR) is used widely by healthcare agencies to evaluate hospital performance. The RAMR is insensitive to case volume and requires a confidence interval for proper interpretation, which results in a hypothesis testing framework. Unfamiliarity with hypothesis testing can lead to erroneous interpretations by the public and other stakeholders. We argue that screening, rather than hypothesis testing, is more defensible. We propose an alternative to the RAMR that is based on sound statistical methodology, is easier to understand, and can be used in large-scale screening with no additional data requirements.
Methods: We use an upper-tail probability to screen for hospitals performing poorly and a lower-tail probability to screen for hospitals performing well. Confidence intervals and hypothesis tests are not needed to compute or interpret our measures. Moreover, unlike the RAMR, our measures are sensitive to the number of cases treated.
Results: To demonstrate our proposed methodology, we obtained data from the New York State Department of Health for 10 Inpatient Quality Indicators (IQIs) for the years 2009-2013. We find strong agreement between the upper tail probability (UTP) and the RAMR, supporting our contention that the UTP is a viable alternative to the RAMR.
Conclusion: We show that our method is simpler to implement than the RAMR and, with no need for a confidence interval, easier to interpret. Moreover, it will be available for all hospitals and all diseases/conditions regardless of patient volume.

Implications for policy makers
Policy-makers can benefit from the results of our study in the following ways:
• We frame the evaluation of hospitals in terms of screening rather than hypothesis testing.
• Unlike the risk-adjusted mortality rate (RAMR), our method can be applied to all situations, regardless of the number of cases, thereby providing a more comprehensive evaluation of each hospital.
• Our alternative measure has a clear and practical interpretation.

Implications for the public
In choosing a hospital for treatment of a serious disease or condition, you will want to consider many factors, perhaps the most important being your likelihood of surviving the hospitalization. You may find the risk-adjusted mortality rate (RAMR) difficult to interpret, since that requires a benchmark value for comparison and an understanding of confidence intervals. Our proposed method will provide you with a measure that:
• has a clear and practical interpretation,
• does not rely on a benchmark or an understanding of confidence intervals, and
• is available regardless of the number of cases treated by the hospital last year.
Therefore, you will be sure to have information on all hospitals under consideration, and that information will be readily understood without confusion.

Background
Healthcare costs in the United States rose 5.3% in 2014 to $3.0 trillion, or over $9,500 per capita, reaching 17.5% of gross domestic product (GDP),1 which has created recurring and urgent demands to reduce healthcare costs. Hospitals may look for ways to reduce cost by reducing the number of treatments, using less expensive interventions, or reducing length of stay, thereby risking quality degradation. Thus, the need to monitor provider performance has perhaps never been greater. Comparing hospitals based on their outcome measures is a relatively common practice.
Healthcare agencies often use hospital mortality rates within specific disease or condition categories to measure performance. Often state health departments and the Center for Medicare and Medicaid2 release these reports to the public. The Leapfrog Group3 and the Agency for Healthcare Research and Quality (AHRQ)4 contribute to these reports. The public as well as healthcare agencies look to these reports, which may be difficult to interpret, to find quality healthcare providers.

A proper comparison of mortality rates must include an adjustment for risk variation that "levels the playing field" so that a hospital's evaluation is independent of the risk profile of its patients. Risk factors commonly include age, comorbidities, illness severity, and other patient and case characteristics. The most commonly used adjustment method is the risk-adjusted mortality rate (RAMR), defined later. Hibbard et al5 found that hospitals whose RAMRs were made public were more likely to engage in quality improvement efforts than those who had a confidential quality report or no quality report at all. Baker6 showed that the 30-day mortality rate in New York State (NYS) after coronary artery bypass graft declined 41% between 1989 and 1992 after public reporting, and that published mortality rates impacted referral rates among cardiologists in NYS, thereby affecting provider market share and patient choice of healthcare provider. Schneider and Lieberman7 report that, when choosing health plans, employers use these "report cards" to identify high-quality health plans at reasonable costs. Similarly, employees may select plans, doctors, and hospitals using report card information about providers' quality scores, prices, and accessibility. Moreover, a poorly rated provider may cease operations, leaving the local area with a diminished supply of a critical healthcare service.

However, previous studies have questioned the use of the RAMR for this adjustment. Iezzoni8 shows that different methods can provide varying judgments. Thomas and Hofer9 questioned the accuracy of the RAMR as a valid indicator of a hospital's quality performance, and conclude that "reports that measure quality using RAMRs misinform the public about hospital performance." Dimick et al10 show that the RAMR may not be appropriate for all procedures, particularly when the sample size is small. Scott11 discusses the statistical difficulties associated with a rare outcome (death), the omission or difficulty of proper measurement of patient-related prognostic factors, and the weak and inconsistent correlation between hospital-wide RAMR and explicit quality indicators. He concludes the RAMR is a poor indicator of unsafe hospital care.

There are a number of ways a hospital's performance is measured. Racz and Sedransk12 modeled risk-adjusted assessments utilizing Bayesian and frequentist indirect standardization methods, which they compared to the RAMR for "provider profiling." These methodologies produced very similar results, although they found markedly fewer outlying hospitals when the random-effects assumption was applied to the hospitals. The Leapfrog Group3 provides a hospital safety score, essentially a grading system that shows how safe a hospital is for patients. The purpose of the score is to provide a mechanism by which consumers can educate themselves about the safety of a facility they may be considering. The scores are publicly reported and are available online.
In addition, the Joint Commission's ORYX program13 integrates performance measurement into their accreditation and quality improvement processes. The Center for Medicare and Medicaid Services (CMS)2 publicly reports process performance measures based on data collected by the Hospital Quality Alliance (HQA).14

In this paper, we will demonstrate that the RAMR is intrinsically a highly flawed performance measure and that, moreover, it is applied improperly. We will then propose an alternative to the RAMR, along with statistically appropriate implementation methods, that avoids all the problems listed above without requiring additional data. Our alternative measure has a clear and practical interpretation, is based on standard statistical theory and methods and, unlike the RAMR, can be applied regardless of the number of cases.

Methods

We demonstrate our methodology using data from the NYS Department of Health (NYS DOH).15 We chose to test our model using NYS data for several reasons. NYS was among the first states to use the RAMR, applying it to cardiac surgeons and the hospitals in which they performed cardiac surgery starting in 1989. Kasprak16 cites the NYS DOH program as "the first program in the country to produce public data on outcomes for cardiac surgery and is the nation's longest running program of its kind." During the intervening 28 years, the NYS DOH has dramatically expanded its use of the RAMR to 24 inpatient quality indicators (IQIs). It also applies the methodology to complications and readmissions. NYS DOH makes its raw data readily available to the public through its web site.15 We believe that, if we can effect improvements in NYS, then other states are likely to follow.

The Risk-Adjusted Mortality Rate

The RAMR attempts to account for the differing risk profiles of patients. It is standard practice to use a logistic regression model for a given procedure or illness to estimate each patient's probability of death based on their patient and case characteristics. The mean and standard deviation of the number of deaths, and therefore the expected mortality rate (EMR) for a procedure or illness at a given hospital, are then estimated from these probabilities. The ratio of the observed mortality rate (OMR), which equals the number of deaths divided by the number of cases, to the EMR is then multiplied by the statewide observed mortality rate (SOMR) or national mortality rate (NMR) to obtain the RAMR.

Suppose that a hospital treats n patients with a particular diagnosis and that X of these patients die. Then the OMR = X/n. Define a variable called Death, which equals 1 if the patient died, or 0 if the patient survived. Next, consider a logistic regression model that has Death as its dependent variable and a set of established mortality risk factors for that diagnosis as its independent variables. Suppose that this model estimates probabilities of death of p_1, p_2, …, p_n for the n patients. Then the hospital's EMR = (1/n) Σ_{i=1}^{n} p_i, and its RAMR = (OMR/EMR) × SOMR. Thus, RAMR < SOMR (alternatively, RAMR > SOMR) indicates better-than-expected (or worse-than-expected) performance.

According to Marang-van de Mheen and Shojania,17 the hospital standardized mortality ratio (HSMR) is not a sufficient measure to inform patients and policy-makers as to whether the mortality risk is higher in one hospital compared to another. In addition, the HSMR may be affected by Simpson's paradox.
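A minimal sketch of the RAMR computation just defined, assuming the per-patient death probabilities from a fitted logistic regression are already in hand; for illustration we flatten the individual probabilities to their mean (which leaves the EMR unchanged) and use the Ellenville figures worked in the next section.

```python
# Sketch of the RAMR formula above; inputs are illustrative, with `p` standing
# in for the probabilities a fitted logistic regression would supply.
import numpy as np

def ramr(deaths, p, somr):
    """RAMR = (OMR / EMR) * SOMR, with p the predicted death probabilities."""
    omr = deaths / len(p)           # observed mortality rate, X / n
    emr = np.mean(p)                # expected mortality rate, (1/n) * sum(p_i)
    return (omr / emr) * somr

p = np.full(42, 0.0206)             # 42 patients, EMR of 2.06% (flattened)
print(ramr(2, p, 0.046))            # ~0.106, i.e., a RAMR of about 10.6%
```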
Although this may not cause direct harm to a patient, it may prompt hospitals that do not have quality problems to search for them, while hospitals that actually do have problems fail to address them.

Shortcomings of the Risk-Adjusted Mortality Rate and its Application

We identify five shortcomings of the RAMR:

The RAMR is a poor indicator of hospital performance: We demonstrate that the RAMR needs to be augmented by additional information before a proper interpretation of its value is possible. Consider Ellenville Regional Hospital, located in New York State's Hudson Valley. In 2013, they treated 42 patients with pneumonia, two of whom died. Therefore, their OMR was 2/42 = 4.76%. Based on the patient and case characteristics of their 42 patients, their EMR was 2.06%. Thus, their ratio OMR/EMR = 2.31; their OMR was 2.31 times their EMR. Given that the SOMR for pneumonia in 2013 in NYS was 4.60%, the RAMR for Ellenville Regional Hospital for pneumonia in 2013 was 2.31 × 4.60% = 10.65%. Without further analysis, it is easy to interpret this hospital's performance in pneumonia in 2013 as very poor. Based on its RAMR, if this hospital had treated every pneumonia patient in NYS in 2013, then 10.65% of them would have died, not 4.60%; there would have been 2.31 times as many pneumonia deaths. The NYS DOH15 computes a 95% CI for every RAMR, and in this case the interval is (1.29%, 38.45%). This is a very wide confidence interval, primarily because the sample size is small. Also, since the CI for the RAMR contains the SOMR (4.60%), the NYS DOH reports that the difference between Ellenville Regional Hospital's RAMR and the SOMR was not statistically significant. Given that this information is made public, and given the public's general lack of understanding of the proper interpretation of confidence intervals and statistical significance, Ellenville Regional Hospital runs the very real risk that people, including the media, will focus only on the RAMR, since it is the performance measure that the State uses. The mysterious confidence interval and the reference to "not statistically significant" are likely to be ignored.

The interpretation of the RAMR is obscure: The NYS DOH web site that provides the data to the public includes the formula given above for computing the RAMR, but we could find no explanation regarding its interpretation. One interpretation is that, if every patient in NYS with the given condition had been treated at the given hospital, then the RAMR represents the estimated proportion that would have died. We seriously doubt that many people would have come to this interpretation without guidance. Moreover, even if they had, it is unlikely that they would have recognized the amount of sampling error in the RAMR, especially when the sample size is small. We can only wonder how many pneumonia (and other) patients might have boycotted Ellenville Regional Hospital because of its high RAMR, even though it was not statistically significantly different from the statewide rate.

The normal distribution is not statistically justified in many cases: The computations performed for NYS DOH by The Leapfrog Group3 analyze only situations in which the number of cases equals 30 or more. They state, "This minimum reporting requirement was identified from the literature, which suggests that 30 cases is generally the point at which a non-normal distribution begins to approximate a normal distribution, which is important given the Safety Score's use of z-scores for standardizing data across disparate data sets."3
This is a misuse of the normal approximation. The rule cited refers to using the normal approximation to the sampling distribution of the mean. A mortality rate is a proportion, not a mean. The commonly used rule for applying the normal approximation to the sampling distribution of a proportion is that both nπ ≥ 10 and n(1−π) ≥ 10, where π is the hospital's EMR. For pneumonia in 2013, these requirements were not met in 110 of the 181 hospitals (60.8%) with 30 or more cases.

CIs are often misunderstood: The importance of avoiding confidence intervals should not be minimized. Several authors, including Hoekstra et al,18 Belia et al,19 Gigerenzer,20 and Lecoutre et al,21 have demonstrated the widespread inability of undergraduate and master's degree students, researchers, and even statisticians to properly interpret confidence intervals and the associated null hypothesis significance tests. More relevant to the current application, Wulff et al22 showed the difficulty that physicians have in properly interpreting a wide range of statistical results, and Scheutz et al23 replicated those results for dentists. Given that professionals who use statistical inference regularly have problems with the proper interpretation of confidence intervals, we question the wisdom of using confidence intervals in a context in which the public is expected to use them in making critical healthcare decisions. Clearly, the difficulty in the interpretation of statistical methods makes the interpretation of the quality reports confusing. In addition, Porter et al24 state that "sensitivity to physicians' concerns about being judged unfairly results in a tendency to exclude patients from outcomes comparisons instead of incorporating accepted risk-adjustment methods."

There is no adjustment for multiple comparisons: NYS constructs confidence intervals for each RAMR and uses them to test the null hypothesis that the hospital's performance does not differ from the statewide performance. We believe that the state should instead view this exercise as a screening process, not as a hypothesis testing process. There are two reasons why we advocate screening rather than hypothesis testing. First, the state examines hundreds, if not thousands, of such analyses each year (many conditions across nearly 200 hospitals). This means that the state would need to use a multiple comparisons procedure, such as Bonferroni25 or Benjamini and Hochberg,26 which would be unlikely to serve the desired purposes in these circumstances. Second, there is no a priori theoretical reason to believe that any given hospital is either better or worse than average.

Proposed Alternative to the Risk-Adjusted Mortality Rate

Our methodology uses two measures: the upper tail probability (UTP) to screen for hospitals performing poorly and the lower tail probability (LTP) to screen for hospitals performing well. Let n be the number of patients treated, d be the number of observed deaths, and π be the EMR. Then the UTP = P(X ≥ d | n, π) and the LTP = P(X ≤ d | n, π). It should be noted that both the UTP and the LTP include the number of deaths, d, so they do not sum to one. The UTP computes the probability that the hospital would have had as many deaths as it did, or more, given its number of cases and its EMR. A small UTP indicates that the hospital's number of deaths is unusually high. The LTP computes the probability that the hospital would have had as few deaths as it did, or fewer, given its number of cases and its EMR.
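In code, the two measures reduce to one-line binomial tail calls; a sketch assuming SciPy is available, checked against the Ellenville example worked below.

```python
# Sketch of the UTP/LTP definitions above, assuming SciPy.
from scipy.stats import binom

def utp(d, n, pi):
    """UTP = P(X >= d | n, pi); sf(d - 1) = P(X > d - 1) = P(X >= d)."""
    return binom.sf(d - 1, n, pi)

def ltp(d, n, pi):
    """LTP = P(X <= d | n, pi)."""
    return binom.cdf(d, n, pi)

print(utp(2, 42, 0.0206))   # ~0.214, the Ellenville pneumonia example
print(ltp(2, 42, 0.0206))   # ~0.944; as noted, UTP and LTP do not sum to one
```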
A small LTP indicates that the hospital's number of deaths is unusually low. Therefore, with either the UTP or the LTP, a small value represents either unusually poor performance or unusually good performance. Hamm27 drew on the work of four previous studies to construct the list of verbal interpretations of probabilities shown in Table 1. For screening purposes, a UTP or an LTP less than 5% or 10% might be considered "very unlikely" or "rare" and therefore subject to further investigation. The choice, of course, depends on the decision-maker.

We approximate the UTP and the LTP using the binomial distribution function. We recognize that the use of the binomial distribution in this application technically requires that each patient in each hospital with a given condition have the same probability of death, which is not the case, since patients are known to have different patient and case characteristics. The calculation of the exact values of the UTP and the LTP would require the individual patient probabilities of death, which are not publicly available. If they were, the resulting probability distribution would be the so-called Poisson binomial distribution. Under the Poisson binomial distribution, the expected number of deaths would equal the expected number under the traditional binomial distribution, but the variance of the number of deaths would be smaller. Thus, the tail areas computed using the binomial are larger than those computed using the Poisson binomial (see Hoeffding28 and Boland29). Thus, the UTP and LTP values reported herein are conservatively large, in the sense that any UTP or LTP that we report is greater than the value computed using the individual patient probabilities of death.

To illustrate the appropriateness of using the binomial distribution as an approximation when computing the UTP and LTP, we conducted a simple simulation with 100 patients. Suppose that the probabilities of death of the 100 patients, p_1, p_2, …, p_100, are uniformly distributed between 0.01 and 0.09. We simulated the 100 probabilities of death and then simulated whether each patient lived or died to obtain the simulated total number of deaths. We repeated this process 1000 times to obtain the estimated probability distribution of the total number of deaths. From this distribution, we computed the UTP for each possible number of deaths; we call these the actual UTPs. Finally, we computed the estimated UTPs for each possible number of deaths using the binomial distribution with π = 0.05, the mean of the uniform distribution between 0.01 and 0.09. Table 2 shows the actual and estimated UTPs. Figure 1 shows the relationship between these values. Clearly, the binomial distribution provides a very close approximation. The largest positive difference between the actual UTP and the estimated UTP is 0.974 − 0.963 = 0.011, which occurs at 2 deaths, while the largest negative difference is 0.370 − 0.384 = −0.014, which occurs at 6 deaths. There is no apparent pattern in the differences between the actual and estimated UTPs.

Recall the case of Ellenville Regional Hospital, which, in 2013, had 2 deaths among 42 pneumonia patients whose EMR was 0.0206. Its RAMR was 10.65%, or 2.31 times the SOMR. Recall also that it was only after considering the confidence interval that we could declare that its RAMR was not statistically different from the SOMR. The UTP in this situation is the binomial probability P(X ≥ 2 deaths | n = 42, π = 0.0206), which equals 0.214.
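The simulation above is easy to reproduce. The following is our own reimplementation sketch (with an arbitrary seed), comparing the empirical UTPs against the binomial approximation at π = 0.05; exact values will differ slightly from Table 2 run to run.

```python
# Reimplementation sketch of the Poisson-binomial-vs-binomial simulation;
# the seed is arbitrary, so results will differ slightly from Table 2.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)
n_patients, n_reps = 100, 1000

deaths = np.empty(n_reps, dtype=int)
for r in range(n_reps):
    p = rng.uniform(0.01, 0.09, n_patients)      # heterogeneous risks
    deaths[r] = (rng.random(n_patients) < p).sum()

for d in range(13):
    actual = np.mean(deaths >= d)                # empirical ("actual") UTP
    approx = binom.sf(d - 1, n_patients, 0.05)   # binomial approximation
    print(d, round(actual, 3), round(approx, 3))
```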
Thus, there is a 21.4% chance that Ellenville Regional Hospital would have experienced 2 or more pneumonia deaths, given that the EMR of its 42 pneumonia patients was 2.06%. This is between "Not very probable" and "Fairly unlikely" on Hamm's scale. It is clear that Ellenville Regional Hospital's performance in pneumonia in 2013 did not approach the level of requiring greater scrutiny, even though its RAMR was 2.31 times the SOMR. All that needs to be reported is that its UTP is 21.4%, and its interpretation is clear: if Ellenville Regional Hospital treated 42 pneumonia patients with the same EMR every year, then it would experience two or more deaths in 21.4% of the years. There is no need for a confidence interval.

Next consider an imaginary Hospital A that treated 420 pneumonia cases and experienced 20 deaths. Suppose also that its EMR was 2.06%. Then its OMR would equal 20/420 = 4.76%, and its RAMR would be 10.65%, all equal to Ellenville's values. However, its UTP would be P(X ≥ 20 deaths | n = 420, π = 0.0206) = 0.00057. This is two orders of magnitude below "Rarely" and is very close to "Absolutely impossible." How often does an event with probability 0.00057 occur? The geometric distribution with p = 0.00057 has an expected value of 1/0.00057 = 1754, meaning that, if Hospital A treated 420 pneumonia patients with the same EMR every year, then it would experience 20 or more deaths on average once every 1754 years. The observed number of deaths at Hospital A should therefore be considered highly unusual, while Ellenville's is not unusual at all. This demonstrates that the RAMR is insensitive to case volume, which is incorporated naturally into the UTP in a statistically sound manner.

Note that the normal approximation is not needed to compute the UTP and the LTP. Therefore, we can analyze hospitals regardless of the number of cases they treated and their EMRs. Thus, by switching to the UTP/LTP methodology, no such methodological restrictions exist; NYS could assess situations with any number of cases.

Results

We demonstrate our methodology using data from the NYS DOH web site.15 The database contains the number of cases treated, the number of deaths, and the EMRs for each of 10 IQIs for the years 2009-2013. The risk adjustments for the IQIs studied are explained in the AHRQ quality indicators, IQI parameter estimates.24 NYS reports these data only for hospitals that had 30 or more cases in the given IQI in the given year.

Comparison of the Upper Tail Probability and the Risk-Adjusted Mortality Rate

We computed the 5-year overall RAMR and UTP across all IQIs for 196 hospitals. The Spearman rank correlation is −0.8559 (P < .00005), demonstrating strong agreement between the two measures. This supports our contention that the UTP is a viable alternative to the RAMR.

The RAMR is Not Currently Applied When the Number of Cases is Less Than 30

For any situation with fewer than 30 cases, NYS does not report the number of cases, the number of deaths, or the EMR. However, the State does report the statewide totals for cases and deaths, which allows us to compute the numbers of cases and deaths for situations with fewer than 30 cases taken collectively. In Table 3, we show that the unadjusted odds ratios of mortality for patients treated in hospitals that perform fewer than 30 cases per year range between 1.37 and 3.71 relative to patients treated in hospitals that perform 30 or more cases per year.
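The Hospital A arithmetic from the previous section can be verified directly; the recurrence interpretation uses the mean 1/p of a geometric distribution.

```python
# Verifying the Hospital A example above, assuming SciPy.
from scipy.stats import binom

utp_a = binom.sf(19, 420, 0.0206)   # P(X >= 20 | n = 420, pi = 0.0206)
print(utp_a)                        # ~5.7e-4, the 0.00057 quoted above
print(1 / utp_a)                    # ~1750: mean years between such outcomes
                                    # under annual repetition (geometric mean)
```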
It is possible that the adjustment for EMR, for which the data are unavailable, would explain this difference. This might happen if those situations involving fewer than 30 cases also had higher EMRs relative to patients treated in situations involving 30 or more cases. To check this possibility, we fit a linear regression model using EMR (computed for each hospital across all IQIs and all years) as the dependent variable and the number of cases (over all IQIs and years) as the independent variable. The resulting model, with n = 196 hospitals, yields a highly statistically significant positive slope (P < .00005), suggesting that hospitals with fewer than 30 cases in an IQI are unlikely to have patients that are, on average, at higher risk of dying. While it is true that only 2.2% of the cases in the 10 IQIs were …

Rather than classify a hospital's performance as average, below average, or above average, which would imply a hypothesis testing framework, we propose using the UTP as a screening measure to identify situations in which there are likely to be opportunities for improvement. For example, Table 4 shows the performance of Hospital M. While Hospital M did very well overall and in three of the nine IQIs, its UTP in one IQI (heart failure mortality rate) is small (0.049). It is instructive to examine Hospital M's UTP for the same IQI over the four previous years, as shown in Figure 2. It is clear that Hospital M's mortality rate performance with respect to heart failure has been low for at least 5 years.

Discussion

In this paper, we make two essential points. First, measuring hospital performance is essential for quality improvement, but hypothesis testing is not the appropriate approach. Rather, agencies should take a screening approach that seeks to identify both where attention may be needed to improve performance and where superior performance is occurring that might be emulated elsewhere. Second, the current RAMR methodology should be replaced. Our proposed alternative method, using the UTP and the LTP, should be considered as a replacement. (Abbreviations: AAA, abdominal aortic aneurysm; AMI, acute myocardial infarction; IQI, inpatient quality indicator; OMR, observed mortality rate; EMR, expected mortality rate; UTP, upper tail probability; LTP, lower tail probability.)

Briefly, we have shown that:
1. The RAMR does not provide the information necessary to determine whether a hospital's performance is especially bad or especially good, in part because it is insensitive to sample size;
2. The interpretation of the RAMR is obscure to many people;
3. The use of the normal distribution to construct a confidence interval for the RAMR is not statistically justified in many cases;
4. While a proper confidence interval for the RAMR can be constructed without reference to the normal distribution, there is considerable evidence that large portions of the population, including physicians, other healthcare professionals, and the public, do not have a sufficiently fundamental understanding of confidence intervals to use them as a basis for healthcare decision-making; and
5. The RAMR and its confidence interval are portrayed as if a two-tailed hypothesis test were being performed, without any attempt to adjust for the multiple comparisons that are being made.

The current implementation of the RAMR in NYS does not examine situations in which a hospital has treated fewer than 30 cases in a given IQI.
We have found that such situations may account for as much as half of all such cases statewide. This failure is particularly troubling in light of the suggestive evidence that higher-volume providers tend to perform better. This shortcoming disappears when using the UTP/LTP approach. For the purposes of our study, we were unable to apply our methodology to situations with fewer than 30 cases, since NYS DOH does not publish the necessary data. However, there is no reason to suspect that there would be any problems computing the UTP, since the binomial distribution is readily available for any number of trials.

Our planned future research involves comparing the UTP/LTP results with the quality report cards prepared for each hospital. To the degree that they agree, we will know that the UTP/LTP and the report cards are sensing similar phenomena. To the extent that they disagree, we will have an opportunity to learn more about the hospital's quality performance.

In summary, our proposed method is simpler to implement than the RAMR (no confidence interval is needed) and it is easy to interpret. It is statistically sound and is applicable to all situations regardless of the number of cases treated, therefore providing a more comprehensive assessment of performance.

Ethical issues

Not applicable.
Copurification of actin and desmin from chicken smooth muscle and their copolymerization in vitro to intermediate filaments

Desmin is a 50,000-mol wt protein that is enriched along with 100-Å filaments in chicken gizzard that has been extracted with 1 M KI. Although 1 M KI removes most of the actin from gizzard, a small fraction of this protein remains persistently insoluble, along with desmin. The solubility properties of this actin are the same as for desmin: they are both insoluble in high salt concentrations, but are solubilized at low pH or by agents that dissociate hydrophobic bonds. Desmin may be purified by repeated cycles of solubilization by 1 M acetic acid and subsequent precipitation by neutralization to pH 4. During this process, a constant nonstoichiometric ratio of actin to desmin is attained. Gel filtration on Ultrogel AcA34 in the presence of 0.5% Sarkosyl NL-97 reveals nonmonomeric fractions of actin and desmin that comigrate through the column. Gel filtration on Bio-Gel P300 in the presence of 1 M acetic acid reveals that the majority of desmin is monomeric under these conditions. A small fraction of desmin and all of the actin elute with the excluded volume. When the acetic acid is removed from actin-desmin solutions by dialysis, a gel forms that is composed of filaments with diameters of 120-140 Å. These filaments react uniformly with both anti-actin and anti-desmin antiserum. These results suggest that desmin is the major subunit of the muscle 100-Å filaments and that it may form nonstoichiometric complexes with actin.

Biochemical characterization of the subunit of intermediate filaments has recently become possible with the demonstration that the extraction of smooth-muscle actomyosin at high ionic strength leaves an insoluble residue which is enriched in 100-Å filaments (6). There are two predominant proteins in this residue: actin and a 50-55,000-mol wt protein (22). This latter protein has been characterized as the major subunit of muscle intermediate filaments by several researchers (5, 22, 33). We have isolated it from chicken gizzard as a 50,000-mol wt protein which we call desmin (22).

In smooth muscle cells, one of the most characteristic morphological features of the intermediate-sized filaments is their insertion into cytoplasmic and membrane-bound dense bodies and their intimate associations with actin filaments at these sites (1, 5, 39). In skeletal muscle, immunofluorescence reveals that desmin is localized at the Z lines and where the Z lines come into apposition with the plasma membrane. Desmin is also found at the Z lines and intercalated disks of cardiac muscle (22). These are all sites where actin structures are linked either together or to membranes. From these distributions, we concluded that desmin forms a network in muscle cells which interlinks individual myofibrils, at their Z disks, into a single integrated mechanical unit and also functions in the linkage of this unit to the plasma membrane.

In this paper, we present evidence that actin and desmin copurify from extracts of chicken gizzard and copolymerize into 100-Å-like filaments. These results suggest that desmin and actin may form a stable complex, and provide an insight into the molecular basis of how desmin may function to link actin filaments in muscle cells.

Analytical Methods

Protein concentration was determined relative to a bovine serum albumin (BSA) standard by an elevated-temperature modification of the Lowry method (29). All buffer pH values were determined at 20°C.
Electrophoretic recipes are all wt/vol.

Dialysis

Dialysis tubing was prepared by simmering it in 0.1 M NaOH and 10 mM EDTA for 8 h. The tubing was then neutralized with Tris-HCl, pH 6.8, extensively rinsed with water, and stored in water at 5°C.

One-Dimensional SDS Slab Gel Electrophoresis (SDS-PAGE)

One-dimensional electrophoretic analysis of proteins was performed on high-resolution SDS-polyacrylamide slab gels (SDS-PAGE) by a modification of the discontinuous Tris-glycine buffer system (18). The stacking gel contained: 5% acrylamide, 0.13% N,N'-methylene bisacrylamide, 0.125 M Tris-HCl, pH 6.8, and 0.1% SDS. The quantities of acrylamide and of bisacrylamide in the analytical (lower) gel were provided by a hyperbolic relationship: % acrylamide × % bisacrylamide = 1.3. Gels containing 12.5% acrylamide were used most often because of their high resolution in the molecular-weight range of actin and desmin. A 12.5% analytical gel contained: 12.5% acrylamide, 0.107% bisacrylamide, 0.386 M Tris-HCl, pH 8.7, and 0.1% SDS. Polymerization was catalyzed by the addition of 100 µl of 10% ammonium persulfate and 10 µl of N,N,N',N'-tetramethylethylenediamine per 30 ml of gel solution. The same running buffer was used in both the upper and lower reservoirs: 0.025 M Tris base, 0.112 M glycine, and 0.1% SDS, final pH 8.5. Sample buffer (2×) contained 0.1 M dithiothreitol, 0.08 M Tris-HCl, pH 6.8, 10% glycerol, 2% SDS, and bromophenol blue. After electrophoresis, gels were stained overnight in 50% ethanol, 10% acetic acid, and 0.05% Coomassie Brilliant Blue R-250. Gels were destained by three changes of 10% ethanol, 5% acetic acid, and photographed over a light box with Polaroid PN-55 film (Polaroid Corp., Cambridge, Mass.), using an orange-colored filter to enhance contrast. In the figures, most of the well-stained, compact bands represent 2-4 µg of protein, while faint bands may represent <0.1 µg. In our experience, SDS-electrophoresis systems other than discontinuous Tris-buffered slab PAGE do not adequately resolve or visualize significant minor components in desmin preparations.

Two-Dimensional Isoelectric-Focusing SDS-Gel Electrophoresis (IEF/SDS-PAGE)

Two-dimensional electrophoresis was carried out according to the system of O'Farrell (24). The first dimension (isoelectric focusing) was prepared and pre-run as described (15), but with the following modifications: the gels (2.5 × 120 mm) contained 0.2% Ampholines, pH range 3.5-10; 0.8% Ampholines, pH range 4-5; and 2% Ampholines, pH range 5-7 (each supplied as a 40% solution). Ampholines were not added to overlay solutions nor to lysis buffers. Samples (see below) were loaded, overlaid directly with 0.02 M NaOH, and run at 450 V for 16 h and then at 800 V for 1 h. Samples of native proteins were dissolved in 8 M urea at room temperature for 1 h. They were then made 1% in Nonidet NP-40 and 0.5% in 2-mercaptoethanol. The samples were subsequently isoelectric focused as described above, but without being heated.

Microscopy

Unless otherwise specified in the text, all desmin gels for microscopy were induced from cycle-2 acetic acid extracts (see below). Samples for electron microscopy were placed on carbon-coated 400-mesh copper grids. Because many of the samples were bulky, insoluble gels, it was found helpful to place a drop of water on the grid and then wipe the gel across it. These preparations were stained for 2 min with a 2% aqueous solution of uranyl acetate, excess stain was drawn off with a filter paper, and the grids were then air dried.
The specimens were observed in a Philips EM 201 electron microscope operated at 80 kV, and photographed with a 35-mm camera. Final magnifications were determined from the calculated values for the 35-mm camera. Samples of desmin gels for fluorescence microscopy were placed on glass coverslips and spread by gentle flattening against a slide. This causes some of the gel to adhere to the coverslip in a thin layer. The coverslips were then dehydrated for 10 min in 95% ethanol, rehydrated in calcium- and magnesium-free phosphate-buffered saline at pH 7.4 (PBS), and stained for indirect immunofluorescence as described (19). Antibodies against actin (19) and desmin (22) were the same as those described. They were prepared against proteins purified to apparent homogeneity from smooth muscle (chicken gizzard). The coverslips were mounted on slides with a drop of Elvanol as mounting medium. Photomicroscopy was performed on a Leitz microscope equipped with a fluorescence epi-illumination system and Leitz FITC filter module H. Samples were photographed through a ×100 oil immersion phase objective. Plus-X film was exposed at DIN 28 and developed in Diafine. Magnification was determined by photographing the lines ruled on a hemacytometer.

Enrichment of Gizzard Preparations for Desmin (see Scheme I)

Desmin was extracted from chicken smooth muscle by modifications of previous methods (6, 22, 33). All procedures were performed at 5°C. The details are presented in Scheme I. During the extractions, gelatinous masses formed and were discarded. They were not analyzed. The final KI-insoluble residue (KI-residue) was washed with water to reduce the KI concentration to below 1 mM and stored as a thick slurry in the presence of 10 mM NaN3. KI-residue stored as an actual pellet tended to solidify with time. Freezing (at -20°C) hastened this process. Water-washed KCl- or KI-residue pellets were made into acetone powders (KCl-AP, KI-AP) by suspending them in an equal volume of water to prevent clumping. This suspension was then mixed with 3-5 vol of cold acetone, stirred for 1 h, and spun out. This was repeated with more acetone until the final suspension contained <5% of the original water. The final acetone-insoluble residue was air dried overnight and stored at -20°C.

Purification of Desmin from KCl- and KI-Residues (see Scheme II)

Desmin was extracted from KCl- or KI-residues with acetic acid at low temperatures, because this resulted in the fewest artifactual charge modifications observable by IEF/SDS-PAGE (see Results). The details of this procedure and the notations used for the various desmin extracts are given in Scheme II. Cycle-2 desmin was used in most experiments. Desmin was extracted from acetone powders as described in Scheme II but with the following modification: the acetone powder was first extracted at room temperature with water and then washed on a Büchner funnel to remove soluble actin and tropomyosin. The washed material was then extracted with acetic acid as described above. Purified desmin was occasionally stored by dissolving it in 1 mM HCl and keeping this solution frozen at -80°C. Desmin was stable for up to 2 wk under these conditions. If desmin was stored as a precipitate at pH 4 in acetic acid/acetate or at pH 7.5 in Tris-HCl, it tended to become acid insoluble with time. Freezing either KI-residues or desmin precipitates for several weeks also caused desmin to become acid insoluble.
In the cyclic purification procedure of Scheme II, desmin was found to be soluble in acid only at low ionic strength; it could not be redissolved if too much salt remained in the precipitate. Desmin precipitates are very sticky, especially towards glass, and the manipulation of them was minimized to reduce losses.

Scheme I: Preparation of KCl- and KI-Insoluble Residues of Gizzard.

Scheme II: Cyclic Purification of Desmin Extracted with Acetic Acid. Clarify by centrifugation at 20,000 g, 20 min (extract-2); neutralize to pH 4.0-4.2 and collect the precipitate as for the extract above (precipitate-2); redissolve in the desired volume of 1 M acetic acid (cycle-2 desmin). This cyclic procedure may be continued for an arbitrary number of steps.

A high molecular weight protein (HMW) begins to appear in the supernate, especially in LI supernate 3 (Fig. 2d). Two-dimensional gel electrophoresis of LI supernate 3 (Fig. 3) reveals the presence of the α and β components of desmin (15), the presence of β- and γ-actin (10, 15, 27, 36, 42), a pair of spots designated * (shown most clearly in Fig. 6), and three isoelectric variants of α-actinin. The HMW does not appear on our two-dimensional gels. We have adopted the convention of labeling the actin isoelectric variants of smooth muscle as β and γ, with γ denoting the most basic variant (10, 15). High ionic strength (HI) extraction of the LI-insoluble residue with 1 M KCl (Fig. 2e-g) and then with 1 M KI (Fig. 2h-j) releases most of the actomyosin, α-actinin, HMW, and tropomyosin. The remaining KI-insoluble residue (Fig. 1j) contains mostly actin and desmin. We have not quantitated yields or fold purification during this purification because of the lack of a suitable quantitative assay for desmin. Comparison of Fig. 1a and k and examination of the solubilized proteins (Fig. 2), however, indicates that the purification is substantial. The total amount of recoverable desmin in gizzard is quite small, however, and 200 g of gizzard muscle will typically yield 100-200 mg of moderately pure desmin (such as the cycle-2 desmin shown in Figs. 5 and 6). The actin and desmin that remain in the KI-insoluble residue are still associated with a considerable bulk of SDS-insoluble matrix, the composition of which is unidentified. Two-dimensional electrophoresis of the KI-residue reveals the presence of α- and β-desmin, of *1 and *2 (brackets), and of γ-actin (Fig. 4). A band is seen next to desmin on one-dimensional gels of some KI-residue preparations. It has never been unambiguously observed on the corresponding two-dimensional gels, however, and is most likely either the α- or β-component of desmin. We have not observed any marked tendency for desmin to be proteolyzed during our extraction procedures. The desmin from KI-extracted muscle has the same molecular weight and isoelectric variant composition as desmin from fresh muscle (15). Desmin, however, is slowly degraded at low pH.

SOLUBILIZATION OF DESMIN: Crude desmin may be solubilized from KCl- or KI-extracted gizzard residues by 1 M acetic acid (reference 33 and this paper), concentrated ethylene diamine, 0.5% Sarkosyl NL-97, 3 M sodium trichloroacetate, or 3 M urea. Of these, 1 M acetic acid at 0°C was chosen for routine extraction of desmin because it is easy to work with and to remove, is reasonably selective, and does not appear to denature or rapidly damage desmin.
Although acetic acid at elevated temperatures is reported to give better yields of desmin (33), its use was avoided because it produced extensive charge heterogeneity in both actin and desmin, and also solubilized considerable quantities of collagen and myosin. The major difference between the KI-residue and the KI-residue acetone powder is that the latter yields much purer desmin.

CYCLIC PURIFICATION OF DESMIN SOLUBILIZED BY ACETIC ACID (SCHEME II): When the first acetic acid extract (0°C) is neutralized, a fine precipitate containing desmin forms at about pH 4.0. As the pH is increased to 4.2, the precipitate coalesces into small flocculent masses. These are extremely sticky and easily trap air bubbles. A desmin-containing precipitate also forms if a 1 M acetic acid extract is brought to 0.3-0.4 M in NaCl.

FIGURE 4: Two-dimensional gel of the KI-insoluble residue (Fig. 1j). α,β-Desmin, proteins *1 and *2, and γ-actin are present.

The first acid extract of a KI-residue (Fig. 5a) was purified by four cycles of precipitation and solubilization as described in Materials and Methods (see Scheme II). The supernate from each precipitate was dialyzed against water and lyophilized. Precipitates 1-4 are shown by one-dimensional SDS-PAGE in Fig. 5b-e, and concentrated supernates 1-4 in Fig. 5f-i. During this process of cyclic precipitation, a constant protein composition is attained, with >90% of the actin and desmin precipitating in each cycle. The small amounts of actin and desmin which remain in the final supernate-4 are in the same relative proportion as the actin and desmin in precipitate-4. Desmin (mol wt 50,000) isolated in this manner is associated with four other proteins: myosin (mol wt 210,000), two intermediate-sized proteins (mol wt *1: 45-47,000; mol wt *2: 43-45,000), and actin (mol wt 42,000) (Fig. 6). Other proteins sometimes appear in the region between actin and desmin, but *1 and *2 predominate. The 42,000-mol wt protein was further shown to be actin by reaction with anti-actin antibody (see below). The amount of myosin that is observed with desmin is variable, and myosin is virtually absent from desmin extracted from KI-residue acetone powders. Every nondenaturing purification scheme that we have investigated so far has failed to selectively solubilize desmin away from either the * proteins or from actin. It is of interest that both desmin and the proteins that copurify with it have pI's of ~5.7 in 9 M urea/1% Nonidet NP-40.

The relative proportions of the desmin variants vary from one preparation to another. At least some of this variability appears to be artifactual. α-Desmin is particularly susceptible to conversion to a more acidic variant. Urea, heating, and the use of pH 4-6 Ampholines will all promote this conversion. In the presence of the pH 4-6 Ampholines, the α-desmin spot splits into two, with one remaining in the old position and one running ~0.05 pH units more acidic (data not shown).

DESMIN: The consistent copurification of actin and desmin suggested that they might be associated together as a complex. Because purified desmin is insoluble under conditions of low or high ionic strength, we studied desmin that had been solubilized with the medium-strength anionic detergent Sarkosyl NL-97. This detergent does not appear to denature desmin, because desmin preparations that have been solubilized with Sarkosyl will form 100-Å filaments when the Sarkosyl is dialyzed away (see below).
When Sarkosyl-solubilized desmin is chromatographed by gel filtration on a column of Ultrogel AcA 34 (range 20,000-340,000 mol wt), it is fractionated into two populations: one containing actin, desmin, and HMW that is excluded from the column; and one containing actin, desmin, *1, and *2 that is barely included in the column (Fig. 7). Two-dimensional electrophoresis (not shown) revealed the presence of α- and β-desmin and of γ-actin in both protein populations (fractions 17 and 37). When desmin is chromatographed on Bio-Gel P300 in the presence of 1 M acetic acid and 0.05 M NaCl (Fig. 8), the vast majority of desmin elutes in what is probably a monomeric position and is not associated with actin under these conditions. A small fraction of the desmin elutes with the excluded volume. The actin present does not appear to be monomeric, and most of it also elutes with the excluded volume.

Copolymerization of Actin and Desmin

FORMATION OF DESMIN GELS: When purified desmin is recovered from 1 M acetic acid (pH 2.4) by neutralization to pH 4.1, it generally precipitates as cohesive, cottonlike flakes. However, if the acetic acid is instead removed by dialysis against several changes of distilled water, three different and alternate phenomena are observed: (a) the spontaneous formation of a clear gel; (b) a clear solution; or (c) a cohesive precipitate. All three represent different states of desmin and of the proteins that copurify with it, as no differential participation of any of them is observed. A spontaneous gel is the most frequently observed state. The gels are extremely sticky, especially to themselves and to glass, but they will also coat the insides of plastic pipette tips. Desmin gels are strong enough to hold their shape when extruded from the dialysis bag. This includes any notches and wrinkles that resulted from their association with the nonuniform contours of the dialysis membrane. If they are left undisturbed, the clear gels contract slowly and become translucent over a period of several days. This spontaneous contraction can be speeded up, so that it is complete within 1 h, if the ionic strength or divalent cation concentration of the dialysis medium is raised. Fig. 9a shows an uncontracted gel in the dialysis bag in distilled water. Fig. 9b shows the same gel 1 h after the addition of 1 mM MgCl2 to the dialysis medium.

FIGURE 7: SDS-PAGE analysis of the fractions produced by the gel filtration of desmin and associated proteins on Ultrogel AcA 34 in the presence of 0.5% Sarkosyl NL-97. A KI-AP was washed with 1 M KCl and then with water. This resolubilizes some of the high molecular weight proteins. The washed pellet was extracted with 1 M acetic acid and this was cycled as described in Materials and Methods to produce a precipitate-2 (cycle-2 desmin). This was rinsed with 0.1 M Tris-HCl, pH 7.5, and then solubilized with 0.5% Sarkosyl NL-97, 100 mM NaCl, 10 mM Tris-HCl, pH 7.5, 10 mM 2-mercaptoethanol, and 10 mM NaN3. Ultrogel AcA 34 (range 20,000-350,000 mol wt) was equilibrated in this buffer and poured as a 2 × 40 cm column. Blue dextran-2,000 eluted with a peak at fraction 17, and an included dye marker eluted with a peak at ~fraction 120. In the actual run, all of the protein eluted between fractions 12 and 55. Fractions were heated with 0.2 vol of 5× SDS sample buffer before analysis on SDS-PAGE. We have not determined whether the micelle structure of Sarkosyl NL-97 had any effect on the elution profile.
The dialysis of cycle-purified desmin from acetic acid into water occasionally results in a metastable, nongelled solution (state 2). If left undisturbed in the dialysis membrane, these solutions will remain in this state for days at 5°C. However, if even a small amount of an ionic substance (Table I) is dialyzed into the metastable solution, gelation is initiated and is complete within 1 h (state 1 above). Glass will also trigger gelation (Fig. 10). Syneresis of the gel occurs subsequently. While we have not measured the relative effectiveness of various ions in inducing gelation, we have not noticed any obvious requirements for any particular ions. All of the above gelation and syneresis phenomena are apparently passive in the sense that an external source of energy is not required. Finally, the gelation phenomenon is not reversed by the removal of any of the substances listed in Table I (by extensive dialysis against water). The gels may be resolubilized in 1 M acetic acid, however, and the gelation procedure repeated. Sarkosyl NL-97 solutions of desmin will also form spontaneous gels when the Sarkosyl is removed by extensive dialysis against water. These gels appear to be similar to the acetic acid gels, but were not investigated extensively. They are composed of fibrils with ~100-Å diameters (data not shown).

FIGURE 8: SDS-PAGE analysis of the fractions produced by gel filtration of desmin and its associated proteins on Bio-Gel P300 in the presence of 1 M acetic acid and 0.05 M NaCl. Blue dextran-2,000 eluted in fractions 1 and 2. BSA and Cyt were mixed with the desmin before it was loaded on the column. The majority of desmin elutes as a monomer while a small fraction of desmin elutes with the excluded volume. All of the actin chromatographs in a manner similar to the partially excluded desmin. The elution profile for desmin is retarded relative to that expected for a 50,000-mol wt protein; one possible reason is that desmin may interact with Bio-Gel P300 under the above conditions. If this is so, caution should be exercised in assigning a monomeric molecular weight to this desmin fraction. The proteins B1, B2, and B3 are most likely BSA degradation products and they are not present when desmin alone is run on the column. Other abbreviations are as in Figs. 1, 2, and 6.

LIGHT MICROSCOPY OF DESMIN GELS: Phase microscopy of semicontracted desmin gels reveals a tangled network of branching fibers that are often embedded in an amorphous matrix (Fig. 11). These fibers are <1 µm wide and appear to be bundles of many smaller fibrils. No differences in morphology are seen when spontaneous gels or ion-initiated gels (10 mM KCl or 10 mM MgCl2) are compared. The matrix and fibril bundles are intimately associated with each other (Fig. 11). We were unable to selectively solubilize either matrix or fibers at any of several concentrations of urea between 0.1 and 1.0 M. Desmin precipitated from acetic acid by rapid neutralization to pH 4.1 is usually amorphous, but occasionally exhibits fibril bundles similar to those observed in desmin gels produced by dialysis (data not shown). In indirect immunofluorescence, both the fibrous and matrix components of the desmin gels are uniformly reactive with anti-desmin (Fig. 12) and anti-actin (Fig. 13) at the level of resolution of the light microscope. No periodicities or differential reactivity has been observed. Control preimmune antisera were unreactive.
ELECTRON MICROSCOPY OF DESMIN GELS: The insoluble gizzard residue which remains after extraction with 1.0 M KCl was investigated by negative staining to determine what fiber morphologies it contained before the extraction of desmin with acetic acid. High-magnification pictures show tangled groups of well-preserved 100-Å filaments (Fig. 14a-c). The 100-Å-fiber configurations of Fig. 14a-c are similar to those seen in association with dense bodies (1, 5, 39). The measured diameters of the fibers in Fig. 14 range from 120 to 140 Å. The most characteristic microscopic feature of desmin gels is that the long, tangled fibers seen at low magnification (Fig. 15a) actually represent a network of extensively intertwined fibrils that become visible at high magnifications (Fig. 15b, c). Neither microtubules nor F-actin, both of which can form gel-like solutions, show this mode of twisting self-interaction. The fibril morphology of these gels is not uniform, and a single gel may contain fibrils which exhibit regular profiles, twisted ribbon-like profiles, and intertwined profiles (Fig. 15c). The most consistent interpretation of these profiles is that they are actually flat ribbons of 120-140 Å width which have an inherent tendency to twist. The thin regions (60-80 Å; Fig. 15c, arrow) would correspond to nodes where the ribbon is parallel to the electron beam. We have not observed any profiles which can be interpreted as being strictly cylindrical along their entire lengths.

FIGURE 9: The formation of a spontaneous desmin gel. Acetic acid-solubilized cycle-2 desmin was dialyzed against several changes of water for 2 days at 4°C. The gel that resulted was photographed in the dialysis membrane (Fig. 9a). Dialysis was continued and 1 mM MgCl2 was added to the dialysis medium to initiate contraction of the gel. The gel was rephotographed 1 h after the addition of MgCl2 (Fig. 9b).

FIGURE 11: Light micrograph (phase optics) of a desmin gel similar to that depicted in Fig. 9b. Phase-contrast-dense fibers are seen embedded in an amorphous matrix. The fibers were probably oriented while the preparation was being flattened. Final magnification is ×1,000; 10 µm/cm.

Solubility Properties of Desmin

Desmin is present in smooth muscle in two forms: a "soluble" one that is released during extraction at LI with EGTA, and an "insoluble" one that remains after extraction of the muscle cells at HI. In both cases, desmin is associated with actin. The HI-insoluble desmin is solubilized either at low or high pH, or by agents that dissociate hydrophobic bonds. Two solubilizing agents were investigated in detail: acetic acid (1 M) and Sarkosyl NL-97 (0.5%). Acetic acid solubilizes desmin along with a variety of other proteins from KCl- or KI-insoluble residues. Of these, only desmin, actin, and two proteins designated * precipitate quantitatively at pH 4.0 (Fig. 5). The pH-dependent precipitation of desmin is probably not a simple isoelectric phenomenon, however. First, desmin does not resolubilize above pH 4 until a very high pH is reached (e.g., with ethylene diamine). Second, the dialysis of desmin from acetic acid into water sometimes results in a metastable soluble state. Exposure of these solutions to an ionic environment causes the immediate coprecipitation of both actin and desmin. These properties are unlike any previously described for actin (26).

TABLE I (substances that initiate gelation of metastable desmin solutions): 100 mM Tris-HCl, pH 6.9; 10 mM KCl; 10 mM NaCl; glass.
Desmin from avian muscle is resolved into two major isoelectric variants by two-dimensional electrophoresis (15, 21). Both variants are always present in every preparation and appear to behave identically in each of the purification schemes that we have employed (i.e., salt extractions, acetic acid cycling, and gel filtration in the presence of Sarkosyl). There is usually an excess of β-desmin over α-desmin in gizzard, however. One reason for this is the tendency of α-desmin to become modified and focus as two or more species. It is presently unknown whether α- and β-desmin are distinct gene products or if one arises by modification of the other.

Copurification of Actin and Desmin

The most significant finding of this research is the suggestion that desmin and actin form nonstoichiometric complexes. The evidence for this is as follows: (a) A small fraction of gizzard actin has solubility properties that are different from the bulk of the actin but which are the same as for desmin. (b) Both actin and desmin copurify during repeated cycles of acetic acid solubilization and pH 4 precipitation. A constant ratio of actin to desmin is attained, and this ratio is found in both the pH 4 precipitate and the supernate (Fig. 5e and i). (c) Gel filtration in the presence of 0.5% Sarkosyl NL-97 reveals an included fraction of actin and desmin that comigrate through the column. (d) Both actin and desmin appear to copolymerize from a metastable soluble state to form a single species of 100-Å-like filaments in which they are homogeneously distributed (see discussion below). The simplest interpretation of this is that actin and desmin are able to form stable, nonstoichiometric complexes with each other. We have hypothesized that desmin functions in muscle to bind separate actin-containing structures together into mechanically integrated units (22). The formation of an actin- and desmin-containing polymer may provide a molecular basis for this hypothesis. It is important to note, however, that we have not excluded the possibilities of nonspecific interaction of actin and desmin or of separate populations of actin and desmin that simply have similar solubility properties.

Copurification of Other Proteins with Desmin

At least two other proteins, termed *1 and *2, appear to be associated with desmin in a manner that is similar to that discussed for actin above. We wish to avoid giving these proteins names until we can determine whether they are specifically associated with desmin and whether or not they are cleavage fragments of desmin. This latter is a strong possibility. *2 comigrates with a known proteolytic fragment of desmin (in preparation), and both *1 and *2 are seen occasionally as isoelectric doublets on two-dimensional gels (data not shown). This further suggests that they may be derived from the cleavage of α- and β-desmin. It is intriguing to note that some preparations contain similar amounts of actin, *1, and *2 (Fig. 6c). The remaining proteins that associate with desmin are HMW, myosin, α-actinin, and tropomyosin. These are all proteins that are known to bind to actin in the absence of desmin and are probably bound to the actin that copurifies with desmin. The HMW protein may be filamin, an actin-binding protein from smooth muscle (40, 41). How the cell chemically specifies the interactions of actin with these proteins remains a matter for speculation.
Copolymerization of Actin and Desmin to 100-Å-like Filaments

Desmin appears to exist as an insoluble hydrophobic polymer under physiological conditions. These polymers may be solubilized by conditions of low or high pH (at low ionic strength) or by agents which dissociate hydrophobic bonds (5, 22, 33). Acetic acid-solubilized desmin forms either spontaneous gels or metastable solutions when the acetic acid is replaced with water by dialysis.

FIGURE 14: 100-Å-diameter fibers in a KCl-residue. Fig. 14a-c shows tangled groups of 100-Å fibers with a well-preserved substructure and diameters of 120-140 Å.

FIGURE 12: Indirect immunofluorescence of desmin gels using desmin-specific antibodies. Desmin fibers (described in Fig. 13) were reacted with anti-desmin and observed with phase contrast (Fig. 12a and c) and epifluorescence (Fig. 12b and d) optics. Fig. 12b shows a fine fluorescent network, between the fluorescent-filament bundles, that is nearly invisible in 12a. The larger fibers thus appear to be aggregates of thinner fibers. The fluorescence is uniformly distributed throughout the whole length of the fibers. Preimmune antisera (not shown) were completely negative. ×100 oil-immersion objective, NA 1.32; final magnification is ×1,000; 10 µm/cm.

FIGURE 13: Indirect immunofluorescence of desmin gels using actin-specific antibodies. Desmin fibers from a gel were reacted with anti-smooth-muscle-actin and viewed with phase contrast (Fig. 13a) and epifluorescence (Fig. 13b) optics. As in the case with anti-desmin (Fig. 12), the fluorescence is uniformly distributed throughout the whole length of the fibers. No periodicities or differential staining of matrix or fibers is seen with either anti-actin or with anti-desmin. Preimmune antisera (not shown) were uniformly negative. The gels used for Figs. 12 and 13 were cycle-1 acetic acid extracts of KI-residue. The extracts were dialyzed against water and the resulting metastable solutions were induced to gel with 10 mM MgCl2. These extracts are relatively rich in desmin-associated actin. ×100 oil-immersion objective, NA 1.32; final magnification ×1,000; 10 µm/cm.

FIGURE 15: Typical gels produced by the dialysis of cycle-2 acetic acid extracts of KI-residue acetone powders against water. Many long, straight fibers can be seen in Fig. 15a and b. Most of these are composed of twisted and intertwining fibrils of 130 Å diameter (15b). Ribbonlike characteristics are evident in Fig. 15c. Occasional 11-15-Å profiles, which may be protofilaments, are visible in Fig. 15c. The substructure in Fig. 15c resembles that of the 100-Å filaments shown in Fig. 14b and c.

The metastable solutions rapidly convert to gels if the concentration of ions rises above roughly micromolar values or if they are exposed to ionic surfaces (e.g., glass). These gels are characteristically composed of a network of highly intertwined fibrils which measure 120-140 Å in diameter. Most of the negatively stained fibril images are consistent with an interpretation of the fibrils as flat ribbons. These ribbons appear to intertwine to build up the macroscopic fibers that are visible in the light microscope, although it is possible that the fibers are an artifact of the negative staining procedure. Immunofluorescence indicates that desmin and the actin that copurifies with it are uniformly distributed in desmin fibers at a resolution limit of 2,500 Å (Rayleigh criterion for self-luminous points at λ of 530 nm and NA of 1.32; 32).
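For reference, the quoted resolution limit follows directly from the Rayleigh criterion at the stated wavelength and numerical aperture:

    d = 0.61 λ / NA = (0.61 × 530 nm) / 1.32 ≈ 245 nm ≈ 2,450 Å,

consistent with the ~2,500 Å figure quoted above.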
Similarly, there is no overt evidence at the electron microscope level of separation into distinct filament morphologies. The gels also contain the same ratios of actin and desmin that the ungelled solution did. In addition, gel filtration in acetic acid indicates that most of the actin and desmin are unassociated under these conditions (Fig. 8). If the high molecular weight actin and desmin are not already associated in acetic acid, then they must become associated once it is removed. Thus, actin and desmin appear to copolymerize from solution.

Comparison of the Gelation of Desmin and Actin

A variety of cytoplasmic extracts have been discovered to undergo gelation and subsequent syneresis in vitro, and it is of interest to compare these with the gelation and syneresis of desmin. Typically, an extract that is capable of undergoing gelation is produced by homogenizing cells at 0°C in a buffered solution containing ATP, EGTA, and sucrose or glycerol. Upon being warmed, these extracts gel and then undergo syneresis if the gelled state is maintained. The gel-forming components of Acanthamoeba (23, 25), Dictyostelium (38), pulmonary macrophages (12, 37), and sea urchin eggs (16, 17) have been fractionated. While these are not identical systems, gel formation generally appears to depend upon the polymerization of G-actin to F-actin and on the subsequent cross-linking of this F-actin by one or more accessory proteins. In most cases, syneresis is magnesium-ATP dependent and is based upon the interaction of the cross-linked F-actin with myosin-like proteins. Exceptions to this include the ATP-independent syneresis of sea urchin egg gels (16, 17) and F-actin-filamin gels (41). The formation of desmin gels does not depend upon any conditions which are known to stabilize F-actin: gel formation can be triggered by subphysiological concentrations of many different ions; neither ATP nor calcium is required; gel formation is not reversible except by resolubilization in acetic acid or Sarkosyl NL-97; and syneresis occurs in the absence of myosin or ATP. The adhesiveness of desmin for itself suggests that the syneresis of desmin gels may result from an autoaggregation process. It thus appears that the gelation of extracts based predominantly on actin is significantly different from the gelation of extracts that are predominantly desmin. The physiological significance of gelation and syneresis remains to be determined.

Are Desmin Filaments Related to 100-Å Filaments?

Two lines of evidence indicate that desmin is a major subunit of the 100-Å filaments of muscle. The first is based on the solubility properties of these filaments. Smooth muscle that has been extracted at high ionic strength is enriched in 100-Å filaments and contains actin and desmin as its major protein constituents (5, 6, 22, 33; see also above). Urea solubilizes these two proteins and also removes the 100-Å filaments from extraction-enriched muscle (5). Second, the subsequent removal of solubilizing agents from desmin by dialysis has resulted in the production of ~100-Å-sized filaments from urea (5), acetic acid (reference 33 and this paper), and Sarkosyl NL-97 (data not shown). We have shown that most of the actin can be removed from desmin and that it will still polymerize to intermediate-sized filaments. These filaments are very similar to in vivo 100-Å filaments. There are apparently two major differences between in vitro desmin fibrils and in vivo muscle 100-Å filaments.
First, if desmin fibrils are in fact ribbonlike, then they differ from in vivo 100-Å filaments, which have been shown to have cylindrical cross sections in the vast majority of preparations (1, 6, 39). It is possible, however, that the negative staining and drying procedures induced artifactual flattening and twisting in otherwise cylindrical desmin fibrils. Second, the adhesiveness of in vitro desmin fibrils and precipitates is unexpected, because the 100-Å filaments of smooth muscle are not aggregated under normal circumstances. This phenomenon may result from either our in vitro assembly conditions or from an interaction of desmin with one of the proteins that copurifies with it. However, the formation of 100-Å-filament aggregates in colcemid-treated cultured cells has been reported for a variety of cell types, including striated muscle (14), cardiac muscle (20), and smooth muscle (unpublished observations). Although the mechanism of this aggregation remains unknown, it may reflect the unmasking of adhesive properties in vivo which are similar to those shown by the desmin fibrils in vitro.
Junctional Adhesion Molecules (JAMs) - New Players in Breast Cancer?

1.1 Global incidence of breast cancer

Worldwide, breast cancer remains a leading cause of death amongst women. Annually, it is estimated that breast cancer is diagnosed in over a million women (Kasler et al., 2009), with over 450,000 deaths worldwide (Tirona et al., 2010). The incidence of the disease is highest in economically-developed countries, with lower rates in developing countries. Despite continual advances in breast cancer care, which have reduced mortality, the incidence of the disease is still rising. The decrease in breast cancer-specific mortality has been attributed to improvements in screening techniques which permit earlier detection, to surgical and radiotherapy interventions, to a better understanding of disease pathogenesis, and to the utilization of traditional chemotherapies in a more efficacious manner. Consequently, early stage breast cancer is now a curable disease, while advanced breast cancer remains a significant clinical problem.

Breast cancer is a heterogeneous disease encompassing many subtypes, which differ both in terms of their molecular backgrounds and their clinical prognosis. These breast cancer subtypes range from pre-invasive early stage disease to advanced invasive disease. The simplest classifications subdivide breast cancer into pre-invasive and invasive forms, with the pre-invasive forms being ductal carcinoma in situ (DCIS) and lobular carcinoma in situ (LCIS). Carcinoma in situ is proliferation of cancer cells within the epithelial tissue without invasion of the surrounding stromal tissue (Bland & Copeland, 1998). DCIS arises in the terminal ductal lobular units (TDLU) and in extra-lobular ducts, while LCIS occurs in the breast lobules and is recognisable histopathologically by the presence of populations of aberrant cells with small nuclei (Hanby & Hughes, 2008). Invasive breast cancers are subclassified into invasive ductal breast cancer, invasive lobular breast cancer, inflammatory breast cancer and Paget's disease. Invasive ductal carcinoma (IDC) is the most common form of invasive breast cancer, accounting for around 85% of all cases.

DCIS is frequently considered as an obligate precursor to IDC, progressing from lower to higher grades and then on to invasive cancer with progressive accumulation of genomic changes (Farabegoli et al., 2002). However, it has alternatively been suggested that there exist genetically-distinct subgroups of DCIS, only some of which have the potential to progress to invasion (Shackney & Silverman, 2003). Long-term natural history studies of DCIS have provided supportive evidence for both possibilities (Page et al., 1995; Collins et al., 2005; Sanders et al., 2005). Despite such controversies, the large extent to which the genome is altered in DCIS strongly suggests that genomic instability precedes phenotypic evidence of invasion (Hwang et al., 2004). This serves to underline the fact that malignant transformation in a heterogeneous disease like breast cancer is a dynamic process evolving through multiple multi-step pathway models.

Many factors are thought to be responsible for the development of breast cancer. Genetic factors play a vital role in the predisposition to breast cancer, with mutations of the BRCA1 and BRCA2 genes accounting for 5-10% of breast cancer cases and being responsible for 80% of inherited breast cancers (Nathanson et al., 2001).
On a more complex level, much insight has been gained from the genetic profiling of thousands of tumours to generate gene signatures of prognostic value (Sorlie et al., 2001; van 't Veer et al., 2002), which have spurred the development of commercially-available diagnostic tests. The importance of reproductive factors in the aetiology of breast cancer is also well recognised, with early onset of menarche, nulliparity, late menopause, and endogenous and exogenous hormones representing the main risk factors (Reeves et al., 2000; Key et al., 2001; Howell & Evans, 2011). Several other studies have reported an increased risk of breast cancer with lack of physical activity (especially in premenopausal women), as well as with increasing age and obesity (Clarke et al., 2006; Walker & Martin, 2007; Harrison et al., 2009; Rod et al., 2009; Awatef et al., 2011). These risk factors accentuate the abnormal growth control of cells by increasing the circulating levels of oestrogen, thereby promoting tumourigenesis within the breast microenvironment. A proper understanding of the breast cancer microenvironment is essential for understanding breast cancer, and will be explored in detail in the next sections.

Breast structure and breast cancer microenvironment

The breasts are modified sweat glands with a specialized function to produce milk. In the adult, the mature breast extends from the second rib to the seventh rib and from the lateral border of the sternum to the midaxillary line, and projects into the axilla at the axillary tail of Spence (Monkhouse, 2007). The breast is located within the superficial fascia of the anterior thoracic wall and is made up of 15-20 lobes of glandular tissue (Bland & Copeland, 1998). Fibrous connective tissue forms the framework that supports the lobes, and adipose tissue fills the space between the lobes. Each lobe of the mammary gland terminates in a lactiferous duct which opens onto the nipple and is lined with breast epithelial tissue. These ducts have a sinus at the base beneath the areola called the lactiferous sinus (Figure 1).

Breast cancers are characterised by abnormal proliferation of breast epithelial cells and mostly originate in milk ducts (Sainsbury et al., 2000). Normal milk ducts consist of an outer myoepithelial cell layer and an inner luminal epithelial layer. Myoepithelial cells, which are of ectodermal origin, lie between the surface epithelial cells and the basal lamina. Both the epithelial and myoepithelial cells of the breast duct lie on a basement membrane composed of extracellular matrix factors secreted by those cells (Figure 2). The basement membrane is important for defining the barriers of the normal duct, and thus alterations in the basement membrane have been implicated in abnormal cell differentiation and the formation of metastases (Kleinman et al., 2001). Proliferation of cells within the breast ducts is controlled by growth-promoting proto-oncogenes and growth-inhibiting tumour suppressor genes. In most cases, normal cells divide as many times as needed and then stop. Carcinogenic mutations in either (or both) oncogenes and tumour suppressor genes, along with subsequent interactions between defective genes and the breast microenvironment, alter not just the proliferation of breast cells but also their differentiation, survival and genome stability (Hahn & Weinberg, 2002), leading to abnormal cell growth and potentially cancer.
Much evidence supports the contention that the pathogenesis of breast cancer is influenced by complex interactions between ductal epithelial cells and the cells that compose the tumour microenvironment (Weaver et al., 1996; Polyak & Hu, 2005; Hu et al., 2008). The next section will focus on the cells of the microenvironment with respect to normal breast tissue structure and also their possible involvement in breast tumourigenesis.

Cells of the breast microenvironment

The abnormal epithelial cells composing a breast carcinoma form only one component of a complex microenvironment which influences the success or failure of a developing tumour. In fact, the breast tumour microenvironment also consists of multiple cell types, including myoepithelial cells, fibroblasts, endothelial cells and immune cells such as macrophages (Figure 2). In terms of their likely contributions to breast tumourigenesis, fibroblasts and macrophages are often considered as tumour promoters through downstream signalling from various secreted factors, while the endothelial cells which develop in tumour-associated blood vessels also support cancer development. In contrast, myoepithelial cells exert functions broadly considered as tumour-suppressive.

Fibroblasts are an important structural component of the extracellular environment in the normal breast, where they help control the development of the breast epithelium (McCave et al., 2010). Their secretion of extracellular matrix components and cytokines has also implicated them in tumorigenic growth associated with invasive breast cancer (Orimo et al., 2005), and differences in cellular responsiveness to normal versus tumour-derived fibroblasts have been noted (Sadlonova et al., 2005). Many studies have highlighted the potential involvement of fibroblasts in promoting tumour progression at both genomic and transcriptomic levels, with reports of altered genetic signatures between normal and tumour-associated fibroblasts supporting a complex role for fibroblasts in influencing tumour progression (Hu et al., 2008; Ma et al., 2009).

Macrophages within the breast cancer microenvironment have been shown to enhance tumour growth through the secretion of pro-angiogenic factors like vascular endothelial growth factor (VEGF) (Murdoch et al., 2004; Lamagna et al., 2005; Lewis & Hughes, 2007). They have also been implicated in promoting a metastatic phenotype, via the secretion of pro-migratory factors such as EGF (Wyckoff et al., 2004) which enhance cellular dissemination from a primary tumour. Accordingly, the enhanced physical juxtaposition of macrophages, tumour cells and endothelial cells has been proposed as a new prognostic histopathological marker associated with increased risk of metastases in human breast cancer (Robinson et al., 2009).

Endothelial cells, which line the blood vessels, are derived from angioblasts forming the vascular network. Enhanced vessel density occurring as a result of tumour-associated angiogenesis is a major contributor to both the survival of primary breast tumours (via the delivery of systemic growth factors) and the risk of metastasis (via increased access of disseminated tumour cells to a circulatory source). Expression of pro-angiogenic factors such as VEGF has been shown to increase in haematological malignancies (Fiedler et al., 1997; Molica et al., 1999) in addition to solid tumours including breast, renal, ovarian, gastric and lung cancer (Burger, 2011; Gou et al., 2011; Sharma et al., 2011).
VEGF promotes neovascularisation via mitogenic and pro-migratory effects on endothelial cells (Asahara et al., 1999). Finally, myoepithelial cells are known to play a role in the formation of the basement membrane and thereby assist in maintaining polarity of the breast ductal epithelium. They also interact with epithelial cells to regulate the cell cycle and to suppress breast cancer cell growth, invasion and angiogenesis (Weaver et al., 1996; Alpaugh et al., 2000; Barsky, 2003). Tumour and non-tumour primary myoepithelial cells have been described to differ in functional properties relating to the secretion of extracellular matrix components such as laminin-1 (Gudjonsson et al., 2002), and accordingly myoepithelial cells reportedly lose their established tumour-suppressive properties during tumour progression (Polyak & Hu, 2005).

Taken together, the many cell types within the breast tumour microenvironment can both individually and coordinately regulate several functions relevant to tumour progression. In order to better understand their relative contributions to breast cancer, it is necessary to dissect the signals that regulate their own functions. Since adhesive functions are central to the behaviour of all of these cell types, the remainder of this chapter will focus on their potential regulation by a family of adhesion proteins termed the Junctional Adhesion Molecules (JAMs), whose role in breast cancer initiation and progression is just emerging.

Cell-cell adhesion and the functional roles of JAMs in epithelial/endothelial cells

2.1 Introduction to cell-cell adhesion complexes and JAMs

Cells within the breast tumour microenvironment physically interact with each other and with the extracellular matrix through a range of cell adhesion proteins. Cell adhesion proteins play fundamental roles in normal physiology (such as the control of cell polarity and epithelial barrier function), but their dysregulation has been shown to participate in tumour cell migration, invasion and adhesion (for review, see Brennan et al., 2010). Adhesion proteins rarely exist in isolation from each other on the cell membrane; rather, they form components of multiprotein adhesion complexes containing a network of adhesion, scaffolding and signalling proteins. Breast epithelial cells express various types of adhesion complexes, namely hemidesmosomes and focal adhesions at the cell-matrix interface, with tight junctions, adherens junctions, desmosomes and gap junctions at the cell-cell interface. Collectively, adhesion complexes are composed of integral membrane proteins and cytoplasmic scaffolding proteins that organise signalling complexes and anchor cell-cell contacts to intermediate filaments (at desmosomes and hemidesmosomes) or to actin filaments (at adherens junctions, tight junctions and focal adhesions).

Tight junctions (TJs) play a vital role in regulating the paracellular flux of ions, small molecules and inflammatory cells, as well as defining distinctly-polarized membrane domains and facilitating bi-directional signalling between the intracellular and extracellular compartments. These functions of the TJ are regulated by the balance of three different types of integral membrane proteins: (1) Occludins and Tricellulin, (2) Claudins and (3) Immunoglobulin Superfamily (IgSF) members. Of most interest in this chapter is the Junctional Adhesion Molecule (JAM) subfamily of the IgSF, and its potential contribution to cancer initiation and progression.
The JAM family consists of 5 proteins (JAM-A, -B, -C, -4, -L) which are major components of TJs in endothelial and epithelial cells in a variety of vertebrate and invertebrate tissues (Martin-Padura et al., 1998; Liang et al., 2000; Liu et al., 2000; Arrate et al., 2001; Aurrand-Lions et al., 2001; Itoh et al., 2001; Hirabayashi et al., 2003; Tajima et al., 2003). JAM proteins are also expressed on the surface of haematopoietic cells such as platelets, neutrophils, monocytes, lymphocytes, leukocytes and erythrocytes, in addition to connective tissue cells such as fibroblasts and smooth muscle cells (Azari et al., 2010; Kornecki et al., 1990; Naik et al., 1995; Malergue et al., 1998; Williams et al., 1999; Cunningham et al., 2000; Palmeri et al., 2000; Arrate et al., 2001; Aurrand-Lions et al., 2001; Moog-Lutz et al., 2003; Morris et al., 2006).

JAMs are type I transmembrane proteins consisting of an N-terminal signal peptide, an extracellular domain (consisting of two immunoglobulin-like domains), a single membrane-spanning domain and a short cytoplasmic tail (Martin-Padura et al., 1998; Liu et al., 2000; Sobocka et al., 2000; Aurrand-Lions et al., 2001; Naik et al., 2001; Santoso et al., 2002). The cytoplasmic tail is thought to play a major role in the assembly of adhesion signalling complexes, since it has been reported to bind to PDZ domain-containing scaffold proteins such as ZO-1 (Bazzoni et al., 2000; Ebnet et al., 2000), AF-6 (Ebnet et al., 2000) and MUPP1 (Hamazaki et al., 2002). JAMs -A, -B and -C exhibit a short cytoplasmic tail of 45-50 residues that ends with a type II PDZ-binding motif, while JAM-4 and JAM-L have longer cytoplasmic tails (of 105 and 98 residues, respectively). JAM-4 and JAM-L differ in that the cytoplasmic tail of the former ends in a canonical type I PDZ-binding motif, while that of the latter lacks a PDZ-binding motif (Mandell & Parkos, 2005). The cytoplasmic tails of JAM proteins also contain consensus phosphorylation sites that may serve as substrates for protein kinase C, protein kinase A and Casein Kinase II (Naik et al., 1995; Cunningham et al., 2000; Ozaki et al., 2000; Sobocka et al., 2000; Arrate et al., 2001; Naik et al., 2001). Indeed, evidence suggests that specific phosphorylation sites may be critical for targeting of JAMs to intercellular junctions (Ozaki et al., 2000; Ebnet et al., 2003).

JAM proteins have been implicated in a diverse array of physiological functions involving cell-cell adhesion/barrier function (Liang et al., 2000; Liu et al., 2000; Mandell et al., 2004), leukocyte migration (Martin-Padura et al., 1998; Palmeri et al., 2000; Johnson-Leger et al., 2002; Ostermann et al., 2002), platelet activation (Kornecki et al., 1990; Naik et al., 1995; Gupta et al., 2000; Ozaki et al., 2000; Sobocka et al., 2000; Naik et al., 2001; Babinska et al., 2002) and angiogenesis. These functions will be further discussed in the next sections.

JAM proteins regulate epithelial/endothelial cell-cell adhesion and barrier function

JAM proteins are well known to be important for cell-cell adhesion in both epithelial and endothelial cells (for review, see Mandell & Parkos, 2005), but emerging evidence supports the possibility that they also regulate cell-matrix adhesion complexes.
Interestingly, JAM-A knockdown in endothelial cells and MCF7 breast cancer cells has been shown to reduce adhesion to fibronectin and vitronectin (McSherry et al., 2011), while JAM-C overexpression in endothelial cells reportedly decreases attachment to fibronectin, vitronectin, and laminin (Li et al., 2009). This apparent incongruity may relate to the fact that JAM-A may activate β1 integrins (McSherry et al., 2011), while JAM-C has conversely been described to inactivate β1 integrins (Li et al., 2009). An inverse relationship between JAMs -A and -C has also been observed in terms of tight junction function, with JAM-A promoting tight junction sealing while phosphorylated JAM-C increases paracellular leakiness due to its redistribution away from TJs (Li et al., 2009). Furthermore, adhesion of the lung carcinoma cell line NCI-H522 to endothelial cells was significantly blocked by soluble JAM-C (Santoso et al., 2005). The contribution of JAM proteins to cell-cell adhesion and the assembly of epithelial/endothelial TJs relates to their ability to promote the localization of ZO-1, AF-6, CASK and occludin at points of cell-cell contact. Evidence suggests that both homophilic and heterophilic interactions, as well as an intact PDZ-binding motif, are important for such functions of JAMs. Accordingly, JAMs have been shown to physically interact with the PDZ proteins ZO-1 (Bazzoni et al., 2000; Ebnet et al., 2000), AF-6 (Ebnet et al., 2000), CASK (Martinez-Estrada et al., 2001), PAR-3 (Ebnet et al., 2001; Itoh et al., 2001) and MUPP-1 (Hamazaki et al., 2002), which are involved in actin cytoskeletal rearrangement (Fanning et al., 2002), cell signalling (McSherry et al., 2011; Boettner et al., 2000) and the control of cell polarity. However, JAMs can also bind to non-PDZ proteins such as cingulin (Bazzoni et al., 2000), and indirectly bind occludin (Bazzoni et al., 2000) and claudin-1 via their interactions with ZO-1 (Hamazaki et al., 2002). Although the manner in which JAMs interact with some of these proteins is incompletely understood, it appears that homo-dimerisation of JAM proteins is important for regulating some key downstream functions. This has been illustrated by the fact that dimerisation-blocking anti-JAM-A antibodies (Liu et al., 2000) and soluble Fc-JAM-A (Liang et al., 2000) delay the recovery of electrical resistance (a marker of TJ function) in epithelial cells following transient depletion of extracellular calcium.

JAM proteins regulate epithelial/endothelial migration

In general, cell adhesion and cell migration are inversely related, and serve to control important physiological functions and pathophysiological events. However, in the case of JAM family members, close functional associations with cell polarity proteins may act as a switch between increased adhesion (predisposing to slow, directional migration) and decreased adhesion (predisposing to faster, more random motility). For example, JAM-A re-expression in JAM-A-/- mouse endothelial cells has been shown to reduce the occurrence of spontaneous and random motility. This ability of JAM-A to influence the polarised movement of cells was reliant on its interaction with polarity proteins through its PDZ-binding motif (Bazzoni & Dejana, 2004). JAM-A deletion mutants lacking their PDZ-binding residues have been shown to have increased availability of Par3 (Ebnet et al., 2001), resulting in PKCζ inactivation and the loss of contact-dependent inhibition of cell motility (Mishima et al., 2002; Bazzoni & Dejana, 2004).
These data show that loss of functional JAM-A results in faster random motility, with reduced cell-cell contact inhibition of migration. Interestingly, JAM-C redistribution away from TJs stimulates β1 and β3 integrin activation, resulting in increased cell migration and adhesion (Aurrand-Lions et al., 2001). Furthermore, JAM-A and JAM-4 have been found to induce the formation of actin-based membrane protrusions, an essential part of cell migration, in endothelial and COS-7 cells (Mori et al., 2004). Together these data suggest that loss of JAM-A promotes random motility, while JAM-A, JAM-C and JAM-4 promote directional cell migration through their effects on integrin function and cytoskeletal reorganization. In the context of cancer, knockdown of JAM-A has been shown to enhance the invasiveness of the breast cancer cell lines MDA-MB-231 and T47D, and of the renal cancer cell line RCC4 (Naik et al., 2008; Gutwein et al., 2009). Conversely, the overexpression of JAM-A in MDA-MB-231 cells reportedly inhibits both migration and invasion through collagen gels (Naik et al., 2008), suggesting that loss of JAM-A expression increases cancer cell dissemination and invasion. However, the specific contribution of JAM-A to breast cancer progression remains controversial. McSherry et al. showed a significant association between high JAM-A gene or protein expression and poor survival in two large cohorts of patients with invasive breast cancer, and concurrently a decrease in the migratory abilities of high JAM-A-expressing MCF-7 cells upon knockdown or functional inhibition of JAM-A (McSherry et al., 2009). Reduced motility after JAM-A loss was subsequently linked to reduced interactions between JAM-A, AF-6 and the Rap1 activator PDZ-GEF2, resulting in reduced activity of Rap1 GTPase (McSherry et al., 2011), a known activator of β1 integrins (Sebzda et al., 2002) and a regulator of breast tumourigenesis (Itoh et al., 2007). Complementary evidence in a recent publication by Gotte et al. has also supported the theory that JAM-A overexpression is of more functional relevance in breast cancer than JAM-A loss, since over-expression of micro-RNA (miR)-145 in breast cancer cells led to a decrease in cellular migration and invasion via downregulation of JAM-A expression (Gotte et al., 2010). Still more recently (during the proofing stage of this chapter), additional histopathological evidence has been provided for a link between JAM-A over-expression and poor prognosis in breast cancer patients (Murakami et al., 2011). This, along with the finding that JAM-A promotes the survival of mammary cancer cells (Murakami et al., 2011), strongly suggests that JAM-A depletion or antagonism could offer promise in reducing breast tumour progression. Furthermore, depletion of JAM-A has been found to inhibit bFGF-induced migration of human umbilical vein endothelial cells (HUVEC) on vitronectin, through effects on integrin function. In other cell systems, silencing of the JAM-A gene has been shown to block the migration of inflamed smooth muscle cells (Azari et al., 2010) and to increase the random motility of dendritic cells (Cera et al., 2004). JAM-A has also been shown to be required for neutrophil directional motility (Corada et al., 2005), and to promote neutrophil chemotaxis by controlling integrin internalization and recycling (Cera et al., 2009).
Thus, while the area remains controversial, increasing evidence suggests that JAMs promote migration and invasion through the regulation of integrin expression and activation (McSherry et al., 2011; Li et al., 2009; McSherry et al., 2009). In breast cancer, the formation of metastases at distant sites is the leading cause of cancer-related death. In order for breast cancer cells to metastasize, they must first migrate out of the primary tumour before ever reaching a distant organ and potentially proliferating into a secondary tumour. While JAMs are already known to regulate migration, the possibility that they are also involved in the regulation of proliferation will be referred to in section 3.3 of this chapter. Altogether, these data highlight the role of JAM family members in controlling the balance between cell adhesion and migration. Although much remains to be understood about the exact role of JAMs in breast cancer cell migration, the classic description of tumours as "wounds which do not heal" (Riss et al., 2006) suggests that the migratory mechanisms employed by JAMs in physiological responses (such as wound healing) may also be utilised by cancer cells to promote tumour progression or survival.

Potential role of JAM proteins in epithelial/endothelial differentiation

In previous sections we discussed the biphasic role of JAM family members in regulating cell adhesion and migration. In this section we will outline the emerging contribution of the JAM family to cellular differentiation. Cell differentiation in the context of normal tissue usually involves the transition from an undifferentiated stem/progenitor cell to a terminally-differentiated cell such as an epithelial, muscle or nerve cell. JAM-A, JAM-B, JAM-C and JAM-4 have been found to be highly expressed on hematopoietic stem cells (HSCs) in the bone marrow, with their expression decreasing during the acquisition of a more differentiated state (Nagamatsu et al., 2006; Sakaguchi et al., 2006; Sugano et al., 2008; Praetor et al., 2009). Furthermore, JAM-A expression has been reported to be high on undifferentiated HC11 mammary epithelial cells relative to differentiated cells (Perotti et al., 2009). In support of a potential association between high JAM-A and poor differentiation status, high JAM-A gene or protein expression has been associated with a poorer grade of differentiation in tissues from patients with invasive breast cancer (McSherry et al., 2009). Conversely, JAM-A has been found to mediate the differentiation of CD34+ progenitor cells to endothelial progenitor cells and to facilitate CD34+ cell-induced re-endothelialization in vitro (Stellos et al., 2010). This suggests that JAM-A is required for circulating CD34+ progenitor cells to recognise a site of injury, differentiate into endothelial cells and proliferate to repair the injured endothelium. In addition, JAM-A is reportedly upregulated during the differentiation of pancreatic AR42J cells (Yoshikumi et al., 2008), while JAM-A mRNA and protein levels have been shown to be increased during differentiation of the human monocytic cell line THP-1 into mature dendritic cells (Ogasawara et al., 2009). JAM-L is also induced during differentiation of myeloid leukaemia cells, with expression of JAM-L in myeloid leukaemia cells resulting in enhanced cell adhesion to endothelial cells (Moog-Lutz et al., 2003).
This upregulation of JAM-A during differentiation is reportedly followed by increased expression of the polarity proteins Par3 and PKCλ (Yoshikumi et al., 2008), which have been previously shown to affect cell polarity and migration. While these data suggest conflicting roles for JAMs in stem cell populations versus differentiation, at this early stage the exact role(s) of JAMs in stem cell renewal or differentiation can only be speculated upon. Fundamentally, it is also unknown whether the expression of JAMs is actively required or passively upregulated in stem cell populations. However, based on the increased expression of JAM-A in poorly-differentiated breast cancers (McSherry et al., 2009) and the emerging role of JAM-A in regulating proliferation and apoptosis (Azari et al., 2010; Nava et al., 2011; Murakami et al., 2011), it will be interesting to determine if JAM-A is upregulated on cancer stem cell populations and whether its expression promotes self-renewal.

JAM proteins regulate endothelial angiogenesis

As already alluded to, JAM proteins are highly expressed on endothelial cells and have been crucially implicated in the control of barrier function and cell motility. In the context of cancer, however, endothelial cells assume a new importance via the development of neovascularisation sites to support growing tumours (Hanahan & Folkman, 1996). This section will review the evidence currently linking JAM proteins to angiogenesis as a contributory mechanism to cancer progression. Angiogenesis in response to enhanced growth factor signalling is of particular relevance in tumour microenvironments. A body of work from Naik et al. has convincingly shown an important role for JAM-A in angiogenesis induced by basic fibroblast growth factor (bFGF). Specifically, bFGF signalling facilitates the disassembly of an inhibitory complex between JAM-A and αvβ3 integrin, permitting JAM-A-dependent activation of MAP kinase, which leads to endothelial tube formation, a surrogate for angiogenesis. JAM-A has also been shown to activate extracellular signal-related kinase (ERK) signalling in response to bFGF, facilitating endothelial migration in a matrix-specific context. In vivo, JAM-A expression has been linked with the very early stages of murine embryonic vasculature development (Parris et al., 2005), and although JAM-A appears to be dispensable for vascular tree development, homozygous JAM-A-null mice were found to be incapable of supporting FGF-2-induced angiogenesis in isolated aortic ring assays (Cooke et al., 2006). In the context of tumour neovascularisation, others have reported reduced angiogenesis in a model of pancreatic carcinoma in JAM-A-null mice (Murakami et al., 2010). Other JAM family members appear to contribute similarly to angiogenesis, with functional blockade of JAM-C being shown to decrease aortic ring angiogenesis and to block angiogenesis in hypoxic vessels of the murine retina (Lamagna et al., 2005; Orlova et al., 2006). Furthermore, soluble JAM-C shed into the serum of patients with inflammatory conditions (presumably following cleavage by ADAM enzymes) was noted to induce endothelial tube formation in a Matrigel model (Rabquer et al., 2010). An interesting dichotomy, however, is that amplification of JAM-B in a trisomy-21 mouse model of Down's syndrome has been linked with reductions in VEGF-induced angiogenesis and thus anti-tumour effects in a lung carcinoma model in these mice (Reynolds et al., 2010).
Taken together, these studies illustrate that, by influencing angiogenic functions in endothelial cells, JAMs may indirectly influence the ability of tumours to survive and progress. While there appears to be a consensus that JAMs -A and -C activate signalling cascades that promote angiogenesis, it is possible that clear roles for the other family members in the regulation of angiogenesis will also emerge in time. It is tempting to speculate that pharmacological antagonism of JAMs will show promise as an option for blocking tumour progression, similar to the VEGF-A-neutralizing antibody bevacizumab (Avastin) (Van Meter & Kim, 2010).

JAM proteins regulate trafficking of leukocytes

In addition to the potential regulatory roles of JAM proteins on the vascular endothelium, effects exerted on JAM-expressing leukocytes within the breast tumour microenvironment may also have relevance to cancer progression. For instance, JAMs are known to play important roles in the transendothelial migration of monocytes, which differentiate into macrophages once in the breast tissue. Accordingly, a function-blocking monoclonal antibody directed against JAM-A (BV11) has been described to inhibit spontaneous and chemokine-induced monocyte transmigration both in vitro and in vivo (Martin-Padura et al., 1998). Furthermore, treatment of mice with a monoclonal antibody directed against JAM-C has been shown to reduce macrophage infiltration into a murine lung tumour model (Lamagna et al., 2005), and to promote reverse transmigration of monocytes back into the bloodstream from inflamed tissue sites. Given the existence of a breast tumour-promoting paracrine loop between epidermal growth factor secreted by macrophages and colony-stimulating factor-1 secreted by tumour cells (Goswami et al., 2005), this implies that JAM-based regulation of monocyte transmigration could have a profound and self-amplifying influence on macrophage trafficking and tumour proliferation. In the context of leukocytes other than monocytes/macrophages, many studies have implicated JAMs in the functional control of neutrophil transmigration across both epithelial (Zen et al., 2004; Zen et al., 2005) and endothelial (Sircar et al., 2007; Woodfin et al., 2007) barriers. As yet, nothing is known about JAM-dependent events that might control neutrophil trafficking or activation within the breast tissue, despite the fact that neutrophils accumulate in highly aggressive inflammatory breast cancers. In other tissues, JAM-A has been shown to be required for efficient infiltration of neutrophils into the inflamed peritoneum or into the heart upon ischemia-reperfusion injury, as evidenced by increased adhesion and impaired transmigration in JAM-A-deficient mice (Corada et al., 2005). Interestingly, in this model JAM-A expression on the neutrophil appears to be more important than that on the endothelium, since selective loss of endothelial JAM-A did not phenocopy the transmigration deficits (Corada et al., 2005). In addition, soluble JAM-A shed from cultured endothelial cells has been shown to reduce in vitro transendothelial migration of neutrophils and to decrease neutrophil infiltration in vivo (Koenen et al., 2009).
Recent evidence also demonstrates that family members other than JAM-A can participate in leukocyte trafficking, with JAM-C-overexpressing mice exhibiting an increased accumulation of leukocytes at inflammatory sites or during ischaemia/reperfusion injury, while JAM-C neutralization or loss reduces leukocyte recruitment in models of lung, kidney or muscular inflammation (Aurrand-Lions et al., 2005; Scheiermann et al., 2009). Finally, leukocytic expression of JAM-L has been shown to promote attachment to endothelium (Luissint et al., 2008), and functional inhibition of JAM-B is reported to decrease migration of peripheral blood lymphocytes across cultured human umbilical vein endothelial cells (HUVECs) (Johnson-Leger et al., 2002). Collectively, these data highlight an important role for JAMs in the migration of immune cells across endothelia, a mechanism that could be hijacked by JAM-overexpressing cancer cells as they leave the breast and invade into blood vessels.

JAM proteins and the regulation of stromal cells

The final grouping of breast cancer microenvironmental cells which will be discussed are stromal cells, broadly including fibroblasts and myoepithelial cells. Although little is known about JAM-mediated control of breast stromal cells specifically, insights from other cellular systems suggest that this multifunctional family of proteins could have a hand in influencing the mesenchymal element of tumourigenic processes. JAM-C expression has been noted on the surface of primary fibroblasts derived from human lung, skin and cornea (Morris et al., 2006). The same authors observed JAM-A and JAM-C expression on the widely-studied NIH-3T3 fibroblast cell line. Interestingly, high JAM-C expression on synovial fibroblasts has been associated with the pathology of murine experimental arthritis, and JAM-C antagonism has been shown to have functional benefits in reducing the severity of inflammation (Palmer et al., 2007). An immunohistochemical study in human arthritis has also demonstrated JAM-C expression on the synovial fibroblasts of both osteoarthritis and rheumatoid arthritis patients, in conjunction with JAM-C-dependent adhesion of myeloid cells to these fibroblasts (Rabquer et al., 2008). Enhanced expression of JAM-A has also been described in the skin of patients with the inflammatory disorder systemic sclerosis, in comparison to that on normal dermal fibroblasts (Hou et al., 2009). Aside from facilitating adhesion of leukocytic cells to stromal elements such as fibroblasts, another way in which JAM family members could influence the breast cancer microenvironment is by altering the proliferation of fibroblasts or other accessory cells. JAM-A has been reported to be required for the proliferation of vascular smooth muscle cells, since JAM-A gene silencing exerted anti-proliferative effects in this system (Azari et al., 2010). Whether this occurs through direct or indirect mechanisms remains uncertain, particularly in light of conflicting evidence in intestinal epithelial cells suggesting that JAM-A expression restricts proliferation by inhibiting Akt-dependent Wnt signalling (Nava et al., 2011). However, functional inhibition of the extracellular domain of JAM-A has been shown to inhibit bFGF-induced endothelial cell proliferation, and overexpression of JAM-A was also found to increase endothelial cell proliferation. Accordingly, very recent evidence has suggested that JAM-A expression exerts a negative tone on apoptosis in the mammary epithelium (Murakami et al., 2011).
It is likely that processes as crucial as proliferation are strictly regulated in a spatial manner, which could account for the tissue-specific differences observed in the little available evidence to date. Whether or not JAM family members influence the proliferation of breast stromal cells like fibroblasts and the myoepithelium remains to be investigated. However, it is tempting to speculate that the acquisition of a proliferative phenotype in tumours may be co-ordinately linked to the pro-migratory "mesenchymal" phenotypes observed in many aggressive, poorly-differentiated breast cancers, to which evidence has already linked members of the JAM family. Co-culture models, which better recapitulate the complexity of the breast cancer microenvironment than mono-cultures (Holliday et al., 2009), may offer promise in dissecting the relative cellular contributions of JAMs to tumour progression at a reductionist level.

JAMs as novel potential drug targets in breast cancer

The pleiotropic roles of JAM family members in regulating both the breast epithelium and cells of the microenvironment may suggest JAMs as novel therapeutic targets for the future management of breast cancer. Whether the aim is to block migratory behaviour, angiogenesis or proliferation, or to promote polarisation and differentiation, selective pharmacological targeting of JAM molecules could prove particularly useful in cancers that overexpress one or more JAMs. This naturally pre-supposes that JAMs are causally involved in the disease process rather than simply acting as passive biomarkers, a point that remains to be solidified. However, irrespective of the last caveat, another facet worth exploring is the potential of targeting JAMs to promote drug delivery. Since tight junctions (TJs) as a whole are primary regulators of paracellular transport across epithelial cells (Gonzalez-Mariscal et al., 2005), successful drug delivery may require modulation of TJ proteins to allow drug molecules to pass (Matsuhisa et al., 2009). However, disruption of TJ proteins for drug delivery purposes is a double-edged sword, given the risk of disrupting homeostatic mechanisms of polarity, differentiation and migration which are tightly regulated by TJs in normal tissues and whose dysregulation may itself promote tumourigenesis. As yet, there are no cancer therapies on the market which specifically target tight junctions. However, several tight junction proteins have been described as receptors for specific molecules or organisms, and as such, these might provide valid and novel targets for drug delivery. A particular precedent exists within the claudin family of TJ proteins, claudins-3 and -4 having been suggested as drug delivery targets since they act as the receptor for Clostridium perfringens enterotoxin (CPE). The ability of CPE to rapidly and specifically lyse cells expressing claudin-3 or -4 could potentially be exploited in the treatment of breast cancers over-expressing these proteins (Katahira et al., 1997; Morin, 2005). Sub-lytic doses of CPE could alternatively be used to compromise TJs, thus enhancing the influx of drug molecules across the epithelium. This could be of particular benefit in accessing hypoxic tumour cores, around which the tumour cells may be very tightly packed and thus relatively inaccessible to chemotherapeutic drugs.
To date, CPE administration has been shown to reduce the growth of claudin-4-overexpressing pancreatic tumour cells (Michl et al., 2001; Michl et al., 2003), but its potential use in other cancer settings remains an open question. How JAM molecules might be therapeutically targeted also remains an unanswered question, but one could predict value in using monoclonal antibodies or small-molecule inhibitors to block the signalling functions which contribute to processes such as migration and angiogenesis. However, to date, the role of JAMs as chemotherapeutic targets (or even prognostic/predictive biomarkers) in the clinical setting of breast cancer has yet to be elucidated and validated. Following the lead of JAM-A as a potential biomarker and therapeutic target for breast cancer (McSherry et al., 2009; Gotte et al., 2010; McSherry et al., 2011; Murakami et al., 2011), we speculate that this will be a fruitful area of research in the future.

Conclusion

To conclude, breast cancer remains a leading cancer worldwide (Jemal et al., 2008), and the search for new targets of prognostic and therapeutic relevance will continue, particularly in this era where semi-personalised medicine is becoming more of a likelihood than an aspiration. This chapter has attempted to summarize the known roles of the JAM family in controlling cell adhesion, polarity and barrier function, and their emerging roles in controlling functional behaviours within cells of the breast tumour microenvironment which promote cancer progression. Finally, it introduced the topic of JAMs as potential drug targets in breast cancer, whether to directly influence JAM-dependent oncogenic signalling or indeed to interfere with cell-cell adhesion for the purposes of enhancing drug delivery. Continued expansion of our understanding of the cell and molecular biology of JAMs and their roles in tumour progression may open up new horizons supporting their evaluation as breast cancer biomarkers and drug targets of the future.
Perceived cognitive performance in off-prescription users of modafinil and methylphenidate: an online survey

Abstract

Introduction: Modafinil and methylphenidate are used off-prescription for cognitive enhancement in healthy individuals. Such use is often reported in online surveys, but it is unclear whether drug use for cognitive enhancement is motivated by perceived poor cognitive performance or a desire to improve good cognitive performance. The current study investigated whether off-prescription users of modafinil and methylphenidate differed in their self-perceived cognitive performance from people who do not take these drugs.

Method: An online survey targeting forum sites assessed self-perceived cognitive function via the Adult Attention Deficit/Hyperactivity Disorder Self-Report Scale, the Cognitive Failures Questionnaire, and the General Procrastination Scale.

Results: There were 249 respondents, of whom 43% reported no use of modafinil and methylphenidate (the control group) and 58% reported use of one or both drugs without a prescription for cognitive enhancement. This created an independent samples design with three groups. On both the Adult Attention Deficit/Hyperactivity Disorder Self-Report Scale and the General Procrastination Scale, modafinil and methylphenidate users reported higher scores than the control group, indicating higher levels of perceived inattention and procrastination. Scores on the Cognitive Failures Questionnaire indicated that modafinil and methylphenidate users rated themselves as having fewer cognitive failures than controls.

Conclusion: These findings suggest that at least some reported off-prescription users of modafinil and methylphenidate may be seeking to reduce the impact of self-perceived poorer performance, particularly in forms of cognition that are likely to impact on self-directed or self-motivated work.

INTRODUCTION

Cognitive enhancing drugs (CEDs) are prescribed for conditions such as attention deficit hyperactivity disorder (ADHD) and dementia (Lanni et al., 2008; Outhoff, 2016). The effectiveness of CEDs such as methylphenidate in treating ADHD is well-recognized (Castells et al., 2011; Van der Oord et al., 2008). Benefits of CEDs in dementia have also been noted in patients with mild to severe dementia (Rattinger et al., 2013). However, the use of such drugs by healthy individuals for enhancing nonclinically impaired functions is growing internationally (Dursun et al., 2019; McCabe et al., 2005; Teodorini et al., 2020).

Of the many CEDs available on the market, the two most discussed and noted for their positive effects on cognition in healthy adults are methylphenidate and modafinil (for reviews, see Dubljević & Ryan, 2015; Repantis et al., 2010). Motivations for using CEDs include to improve concentration, to reduce fatigue and to "get more done" (DeSantis et al., 2008; Faraone et al., 2020; Rabiner et al., 2009; Teodorini et al., 2020). However, less is known about whether CED use is motivated purely by enhancement of already good performance or by a desire to improve poorer cognitive functions. Answers to this question could help inform both the ethical debate about fairness and access to CEDs (Sahakian & Morein-Zamir, 2011) and efforts to direct CED users to support for problems with attentional control. Therefore, the current online self-report study investigated whether off-prescription users of modafinil and methylphenidate may be self-medicating for perceived poor cognitive functions.
The off-prescription use of modafinil and methylphenidate has been reported in schools, universities and in the workplace (Aikins, 2011; Leon et al., 2019; Singh et al., 2014; Vargo & Petróczi, 2016). Modafinil and methylphenidate have different pharmacological profiles, clinical uses, and potentials for abuse (Ballon & Feifel, 2006; Jasinski, 2000; Minzenberg & Carter, 2008; Wood et al., 2014). Despite this, there has been a tendency to treat off-prescription CED users as a homogenous group, even though they are using a wide variety of substances (Rubin-Kahana et al., 2020); moreover, reports of the benefits obtained from CEDs differ between users of modafinil and methylphenidate (Teodorini et al., 2020). A further aim of the current study was, therefore, to explore whether users of modafinil and methylphenidate differ from both each other and non-CED users in their self-perceived cognitive performance.

Undiagnosed problems with attention have been identified in community samples, with high levels of ADHD symptoms reported by 10% of college students without a formal diagnosis (Garnier-Dykstra et al., 2010), but only a few studies to date have explored whether this figure is higher among people who use CEDs off-prescription (Arria et al., 2011; Garnier-Dykstra et al., 2010; Ilieva & Farah, 2019; Peterkin et al., 2011; Poulin, 2007). Several of these studies (e.g., Arria et al., 2011; Peterkin et al., 2011) used the World Health Organization (WHO) Adult ADHD Self-Report Scale (ASRS) (Kessler et al., 2005) to assess ADHD symptoms. The Adult ASRS is a standardized and well-validated tool for assessing adult ADHD symptoms (Gray et al., 2014). Peterkin et al. (2011) reported similar associations between ADHD symptoms and off-prescription stimulant use, as did Rabiner et al. (2009), who found that nonmedical use of stimulants was associated with symptoms of inattention rather than hyperactivity. Ilieva and Farah (2019) also reported that the off-prescription use of ADHD medication, including methylphenidate, related positively to self-perceived attention problems measured by the Barkley and Murphy ADHD Symptom Checklist (Murphy & Barkley, 1995).

These links between CED use and self-perceived symptoms of ADHD arise from research that has mostly been conducted on student populations. To address this concern, therefore, the current study sought to extend this work through the recruitment of a more diverse sample via an online survey aimed at forum users across the world. In addition to investigating the link between CED use and ADHD symptomology, the current study also explored potential differences between CED users and nonusers in procrastination and everyday, real-world, cognitive lapses. Given that previous work in this area has already suggested that ADHD symptomology may be higher among student CED users (Arria et al., 2011; Ilieva & Farah, 2019; Peterkin et al., 2011; Rabiner et al., 2009), exploring whether these differences extend to other areas of real-world cognitive control would help inform the understanding of why CEDs are used.
The frequency with which an individual makes absent-minded errors has been found to vary due to individual differences and includes perceptual, action, and memory failures (Broadbent et al., 1982; Unsworth et al., 2012). Everyday cognitive slips are experienced by everyone, but these slips occur more frequently in individuals with conditions affecting cognition, such as dyslexia (Smith-Spark et al., 2004), ADHD (Kim et al., 2014), and Parkinson's disease (Poliakoff & Smith-Spark, 2008). These three studies used the 25-item Cognitive Failures Questionnaire (CFQ) (Broadbent et al., 1982) to identify the frequency of cognitive failures within the past 6 months, and the CFQ has good external validity (e.g., de Paula et al., 2017; Ekici et al., 2016; Kim et al., 2014; Poliakoff & Smith-Spark, 2008; Smith-Spark et al., 2004; Wallace & Vodanovich, 2003). If off-prescription users of modafinil and methylphenidate are self-medicating for poor cognitive performance, the CFQ may capture how such failures impact on everyday life.

In addition to inattention (as measured via self-reports of ADHD symptoms) and cognitive failures, a common barrier to accomplishing tasks is procrastinatory behavior, defined by Steel (2007) as a conscious delay of a planned course of action, even though this delay is likely to have negative outcomes. Procrastination is noted as a problem particularly for both students (Rabin et al., 2011) and workers (Nguyen et al., 2013). As common motivations for using CEDs include to "avoid procrastination" and "to get more done" (Teodorini et al., 2020), "increased academic performance" (Fond et al., 2016) and "productivity" (Novak et al., 2007; Sharif et al., 2021), it remains an open question as to whether CED users report higher levels of procrastination than a comparable group of non-CED users. The 20-item General Procrastination Scale (GPS) (Lay, 1986) was developed to assess trait procrastinatory behavior. Using this scale, Ferrari and Sanders (2006) compared the self-perceived levels of procrastination in patients with ADHD and healthy controls and found significantly higher rates of procrastination in the ADHD group. Niermann and Scheres (2014) recruited university students who had tested positive on a self-report scale for ADHD and reported that ADHD-related symptoms of inattention, but not hyperactivity or impulsivity, were positively correlated with procrastination. Procrastination has, therefore, not only been associated with ADHD, but overcoming procrastination has also been noted as a motivation for using CEDs (Aikins, 2011; Teodorini et al., 2020).
Three questionnaires, the ASRS (Kessler et al., 2005), the CFQ (Broadbent et al., 1982), and the GPS (Lay, 1986), were therefore used in the current study to investigate whether modafinil and methylphenidate users may perceive themselves as experiencing cognitive problems or failures and may, therefore, be using modafinil and methylphenidate to alleviate these. Since higher rates of recreational drug use have been reported among CED users (McCabe et al., 2006) and some recreational drugs can impact upon cognitive performance (e.g., Indlekofer et al., 2009), questions were also asked about rates of use of the three most used recreational drugs, namely nicotine, alcohol, and cannabis (Hultgren et al., 2021). It was hypothesized that, compared with a non-CED-using group, CED users would self-report worse performance on the ASRS and, given the link between ADHD and procrastination, would self-report worse performance on the GPS. Additionally, as proposed earlier, if CED users were to be self-medicating for poor cognitive performance, it is likely that this would be reflected in their everyday life; therefore, it was also hypothesized that CED users would self-report worse performance on the CFQ compared with non-CED users.

Respondents

A convenience sample of CED users and non-CED-using controls was recruited via online forum sites.

Materials

Qualtrics XM survey software was used to design and administer the survey. In total, it contained 99 questions (although, depending on their responses, the participants were not required to answer every question). The estimated response time varied from 8 to 30 min. The survey was divided into a number of sections (detailed below).

Demographics

This section covered age, gender, nationality, and education details, including whether respondents were currently engaged in study (vocational, continuing professional development and "high school/A Level" in addition to university degrees).

Cannabis, nicotine, and alcohol use

This section comprised three subsections relating to cannabis, nicotine, and alcohol use, respectively. The questions in this section focused on current, frequent use of cannabis and nicotine. For the purpose of analyzing the data collected on cannabis use, a clearer understanding of the frequency of use of cannabis was required. Therefore, responses to the question "in the past 6 months how regularly have you taken cannabis" were condensed: "everyday/almost everyday," "three to four times per week," and "once per week" were grouped into the variable "once or more per week." The responses "once or twice per month" and "up to three times in total" were grouped into the variable "less than once per week," and the response "none" was renamed "none in the past 6 months" for clarity.

In Section 3.2.4, the Alcohol Use Disorders Identification Test (AUDIT; Babor et al., 1992) was used to identify problematic alcohol use. The AUDIT is a 10-item questionnaire created by the World Health Organization (WHO) as a brief screening tool to identify individuals with hazardous and harmful alcohol use behavior (Babor et al., 2001). Questions are presented with a 5-point Likert scale response and total scores range from 0 to 40. Scores between 8 and 15 represent a medium level of self-perceived alcohol problems, and scores of 16 or above represent a high level of alcohol problems (Babor et al., 2001).

Modafinil and methylphenidate use

These two sections were devoted to modafinil and methylphenidate use. They asked about age of first use, doses used, and the usual route of administration of both drugs.
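To make the coding steps just described concrete, the following is a minimal Python sketch (not the authors' actual workflow, which used Qualtrics exports and SPSS) of how the cannabis-frequency responses could be condensed into the three grouped variables and how AUDIT totals could be banded using the WHO cut-offs cited above. The column names are hypothetical, since the paper does not give the export labels.

import pandas as pd

# Hypothetical column names; the real Qualtrics export labels are not given in the paper.
df = pd.DataFrame({
    "cannabis_freq": ["everyday/almost everyday", "once per week",
                      "none", "once or twice per month"],
    "audit_total": [14, 6, 20, 9],
})

# Condense the six cannabis-frequency responses into the three groups used in the paper.
cannabis_map = {
    "everyday/almost everyday": "once or more per week",
    "three to four times per week": "once or more per week",
    "once per week": "once or more per week",
    "once or twice per month": "less than once per week",
    "up to three times in total": "less than once per week",
    "none": "none in the past 6 months",
}
df["cannabis_grouped"] = df["cannabis_freq"].map(cannabis_map)

# Band AUDIT totals (0-40): 8-15 indicates a medium level of alcohol problems
# and 16 or more a high level, per the WHO cut-offs cited in the paper.
def audit_band(total):
    if total >= 16:
        return "high"
    if total >= 8:
        return "medium"
    return "low"

df["audit_band"] = df["audit_total"].apply(audit_band)
print(df)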
2.6 Adult ADHD Self-Report Scale (ASRS-V1.1)

The ASRS (Kessler et al., 2005) uses a 5-point Likert scale response to indicate the frequency of occurrence of symptoms within the past 6 months, with scores on each item ranging from 0 to 4 (never = 0, rarely = 1, sometimes = 2, often = 3, and very often = 4). The ASRS comprises 18 items; the first six items of the scale (Part A) are designed to screen adults for ADHD and consist of three questions addressing inattention and three questions addressing hyperactivity and impulsivity. Frequency scores for item 7 onward (Part B) are intended to provide additional information rather than serving as a diagnostic tool. Items 1-4 and 7-11 addressed inattention and all other items addressed hyperactivity and impulsivity. In the current study, the items were presented without any indication of what the questionnaire was measuring, so respondents were unaware that they were completing an ADHD questionnaire. The ASRS has been reported to have good internal consistency and test-retest reliability, with a Cronbach's alpha of .885, followed by a 2-week test-retest Cronbach's alpha of .878 (Kim et al., 2013). The Cronbach's alpha for the current study was .85.

Cognitive Failures Questionnaire (CFQ)

Total scores on the CFQ range from 25 to 100, with higher scores indicating higher levels of susceptibility to cognitive slips. The questionnaire has good test-retest reliability. Broadbent et al.'s (1982) study reported two groups: one retesting after 21 weeks gave a correlation of r = .824 (n = 57) and the other retesting after 65 weeks gave a correlation of r = .803 (n = 32). Additionally, Broadbent et al. (1982) reported the results of a sample of 98 women between the ages of 20 and 40 years, with the coefficient alpha in this case being .89, demonstrating good internal consistency. The Cronbach's alpha for the current study was .85. Broadbent et al. (1982) argued that the CFQ provides a measure of general cognitive failure, which is important for external validity, as the individual's view of themselves is also shared by those who know them. While Broadbent et al. (1982) did consider whether a total score or individual item score would be more appropriate by performing a factor analysis, the factors found could not be replicated and it was concluded that the CFQ's structure is unidimensional, a total score of all items thus being used as representative. Later studies have also attempted to identify factors within the CFQ (e.g., Larson et al., 1997; Pollina et al., 1992; Wallace et al., 2002), but there has been no commonality between the studies in the factors so identified. That said, the only analysis that was retested and confirmed by confirmatory factor analysis (Wallace, 2004) was that of Wallace et al. (2002), which found four factors: Memory (relating to memory errors and forgetfulness), Distractibility (relating to disruption of internally focused attention), Blunders (of a social nature), and Names. Wallace's (2004) factors were used in the current study to resolve an issue with missing data, as explained in Section 2.9.

General Procrastination Scale (GPS)

The 20-item GPS (Lay, 1986) was also presented. Again, a 5-point Likert scale was used, with each response being scored as "extremely untrue" = 1, "moderately untrue" = 2, "neutral" = 3, "moderately true" = 4, and "extremely true" = 5, although 10 of the items are reverse scored. Total scores range from 20 to 100, with higher scores indicating higher levels of self-perceived procrastination. The GPS is a unidimensional questionnaire, and this was confirmed by Sirois et al. (2019); Lay (1986) reported a coefficient alpha of .82, and Ferrari (1989) reported good test-retest reliability of .80. The Cronbach's alpha for the current study was .88.

Analysis

An independent-sample design with three groups was used: a control group of respondents who reported never taking modafinil or methylphenidate, a "modafinil-only" group who reported taking modafinil but not methylphenidate, and a methylphenidate group who reported taking methylphenidate and may or may not have taken other CEDs (including modafinil). Henceforth, these groups are referred to by their names (modafinil-only, methylphenidate, and control). These groups were formed post hoc, based on the self-reports provided by respondents.
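Before turning to the handling of missing data, the scoring rules just described can be summarised in a short sketch. The ASRS part follows directly from the item assignments above (Part A = items 1-6; inattention = items 1-4 and 7-11); for the GPS, the set of reverse-keyed item indices below is a placeholder, since Lay's (1986) published key is not reproduced in this paper.

import numpy as np

def score_asrs(responses):
    # 18 items scored 0-4 (never = 0 ... very often = 4); higher = more symptoms.
    r = np.asarray(responses)
    assert r.shape == (18,) and r.min() >= 0 and r.max() <= 4
    inatt_idx = [0, 1, 2, 3, 6, 7, 8, 9, 10]  # items 1-4 and 7-11 (zero-based)
    return {
        "part_a": int(r[:6].sum()),                 # screening items 1-6
        "inattention": int(r[inatt_idx].sum()),
        "hyperactive_impulsive": int(r.sum() - r[inatt_idx].sum()),
        "total": int(r.sum()),
    }

# Placeholder indices -- substitute Lay's (1986) actual reverse-scoring key here.
GPS_REVERSED = {1, 3, 5, 7, 9, 11, 13, 15, 17, 19}

def score_gps(responses):
    # 20 items scored 1-5; ten items are reverse-scored so that higher totals
    # (range 20-100) always mean more self-perceived procrastination.
    total = 0
    for i, r in enumerate(responses):
        assert 1 <= r <= 5
        total += (6 - r) if i in GPS_REVERSED else r
    return total

Reverse-scoring with 6 - r simply mirrors a 1-5 response around the scale midpoint, the usual convention for Likert items keyed in the opposite direction.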
Due to an error in Qualtrics, item 1 of all three questionnaires was not answered by some participants. For the ASRS, the data from 22 participants who did not provide a response to the first item (all from the control group) were removed from the analysis, because the analysis requires responses to all items. For the CFQ, the four-factor model proposed by Wallace et al. (2002) was used and the mean score of all other items of the Distractibility factor was used to replace the missing score for 106 participants (all from the control group). A factor analysis was then performed to assess the robustness of this approach. For details of the factor analysis and scree plot, see Supplementary File S2. For the GPS, item 1 was removed for all participants to ensure equivalency between groups, and the scores reported below are based on the remaining 19 questions. Different strategies were used to resolve this Qualtrics error because different scoring methods for the questionnaires and different questionnaire factor structures required different approaches to be taken for each questionnaire.

Data relating to performance on the ASRS, CFQ, and GPS were analyzed using SPSS software, version 21. Kruskal-Wallis tests (and post hoc Mann-Whitney U tests) were performed on the complete ASRS, the ASRS inattentive scores and the hyperactive/impulsive scores under the three conditions of group type (detailed above), and on the CFQ and GPS scores under the three conditions of group type. Kruskal-Wallis and post hoc Mann-Whitney U tests were also performed on self-perceived nicotine and alcohol use under the three conditions of group type. A log-linear test was conducted on educational status, and chi-square analyses were conducted on cannabis use under the three conditions of group type.

Procedure

Ethical approval was granted for this study by the School of Applied Sciences Research Ethics Committee at London South Bank University (SAS1733). With permission from the forum moderators, an advertisement and link to the survey were posted on the selected forum sites. The advertisement detailed the nature of the survey and invited individuals to participate if they had taken either modafinil or methylphenidate, and also if they had not taken either drug. Following a briefing and the provision of informed consent, questions relating to demographic information were presented first, followed by questions asking about modafinil and methylphenidate use and then the ASRS, CFQ, and GPS.

Participant characteristics

The final sample size was 249, following the removal of the data from 76 respondents who reported having been prescribed either modafinil or methylphenidate. A total of 35% (N = 86) of respondents reported that they took modafinil only and 23% (N = 57) indicated that they took methylphenidate and may or may not have taken other CEDs (only 8%, N = 19, reported taking methylphenidate only). The remaining 43% (N = 106) of respondents reported that they had not taken either modafinil or methylphenidate.

Gender, age, and nationality

The majority of both the modafinil-only group (79%, N = 68) and the methylphenidate group (77%, N = 44) identified as being male, whereas the control group reported roughly equal numbers of males (55%) and females. There were roughly equal numbers of modafinil-only respondents who reported being North American (28%, N = 24) or British (27%, N = 23), whereas a greater number of the methylphenidate group reported being North American (35%, N = 20) compared with those who reported being British (16%, N = 9). There was also a greater number of the control group who reported being North American (36%, N = 38) compared with those who reported being British (13%, N = 14). See Table 1 for details of all reported nationalities.

Current educational status

In the modafinil-only group, 42% (N = 36) reported that they were studying for a university degree. A similar pattern was found with the methylphenidate group: 39% (N = 22) reported that they were studying for a university degree. The control group showed a slightly lower number, with 20% (N = 21) reporting that they were studying for a degree.
A log-linear test was conducted for group type (modafinil-only, methylphenidate, and control groups) and currently studying for a qualification, and the analysis produced a final model with a likelihood ratio of χ2 = 0.00, p = n.s., indicating that the model fitted the data well. The model indicated that there was no significant two-way interaction between group type and current university study status, χ2(2) = 4.86, p = .088. Full details of respondents reporting their current studies can be found in Table 2.

Cannabis and nicotine

Please see Supplementary File S3 for cannabis and nicotine findings.

Alcohol

Total scores on the AUDIT were highest in the control group, with a mean of 13.96 (SD = 4.71), followed by the methylphenidate group (mean = 7.30, SD = 5.08) and the modafinil group (mean = 6.

Modafinil

The mean age of first use of modafinil stated by reported modafinil users was 28 years (SD = 9.24), with a range of 50 years (18-68). The only route of administration reported by this group was oral ingestion. There was an almost equal split between those who reported always taking the same dose (52%, N = 45) and those who reported not always taking the same dose (48%, N = 41). Frequency of use of modafinil can be found in Supplementary File S4. Full details of reported dosage levels can be found in Table 3, and full details of the maximum and minimum reported dose of modafinil can be found in Table 4.

Methylphenidate

The mean age of first use of methylphenidate was 21 years (SD = 6.15), with a range of 29 years (13-42). The majority of respondents reported that they did not always take the same dose (70%, N = 40). Dosage levels of reported methylphenidate use can be found in Table 5, and full details of the maximum and minimum reported dose of methylphenidate can be found in Table 6. The most commonly reported route of administration was swallowing a pill, although 14% reported snorting methylphenidate. The most commonly reported formulation of methylphenidate was extended release. Full details can be found in Table 7.

3.5 Self-reports of cognition

3.5.1 Adult ADHD Self-Report Scale (ASRS)

On Part A, 34% (N = 29) of the modafinil-only respondents, 49% (N = 28) of the methylphenidate respondents, and 11% (N = 12) of the control respondents scored at the level indicating symptoms highly consistent with ADHD.

A Kruskal-Wallis test was performed on the complete ASRS to test for the effect of group type (modafinil-only, methylphenidate, and control). The differences between the mean ranks of 126.32 (the methylphenidate group), 119.84 (the modafinil-only group), and 95.28 (the control group) were significant, H(2) = 9.58, p = .008. Post hoc Mann-Whitney U tests revealed that both the scores of the modafinil-only group, U = 2707.50, N modafinil-only = 85, N control = 82, p = .012, and the scores of the methylphenidate group, U = 1702.50, N methylphenidate = 57, N control = 82, p = .006, were significantly higher than the scores of the control group. There was no significant difference in scores between the modafinil-only group and the methylphenidate group, U = 2269.00, N modafinil-only = 85, N methylphenidate = 57, p = .521.
A Kruskal-Wallis test was performed on the ASRS inattentive scores to test for the effect of group type. The differences between the mean ranks of 130.32 (the methylphenidate group), 127.39 (the modafinil-only group), and 86.37 (the control group) were significant, H(2) = 22.49, p < .001. The corresponding test on the hyperactive/impulsive scores was not statistically significant, H(2) = 1.41, p = .494.
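For readers who wish to reproduce the shape of this analysis (the authors used SPSS, version 21), a minimal SciPy sketch of a Kruskal-Wallis test followed by post hoc Mann-Whitney U comparisons is given below. The arrays are randomly generated toy data matching the reported group sizes, not the study's responses, and the paper does not state whether any correction for multiple comparisons was applied.

import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(0)
# Toy ASRS totals for the three groups (sizes as reported in the paper).
control = rng.integers(5, 45, size=82)
modafinil_only = rng.integers(10, 55, size=85)
methylphenidate = rng.integers(10, 55, size=57)

h, p = kruskal(control, modafinil_only, methylphenidate)
print(f"Kruskal-Wallis: H(2) = {h:.2f}, p = {p:.3f}")

# Follow a significant omnibus test with pairwise Mann-Whitney U tests.
pairs = [("modafinil-only vs control", modafinil_only, control),
         ("methylphenidate vs control", methylphenidate, control),
         ("modafinil-only vs methylphenidate", modafinil_only, methylphenidate)]
for label, a, b in pairs:
    u, p_pair = mannwhitneyu(a, b, alternative="two-sided")
    print(f"{label}: U = {u:.2f}, p = {p_pair:.3f}")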
DISCUSSION

This study investigated the self-perceived cognitive performance of self-identified off-prescription users of modafinil and methylphenidate among an international sample of individuals who were visitors of online forums. The results suggest that both CED user groups reported significantly greater symptoms of inattention and procrastination and significantly lower cognitive failures than the control group.

Scoring on both Part A of the ASRS and the whole questionnaire revealed that both the modafinil-only group and the methylphenidate group reported symptoms highly consistent with ADHD, but this pattern was not found for the control group. The modafinil-only and methylphenidate group differences appeared to be mostly driven by items that measure inattention, not hyperactivity. It should be noted that these are self-reported perceptions of respondents and, as such, may over-reflect or under-reflect prevalence rates of ADHD. However, this finding does suggest that reported modafinil and methylphenidate respondents feel that they have difficulties with attention that may be similar to those experienced by people with a diagnosis of ADHD.

This finding is consistent with Peterkin et al. (2011) and Arria et al. (2011), while Francis et al. (2022) have also found that symptoms of inattention significantly predicted prescription stimulant misuse in college students with and without a diagnosis of ADHD. This result partially supports the hypothesis that self-perceived modafinil and methylphenidate users would score significantly higher on the ASRS compared with controls and is consistent with the findings reported by Arria et al. (2011). Arria et al. (2011) argued that problems with inattention, rather than hyperactivity and impulsivity, are more likely to be associated with the nonprescription use of stimulants. They based this conclusion on finding a relationship between inattention and academic performance difficulties, but no relationship between hyperactivity/impulsivity and academic performance. Similarly, Rabiner et al. (2009) also reported an association between off-prescription use of stimulants and symptoms of ADHD: students scoring high on attention difficulties were almost twice as likely to be nonmedical users of ADHD medications as students scoring lower on attention difficulties. Additionally, hyperactive/impulsive symptoms were not found to predict nonmedical ADHD medication use.

Analysis of the scores on the GPS revealed, as predicted, that both the modafinil-only and methylphenidate groups scored significantly higher than the control group. This part of the hypothesis was therefore supported. Both Ferrari and Sanders (2006) and Niermann and Scheres (2014) reported associations between ADHD symptomology and self-reported levels of procrastination, but neither research team looked specifically at CED-using groups. Niermann and Scheres (2014) also reported that procrastination was only associated with ADHD symptoms of inattention and not hyperactivity and impulsivity. Further to this, in their event-related potential study with low and high academic procrastinators, Michalowski et al. (2020) found that procrastinators have specific deficits in attention that can be observed at a neuronal level. It seems that the CED users who participated in the current study are self-reporting cognitive performance that is in keeping with associations between inattention and procrastination, but not hyperactivity and procrastination.

In contrast, however, both the modafinil-only and methylphenidate groups scored significantly lower on the CFQ compared with the control group. If the CFQ is considered in relation to the factors identified by Wallace et al. (2002), which are memory, distractibility, blunders, and memory for names, the lower scores would suggest that off-prescription users of modafinil and methylphenidate do not perceive themselves as having particular problems overall with these everyday cognitive failures. In fact, it would suggest that these off-prescription CED users perceive fewer problems with cognitive failures than other online forum users. One possible explanation is that those who may be self-medicating might be doing so for problems with attention but not for the problems which the CFQ tests for, given that cognitive failures are considered to cover all conceivable everyday errors of memory, attention, and language (e.g., Broadbent et al., 1982; Unsworth et al., 2012). Self-report questionnaires focusing on attention, such as the Everyday Life Attention Scale (Groen et al., 2018), may be better attuned to probing these issues. The higher rates of alcohol use among the controls may also contribute to their higher scores on the CFQ, although previous research has linked increased cognitive failures among heavy drinkers more to the experience of withdrawal from alcohol than to its use per se (Carrigan & Barkus, 2016).

There were also some differences between CED users and nonusers in their reported use of alcohol, nicotine and cannabis. Previous studies have often reported higher rates of illicit drug use among CED users (McCabe et al., 2006), so questions on rates of illicit drug use were included to see if CED use is related to increased use of common recreational drugs. CED users were more likely to report being daily users of nicotine and higher lifetime cannabis use, and their lower scores on the AUDIT suggest less frequent or less potentially problematic use of alcohol. The more frequent self-perceived use of nicotine could be expected among CED users, as nicotine can also act as a cognitive enhancer and might therefore be used for this very same purpose by CED users (Heishman et al., 2010). The lower scores on the AUDIT, however, suggest that CED users may show more restraint in the use of a drug that is known to have cognitively impairing effects, particularly impacting on working memory, encoding and prospective memory (memory for delayed intentions; Winograd, 1988), abilities CED users may be trying to improve (Van Skike et al., 2019). Cannabis has well-documented acute effects on cognition, including as measured by the ASRS (Petker et al., 2020); therefore, current and frequent use could, in theory, lead to higher self-perceived inattention and procrastination. The CED users and controls, however, only differed in cannabis use over their lifetime.
While giving important insights into self-rated cognitive performance, the current study does, however, have a limitation. Self-reports of cognitive and behavioral performance are subjective perceptions that have been demonstrated to differ quite substantially from performance as measured by objective tests (Ilieva & Farah, 2019), and this must be taken into consideration when interpreting the results. That said, this may be because the two approaches measure performance at different levels (Stanovich, 2009): Stanovich (2009) argued that performance-based objective tests under laboratory conditions measure optimal performance, whereas self-report rating measures assess typical performance in everyday life. The data reported here do, however, suggest that perceived poorer cognitive performance may be related to CED use, but objective, laboratory-based tests would be needed to be certain that CED users genuinely do experience poorer attentional control and higher levels of procrastination.

CONCLUSION

These findings suggest that some reported off-prescription users of modafinil and methylphenidate (at least those who frequent online forums) are self-prescribing for perceived problems with inattention and procrastination and that these drugs are perceived as improving these problems. This finding has important implications, not only in informing policy, but also in highlighting the possible existence of a population of CED users who may struggle with undiagnosed ADHD. Able et al. (2007) and Okumura et al. (2021) have raised the point that individuals with undiagnosed ADHD manifest functional and psychosocial impairments which create a significant burden in their lives. Additionally, as argued by Scope et al. (2010), inattention may exist along a continuum and, as such, there may be CED-using individuals who suffer from subclinical levels of inattention and, by extension, procrastination. Further research is needed to determine whether these subjectively perceived problems are reflected via objective measures.
TABLE 1. Nationalities by group. (Percentages relate to group and not to the whole sample.)
TABLE 3. Dosage levels of reported modafinil use. (Percentages refer to group only.)
Dosage levels of reported methylphenidate use. (Percentages relate to group only.)
TABLE 5. (Percentages relate to group only.)
TABLE 7. (Percentages relate to group only.)

The difference was not statistically significant, H(2) = 1.41, p = .494. A Kruskal-Wallis test was then performed on the separate inattentive and hyperactive ASRS scores to compare the scores of the three participant groups. The differences between the mean ranks of the methylphenidate group (130.32), the modafinil-only group (127.39), and the control group (86.37) were significant, H(2) = 22.49, p < .001.
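The group comparisons reported above can be reproduced with standard nonparametric tests. The sketch below shows the procedure in Python with SciPy; the score arrays are synthetic placeholders (the study data are not reproduced here), so the statistics will not match the values quoted in the text.

```python
# A minimal sketch of the analysis described above: a Kruskal-Wallis omnibus
# test across the three group types, followed by post hoc Mann-Whitney U tests.
# The ASRS-like scores below are illustrative placeholders, not study data.
import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(0)
methylphenidate = rng.integers(20, 60, size=40)   # hypothetical ASRS totals
modafinil_only = rng.integers(20, 60, size=40)
control = rng.integers(10, 50, size=120)

H, p = kruskal(methylphenidate, modafinil_only, control)
print(f"Kruskal-Wallis: H(2) = {H:.2f}, p = {p:.3f}")

# Post hoc pairwise comparisons against the control group
for name, group in [("methylphenidate", methylphenidate),
                    ("modafinil-only", modafinil_only)]:
    U, p_pair = mannwhitneyu(group, control, alternative="two-sided")
    print(f"{name} vs control: U = {U:.0f}, p = {p_pair:.3f}")
```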
Connecting generalized parton distributions and light-cone wave functions

The relation of generalized (skewed) quark distributions to nucleon wave functions is discussed in the context of light-cone quantization.

INTRODUCTION

Hard (semi-)inclusive processes, like deep inelastic lepton-nucleon scattering, the Drell-Yan process, or W-production, have revealed much of the hadronic substructure in the last three decades. When the hard scale of the process becomes large enough, the resolution is sufficient to probe structures down to fractions of a femtometer, i.e. one is sensitive to the substructure of hadrons in terms of quarks and gluons. The interpretation of hard reactions in the context of the (QCD-improved) parton model relies on the concept of factorization: the hard, process-dependent parts are calculated according to the rules of perturbative QCD, and the soft, process-independent parts, which describe properties of the hadrons involved in the reaction, are parameterized as soft functions, namely parton distribution and fragmentation functions. Those soft functions are universal, i.e. once determined in one hard reaction they can be used in the same form as input to predict any other hard reaction. A strict formal definition as hadronic matrix elements of quark and gluon field operators can be given for the soft functions, and their logarithmic scale dependence is well understood in terms of DGLAP evolution. A simple probabilistic interpretation of parton distribution and fragmentation functions arises within the context of light-cone quantization. Those functions contain information on the structure of hadrons as seen by highly relativistic probes; the distribution functions are probabilities for the quanta of the independent dynamical fields, the so-called 'good' components of the quark and gluon fields. The operator combinations in the definitions of leading-twist soft functions are bilocal, the distance between the arguments of the field operators being light-like.

Another type of hadronic matrix element of quark fields is involved in the description of exclusive reactions. Elastic form factors, or transition form factors, are defined via matrix elements of currents, expressed in terms of the elementary (quark) fields, between different hadronic initial and final states. The combination of field operators here is local, but the momenta characterizing the initial and final states are different.

A generalization of the above types of hadronic matrix elements is involved in the description of exclusive reactions like Compton scattering and deeply virtual (hard) meson production. The operators contributing at leading order in a twist expansion are bilocal (with a light-like distance), and the matrix elements are off-forward (non-diagonal); the corresponding soft functions are the skewed parton distributions (SPDs). Much theoretical interest has turned to the investigation of SPDs, since they provide links between inclusive and exclusive quantities [1-10]. Moreover, skewed parton distributions give access to the angular momentum of partons inside hadrons, as was first recognized by Ji [11]. In this contribution the connection of quark SPDs to another fundamental quantity, the light-cone wave function of the nucleon, which describes how the nucleon is built up from partons in a specific configuration, is discussed.

GENERALIZED (SKEWED) PARTON DISTRIBUTIONS

Skewed parton distributions are a new tool for the investigation of hadronic substructure. They are closely related to the ordinary (forward) parton distributions and to form factors.
To make the relationship evident it is easiest to start from the operator definition of quark SPDs. The notation for the momenta involved is indicated in Fig. 1; the momentum transfer is denoted $\Delta = p' - p$. We use light-cone components of four-vectors defined as $a^\pm = (a^0 \pm a^3)/\sqrt{2}$ for an arbitrary vector $a^\mu$, and the argument of the second quark field is a shorthand for the point $(0, z^-, \mathbf{0}_\perp)$ on the light-cone. The SPDs depend on the fractional momentum of the emitted quark $x = k^+/p^+$, the 'skewedness' parameter $\zeta = -\Delta^+/p^+$, which denotes the difference of momentum fractions on the two quark lines, and the invariant momentum transfer squared $t = \Delta^2$. This definition has to be compared with the one for the (polarized) quark distribution function in a nucleon. In the forward case, i.e. for a matrix element diagonal in the nucleon momenta, there are no analogues of the helicity-flip terms $K_\zeta(x, t)$ and $L_\zeta(x, t)$. In contrast, helicity-flip terms do show up in the definition of the electromagnetic nucleon form factors, where initial and final nucleon momenta are different. The form factors $F_1(t)$, $F_2(t)$, $G_A(t)$, and $G_P(t)$ are the Dirac, Pauli, axial, and pseudoscalar form factors, respectively. By comparison, reduction formulas can be read off, where the formal forward limit $t \to 0$ implies $\zeta \to 0$. The lowest moments in $x$, from which the $\zeta$ dependence drops out, relate the SPDs to the form factors.

CONNECTION TO LIGHT-CONE WAVE FUNCTIONS

Like the ordinary (forward) parton distributions, SPDs acquire a simple interpretation as probability densities in terms of quanta of the 'good' components of the fields. The 'good' components of the quark fields are projected out as $\psi_+(z) = P_+\psi(z)$ with $P_+ = \gamma^-\gamma^+/2$, and have a momentum decomposition in which we use a collective notation for the dependence on the plus and transverse momentum components, and on the helicity, in the form $f(x_i, \mathbf{k}_{\perp i}, \mu_i) = f(\omega_i)$. The operators $b_i$ and $d_i^\dagger$ are the annihilator of the plus component of the quark fields and the creator of the plus component of the antifields, respectively. They fulfill the equal light-cone time anticommutation relations [12]. The key point for a probabilistic interpretation is the observation that the quark field operator in the definition of the hadronic matrix elements is a density in terms of the 'good' components. The quark SPD describes the emission of an (anti-)quark from the nucleon with a certain momentum fraction $x$ and its subsequent reabsorption with a different momentum fraction $x - \zeta$. In addition, there is a kinematical region where the nucleon emits (or absorbs) a quark-antiquark pair. In fixing the notations for momenta and their fractions one has to define the longitudinal direction (i.e. to choose a frame of reference). Two different popular choices are indicated in Fig. 2, characterized by the momentum fractions $(x, \zeta)$ and $(x, \xi)$, respectively. In this contribution we adhere throughout to the first choice and follow the notations of [7].

A connection between light-cone wave functions and form factors can be established by assuming that the nucleon states may be replaced by a superposition of partonic Fock states containing quanta of the 'good' light-cone components of (anti-)quark and gluon fields, where $\Psi^N_\beta(x, \mathbf{k}_\perp)$ is the light-cone momentum wave function of the $N$-parton Fock state. The index $\beta$ is a collective quantum number and labels the different ways of coupling the partons into the nucleon.
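As a small bookkeeping aid, the light-cone conventions above can be encoded directly. The numerical four-momenta in the sketch below are arbitrary illustrative values, not taken from any fit; only the definitions $a^\pm = (a^0 \pm a^3)/\sqrt{2}$, $x = k^+/p^+$, $\zeta = -\Delta^+/p^+$ and $t = \Delta^2$ are used.

```python
# A bookkeeping sketch of the light-cone kinematics defined above.
import numpy as np

def light_cone(a):
    """Return (a_plus, a_minus, a_perp) for a 4-vector a = (a0, a1, a2, a3)."""
    a0, a1, a2, a3 = a
    return (a0 + a3) / np.sqrt(2), (a0 - a3) / np.sqrt(2), np.array([a1, a2])

def skewed_kinematics(p, p_prime, k):
    """Compute x, zeta, t from nucleon momenta p, p' and active-quark momentum k."""
    delta = p_prime - p
    p_plus, _, _ = light_cone(p)
    k_plus, _, _ = light_cone(k)
    d_plus, _, _ = light_cone(delta)
    x = k_plus / p_plus
    zeta = -d_plus / p_plus
    t = delta[0]**2 - np.sum(delta[1:]**2)   # Minkowski square, metric (+,-,-,-)
    return x, zeta, t

p = np.array([2.0, 0.0, 0.0, 1.9])          # hypothetical initial nucleon momentum
p_prime = np.array([1.9, 0.1, 0.0, 1.75])   # hypothetical final nucleon momentum
k = np.array([0.8, 0.05, 0.0, 0.75])        # hypothetical active-quark momentum
print(skewed_kinematics(p, p_prime, k))
```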
Using the momentum decomposition of the quark field operators and the commutation relations for the creation and annihilation operators, one derives the well-known Drell-Yan formula [13] for the contribution of the $N$-parton Fock state to the Dirac form factor $F_1^{a(N)}$, where the index $j$ denotes the active parton and runs over all partons of type $a$ with charge $e_a$. The full form factor is obtained by summation over all Fock states. The shifted transverse momenta in the argument of the final-state wave function are to be taken as $\mathbf{k}'_{\perp i} = \mathbf{k}_{\perp i} - x_i \mathbf{\Delta}_\perp$ for the spectator partons, and $\mathbf{k}'_{\perp j} = \mathbf{k}_{\perp j} + (1 - x_j)\,\mathbf{\Delta}_\perp$ for the active quark. Note that the arguments of the wave functions are light-cone momentum fractions and transverse momenta of the partons with respect to their parent nucleon momenta, which are different for the initial and final nucleon. The most convenient way to identify those arguments consists in performing transverse boosts to reference frames where the parent hadrons have no transverse momentum components. The shift in the transverse momenta of the spectator partons is the result of the appropriate transverse boosts, whereas the shift for the active quark combines the effect of absorbing the virtual photon with that of the transverse boost. The contribution of the $N$-parton Fock state to the ordinary quark parton distributions is straightforwardly obtained based on the probabilistic interpretation.

Given the relations (9) and (10) and the close relationship between SPDs, forward distributions and form factors, it is suggestive that there must be a similar way to obtain the SPDs from light-cone wave functions. Indeed, following the same lines as in the derivation of the above formulas results in a generalization of the Drell-Yan formula valid in the partonic regimes, i.e. for $-1 + \zeta \le x \le 0$ and $\zeta \le x \le 1$, with the index $j$ running over all quarks of flavor $a$. The shifted arguments in the final-state wave function are to be taken with appropriately rescaled momentum fractions and shifted transverse momenta for the spectator partons and for the active quark. Equation (11) was obtained in [14] by identifying the dominant contributions in a diagrammatic approach. In [14] and [15] the connection between SPDs and light-cone wave functions was exploited phenomenologically by explicitly modeling the wave function. Different inclusive and exclusive quantities, like the electromagnetic form factors of the proton and neutron, unpolarized and polarized forward parton distributions, and cross sections of wide-angle real and virtual Compton scattering, were studied in a consistent way. A detailed presentation of the derivation of an overlap formula for SPDs in the context of light-cone quantization, as briefly indicated in this contribution, will be given elsewhere [16].
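The shifted-argument structure of the overlap (Drell-Yan) formula can be illustrated with a toy one-quark Gaussian wave function. The momentum fraction x and the width beta below are assumptions of the example, and no claim is made that this resembles the model wave functions of [14,15]; the sketch only demonstrates that the form factor is a wave-function overlap with a transverse shift proportional to (1 - x).

```python
# A toy numerical check of the overlap formula at zero skewedness, assuming a
# single Gaussian valence configuration at fixed momentum fraction x.
import numpy as np

def psi(kx, ky, beta=0.3):
    """Normalized toy transverse wave function: |psi|^2 integrates to 1."""
    return np.exp(-(kx**2 + ky**2) / (2 * beta**2)) / (np.sqrt(np.pi) * beta)

def F1_toy(delta_perp, x=0.3, beta=0.3, kmax=2.0, n=400):
    k = np.linspace(-kmax, kmax, n)
    kx, ky = np.meshgrid(k, k)
    dk = (k[1] - k[0])**2
    # Active-quark shift: k' = k + (1 - x) * Delta_perp (taken along one axis)
    shift = (1 - x) * delta_perp
    return np.sum(psi(kx + shift, ky, beta) * psi(kx, ky, beta)) * dk

for d in [0.0, 0.2, 0.5, 1.0]:
    t = -d**2   # t = -Delta_perp^2 at zero skewedness
    print(f"t = {t:+.2f} GeV^2 : F1 ~ {F1_toy(d):.4f}")
```

At zero momentum transfer the overlap reduces to the normalization integral, so F1_toy(0) returns 1, in line with the probabilistic interpretation discussed above.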
Transmission Protocols in Cognitive Radio Mesh Networks

ABSTRACT

This paper explains the CRN architecture of both licensed users (the licensed network) and unlicensed users (the unlicensed network). The unlicensed network is a collection of unlicensed users, with or without an unlicensed base station, all of which are outfitted with CR capabilities. An unlicensed network with a base station is known as an infrastructure-based CR network; the base station acts as a central point that gathers the observations and results of the spectrum investigation performed by every CR unlicensed user and decides how to reduce interference with licensed users. According to this decision, every CR unlicensed user reconfigures its transmission parameters. Unlicensed users without a base station form an infrastructure-less Cognitive Radio Wireless Mesh Network (CRWMN). In a CRWMN, the unlicensed users use a cooperative approach to exchange the gathered data among their devices in order to expand their knowledge of the whole network, and decide their actions accordingly. The licensed network includes licensed users and one or more base stations, and is not equipped with CR capabilities. Therefore, if an unlicensed network shares a licensed spectrum band with a licensed network, the unlicensed network must be capable of identifying the presence of a licensed user and of directing the unlicensed transmission to an alternative accessible band that will not interfere with the licensed transmission.

Figure 1 illustrates the opportunistic access to spectrum white spaces and the switching of frequency bands by an unlicensed user when a licensed user starts using the band. Figure 2 illustrates the CRN architecture of both the licensed network and the unlicensed network, with and without infrastructure.

Existing spectrum sharing and spectrum allocation approaches can be classified according to three criteria [21]: (a) the spectrum bands used by unlicensed users; (b) the network architecture; and (c) the access behavior of unlicensed users.

Utility of Spectrum Bands by Unlicensed User

According to the spectrum bands used by the unlicensed user, spectrum sharing approaches can be divided into open spectrum sharing and hierarchical spectrum access. In open spectrum sharing, the unlicensed users access the unlicensed spectrum and no user possesses a spectrum license; consequently, all users have the same rights to use the unlicensed spectrum. In hierarchical spectrum access [6], the unlicensed users share the licensed spectrum with licensed users. Licensed users do not need to adopt CR capabilities, because they have priority in using the spectrum bands. Consequently, when a licensed user reclaims a spectrum band, the unlicensed users currently using that band and adjacent spectrum bands must change their operating parameters to avoid interference with licensed users. The hierarchical spectrum access approach can be further divided into two classes, according to the restrictions placed on unlicensed users: 1) spectrum underlay and 2) spectrum overlay.

Network Architecture

According to the network architecture, spectrum sharing can be partitioned into two approaches: the centralized architecture approach and the distributed architecture approach. In the centralized approach, a central entity manages and coordinates the allocation of spectrum and the access of unlicensed users.
In the distributed approach, the users make their own decisions regarding spectrum access based on their local examination of the dynamic spectrum. The centralized approach is more expensive and not appropriate for mesh networks serving emergencies, military services, etc.; by comparison, the distributed approach is less expensive and can be used in infrastructure-less settings.

Access Behaviour of Unlicensed Users

According to the access behavior of unlicensed users, spectrum sharing can be classified as either cooperative or non-cooperative. In the cooperative approach, the unlicensed users often belong to the same service provider and coordinate among themselves to enhance the profit of the whole group. In the non-cooperative approach, the unlicensed users access the open spectrum to maximize the benefits of their own spectrum resources.

COGNITIVE RADIO MAC PROTOCOLS

In this section, we concentrate on the spectrum access problem, in which multiple CR users share the spectrum and must decide when and who gets access to the channel. Here, we discuss various MAC protocols that have been proposed for both infrastructure-based and decentralized CRNs. The MAC protocols in both classes can be random access, Time Division Medium Access (TDMA), or both. TDMA-based MAC protocols need network-wide synchronization and work by dividing time into slots for both the control channel and data transmission. Random access protocols, in contrast, do not require time synchronization and are based on the Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) principle, in which a secondary user examines the spectrum band to identify the existence of any other transmission and, if one is present, backs off for a random interval before transmitting, to reduce collisions due to simultaneous transmissions.

Centralized MAC Protocols for Cognitive Radio Mesh Networks

2.1.1. Random Access MAC Protocols

A CSMA-based random access MAC protocol [7] was proposed for infrastructure-based CRNs under the assumptions of a single transmitter-receiver pair and in-band signaling. The protocol encourages the coexistence of licensed and unlicensed users by adjusting their transmission frequency to keep the interference to the licensed users within a predefined limit. The licensed users coordinate with a primary-user base station and the unlicensed users with a secondary-user base station, each securing a direct single-hop association with its own base station. The licensed users follow the existing CSMA protocol, according to which a licensed user senses the channel for a time period (tp) before sending a Request-To-Send (RTS) packet to its base station, which may respond with a Clear-To-Send (CTS) signal if it is available for the information exchange. The unlicensed users use a much longer carrier-sensing period (ts, with ts >> tp), so that the licensed users have priority in accessing the shared spectrum. The unlicensed users' base station decides the transmission power and data rate for each exchange based on spectrum measurements from itself and from the unlicensed users. Unlicensed users are permitted to send only one data packet per transmission, to minimize or avoid interference and collisions with licensed users. Such random access protocols need an appropriate connection between the licensed and unlicensed networks; in general, the unlicensed users are unaware of any unsuccessful transmissions of licensed users.
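As a rough illustration of the priority-by-sensing-time idea just described, the following toy discrete-time sketch (slot counts and node names are invented for the example, and the channel model is trivially simplified) shows how a shorter sensing period lets a licensed user win a contention even when an unlicensed user started sensing earlier.

```python
# A minimal sketch of priority by sensing time: licensed users sense for TP
# slots and unlicensed users for TS >> TP, so a licensed user always wins a
# contention that starts within the same idle period.
TP, TS = 2, 10   # sensing periods (in slots), illustrative values

def contend(arrivals):
    """arrivals: list of (name, kind, t_arrive); return (t_win, winner).

    A node may transmit once the channel has stayed idle for its whole
    sensing period; the earliest finisher wins, and licensed nodes, with
    their shorter period, finish sooner.
    """
    finish = [(t + (TP if kind == "licensed" else TS), name)
              for name, kind, t in arrivals]
    return min(finish)

arrivals = [("PU-1", "licensed", 3), ("SU-1", "unlicensed", 0),
            ("SU-2", "unlicensed", 1)]
t_win, winner = contend(arrivals)
print(f"{winner} acquires the channel at slot {t_win}")   # PU-1 at slot 5
```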
A further issue is that the transmission power of the unlicensed users needs to be set in discrete levels in order to reliably protect the licensed users from interference while maximizing throughput.

Decentralized MAC Protocols for Cognitive Radio Mesh Networks

2.2.1. Random Access MAC Protocols

The authors of [9] proposed a Distributed Channel Assignment (DCA) MAC protocol which uses several transceivers, a dedicated out-of-band control channel for signaling, and spectrum pooling to reliably recognize the activity of the licensed network. Every node maintains a list of the channels currently used by its neighbor nodes and a list of free channels derived from the former and from the spectrum pool. During the RTS/CTS handshake, the sender and recipient compare their free-channel lists and agree on a common channel to use. The RTS and CTS messages likewise allow the neighboring unlicensed users to update their used-channel and free-channel records. The major disadvantage of the DCA protocol is the need for a separate control channel to support the RTS/CTS operation; furthermore, there is no licensed-user-aware adaptation of channel utilization.

In [10], the Single Radio Adaptive Channel (SRAC) algorithm was proposed, using an FDM model in which an unlicensed user transmits data packets over a wider spectrum band but receives acknowledgments over smaller spectrum bands, for efficient utilization of the spectrum. A CR node keeps a list of the receive bands of all its neighbor nodes. When a CR node senses that its current transmission channel is occupied by a licensed user, it sends a notification packet on the receive bands of its neighbor nodes and switches to the band acknowledged by all of them. In the meantime, the CR node keeps transmitting on the receive band of any neighboring node that has not yet acknowledged the notification packet. The downside is the signaling overhead associated with keeping the receive-band lists of all the neighbor nodes up to date. Likewise, control messages that are not sent on the receive bands of a node are not heard, leading to longer deaf periods.

In [11] and [12], the CREAM-MAC (Cognitive Radio Enabled Multichannel MAC) and SCA-MAC (Statistical Channel Allocation MAC) protocols are examples of MAC protocols that assume the presence of a global common control channel (GCCC) agreed upon by all the CR nodes in a neighborhood. Under this assumption, the operation of this class of MAC protocols mimics that of standard CSMA for infrastructure networks. While CREAM-MAC is designed around a four-way handshake process (RTS, CTS, CST, and CSR packets) on the GCCC, SCA-MAC uses only a two-way exchange of control frames (CRS and CCS) on the GCCC to help the sender and recipient tune their transceivers to a mutually agreed data channel.

In [13] and [14], the Opportunistic Cognitive MAC (OC-MAC) and the more recent Decentralized Non-Global MAC (DNG-MAC) protocols are examples of MAC protocols that do not require the existence of a global CCC for arbitrating spectrum access among neighboring unlicensed users. OC-MAC assumes that the CRN coexists with a WLAN and uses the IEEE 802.11 DCF (Distributed Coordination Function) method at the CR nodes to contend for data-channel reservation.
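The channel-agreement step of the DCA-style handshake described above can be captured in a few lines. The channel numbers and set representation in the sketch below are illustrative, not part of the protocol specification.

```python
# A sketch of the DCA channel-agreement step: during the RTS/CTS handshake the
# sender and receiver intersect their free-channel lists and pick a common
# data channel.
def agree_on_channel(sender_free, receiver_free):
    common = sorted(set(sender_free) & set(receiver_free))
    return common[0] if common else None   # None -> defer and retry later

sender_free = {1, 3, 5, 8}     # from the sender's spectrum pool / neighbor lists
receiver_free = {2, 3, 6, 8}
channel = agree_on_channel(sender_free, receiver_free)
print(f"agreed data channel: {channel}")   # -> 3
```

If the intersection is empty, the handshake fails and the pair must defer until their free-channel lists change.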
The DNG-MAC protocol uses TDMA to share the control channel among all the participating CR nodes; the CCC is one of the best available channels, chosen by the first CR node that initiates data communication. The CCC is partitioned into time slots of fixed length, each containing a listening period and a transceiving period. The rationale of DNG-MAC is that, since all CR nodes compete for a data channel to use, there is no wastage of resources in assigning one control-channel time slot to each CR node. Although this assumption simplifies the design of DNG-MAC and avoids the complex synchronization overhead normally seen with slot-based MAC protocols, it is unrealistic to assume that the data channels are available for the same time span as the control-channel time slots, and the time slot of every CR node must be recomputed upon the inclusion or exclusion of a CR node in the network. This also implies that the MAC protocol is not adaptable to changes in the network topology due to node mobility.

Time Division MAC Protocols

We examine C-MAC [15], which supports synchronized time division through the use of a Rendezvous Channel (RC) and a Backup Channel (BC). The RC is available for the longest time for use by unlicensed users across the network and is used for node coordination, licensed-user identification, and multichannel resource reservation. The BC is decided locally at every unlicensed user through out-of-band measurements and is used as an alternative spectrum band when a licensed user appears. In the C-MAC protocol, every spectrum band carries a recurring superframe structure. Every superframe is made up of a Beacon Period (BP) and a Data Transmission Period (DTP). Every BP is time-slotted so that the individual unlicensed users can transmit their beacons without interference. The RC is used to exchange the BP schedules of nodes, to prevent concurrent transmissions over all the spectrum bands. An unlicensed user declares its requirement for any new spectrum through the beacons, and likewise announces any spectrum change over the RC. Periodic tuning to the RC permits unlicensed users to re-synchronize and acquire the latest neighborhood topology data. The time-division nature of C-MAC enables the use of a non-overlapping Quiet Period (QP) in every spectrum band, through which licensed users can be distinguished from unlicensed users. The major disadvantages of C-MAC are that it requires the RC to be a dedicated band not used by any primary user, which is hard to ensure in decentralized networks, and that, because the beacon signals must carry the load and channel-utilization data in the BP of a superframe, the protocol does not scale to a large number of unlicensed users. It is also complex to maintain the non-overlapping nature of the BPs and the quiet periods without the existence of a central entity. A distributed time-division protocol [16] was proposed to avoid the use of the RC by providing in-band signaling through a dedicated control window, in addition to the beacon signal and information exchange periods.
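The superframe bookkeeping described for C-MAC above can be sketched as follows. The slot counts and node labels are illustrative assumptions; the sketch also makes the scalability problem visible, since the beacon period grows with every admitted node.

```python
# A sketch of a C-MAC-style superframe: a Beacon Period (one slot per node, so
# beacons never collide), followed by a Data Transmission Period (DTP) and a
# Quiet Period (QP) used for licensed-user detection.
from dataclasses import dataclass, field

@dataclass
class Superframe:
    nodes: list
    dtp_slots: int = 32
    qp_slots: int = 4
    beacon_schedule: dict = field(init=False)

    def __post_init__(self):
        # One non-overlapping beacon slot per node, assigned in join order.
        self.beacon_schedule = {n: i for i, n in enumerate(self.nodes)}

    def length(self):
        return len(self.nodes) + self.dtp_slots + self.qp_slots

sf = Superframe(nodes=["SU-1", "SU-2", "SU-3"])
print(sf.beacon_schedule, "superframe length:", sf.length(), "slots")
```

Because length() grows linearly with the number of nodes, the fraction of the superframe left for data shrinks as users join, which is the scaling limitation noted above.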
CONCLUSION

In this study, we have presented an extensive survey and analysis of cognitive radio network MAC protocols based on centralized and decentralized networks, together with existing solutions. From this point of view, the primary users need not be aware of the secondary users, and there should be no significant degradation in the quality of service for the licensed users. While the solutions proposed for centralized and decentralized CRNs are regularly interpreted as giving performance benchmarks for the respective models, the solutions proposed for distributed, cooperative, and decentralized mesh CRNs still face bottlenecks in real-time implementations. The majority of research done in the area of CRNs has focused on spectrum sensing, spectrum allocation, spectrum sharing, and MAC design.
Polarization dependence of second harmonic generation from plasmonic nanoprism arrays

The second-order nonlinear optical response of gold nanoprism arrays is investigated by means of second harmonic generation (SHG) experiments and simulations. The polarization dependence of the nonlinear response exhibits a 6-fold symmetry, attributed to the local field enhancement through the excitation of the surface plasmon resonances in the bow-tie nanoantennas forming the arrays. Experiments show that when the polarization of the input light excites the plasmonic resonances of the bow-tie nanoantennas, the SHG signal is enhanced, even though the linear absorption spectrum does not depend on polarization. The results are confirmed by electrodynamic simulations, which demonstrate that SHG is also determined by the local field distribution in the nanoarrays. Moreover, the maximum SHG intensity is observed at slightly off-resonance excitation, as implemented in the experiments, showing a close relation between the polarization dependence and the structure of the material, and additionally revealing the importance of non-normal electric field components, as present under focused-beam and oblique illumination.

Nanocomposite materials, containing dielectric, semiconductor or metallic nanoparticles with typical sizes in the 1-100 nm range, have attracted considerable interest for their optical properties and potential applications, such as optical signal processing [1] and chemical or biological sensing [2-4]. When the distribution of the nanoparticles is random, or they are structured in subwavelength arrays and diffraction effects are absent, the nanocomposite is perceived by light as a uniform effective medium, for which the terms 'metamaterial' and 'metasurface' were coined. The main feature of such metamaterials is the capability of tailoring their optical properties by manipulating their structure: the composition, dielectric contrast, shape, orientation and spatial distribution of the nanoparticle 'meta-atoms' are some of the parameters that can be varied to engineer and optimize the optical response. Metamaterials composed of metallic nanostructures have the additional advantage that, due to their plasmonic nature, the incident electromagnetic field can be concentrated into very small regions around or between the meta-atoms, producing greatly enhanced field amplitudes. This local field enhancement can be exploited to significantly increase the material's nonlinear optical properties, or for chemical and biological sensing through enhanced fluorescence or Raman signals [5-7]. The third-order nonlinear response of these materials, manifested through effects such as nonlinear absorption and refraction, has been extensively studied, mainly for disordered metallic systems either embedded in glass or in solution [8,9], but also in more ordered structures [10,11]. Although macroscopically centrosymmetric, these nanocomposites present interfaces that break the symmetry and therefore allow the observation of second-order nonlinear optical effects such as SHG. These are usually surface effects, where the contribution comes only from a very thin layer of material at either side of the interface. For nanoparticles, however, given their small dimensions, this can include most of the bulk, and again, local field enhancement can boost the SHG signal produced in these materials to significant levels [12].
While the effective second-order susceptibilities are comparable to those of nonlinear crystals [13], it is important to notice that, since the interaction volumes are usually very small, the overall SHG response cannot be expected to be comparable to that of phase-matched bulk noncentrosymmetric media, with interaction lengths in the millimetre to centimetre range. However, the metallic nonlinearity and the efficiency of SHG are very high [12] and, furthermore, since SHG is very sensitive to the symmetry of the sample structure, it can be seen as a tool for studying the nanoscopic morphology of materials [14,15]. In particular, this approach has been applied to a system consisting of elongated metallic nanoparticles embedded in glass and aligned along a preferential direction, for which a very close relation of the measured SHG signal with the light polarization, sample orientation, and structure was established [14,16].

Recently, the third-order nonlinear optical properties of metasurfaces consisting of ordered arrays of nanoprisms have been thoroughly studied [17-19]. Placing the nanoelements, in this case triangularly shaped nanoprisms, in an ordered array with a well-defined geometry yields a response that is the coherent addition of the individual responses of each nanoparticle, which is larger than that of an equivalent disordered nanoparticle assembly. On the other hand, in these materials the field is further enhanced by a nanoantenna effect in the local region between two nanoprisms in a bow-tie configuration, as shown in finite element method (FEM) simulations [19]. The polarization dependence of the nonlinear absorption was correlated with this enhancement and with the symmetry of the sample, with good agreement between the simulations and the experiment [19]. It is therefore an even more interesting question whether such a correlation exists for SHG, which is highly dependent on symmetry, and what its relationship is to localized surface plasmon resonances (LSPR) [15] and, more recently, to surface lattice resonances [20]. Elucidating the relation between the field enhancement obtained in these metasurfaces and their nonlinear response can help improve the design of chemical and biological sensors based on them.

In this article we present an experimental and numerical study of SHG from a honeycomb array of Au nanoprisms and its relation to the polarization of the fundamental light. The observed dependence is correlated with the symmetry properties of the metasurface, which allows accessing a wide variety of components of the nonlinear susceptibility tensor, and reveals the importance of the focused illumination.

Experimental

Metallic nanoprism arrays fabrication. Two-dimensional arrays of Au nanoprisms were fabricated using a nanosphere lithography technique. The manufacturing process is described in detail in [21] and is only briefly outlined here. Polystyrene nanospheres of 522 nm diameter are first self-assembled on the surface of a clean silica glass substrate, forming a colloidal layer. Afterwards, Au is deposited on the substrate by thermal evaporation. Then, the nanospheres are removed using a solvent. Finally, a silica layer is deposited by magnetron sputtering on the resulting Au nanoprisms to prevent possible oxidation and physical damage of the nanoarray. The top protective layer is estimated to be 146 nm thick. The resulting nanostructures, observed using a field emission scanning electron microscope (FE-SEM, model Sigma HD, Zeiss), are shown in Fig. 1a.
The geometric parameters of the nanoarray are depicted in Fig. 1b: α0 = 522 ± 5 nm is the lattice parameter (equal to the polystyrene nanosphere diameter), d = 290 ± 9 nm is the distance between the nanoprisms, L = 155 ± 3 nm is the side length of each nanoprism, and h is their height. The latter, measured by atomic force microscopy (AFM, model NT-MDT Nova Solver-PRO in non-contact mode), is found to be h = 34 ± 2 nm. The measured absorption spectrum of the sample, taken with unpolarized incident light, is shown in Fig. 1c. The spectrum shows a well-defined absorption band centered around 1030 nm, which corresponds to the dipolar localized surface plasmon resonance (LSPR) of the nanoprisms. The other absorption features at shorter wavelengths correspond to the quadrupolar and higher-order multipolar LSPRs, as shown by FEM simulations previously performed on similar arrays [19]. Those studies showed experimentally that the absorption spectrum of the array taken with linearly polarized light does not depend on the polarization angle θ measured with respect to the array structure (shown in Fig. 1b), a fact agreeing with symmetry considerations [22] and also corroborated by the simulations [19].

SHG experiments. Second harmonic generation experiments were conducted in transmission mode, using a focused beam to excite the sample at normal incidence. The schematic experimental setup is shown in Fig. 2. The fundamental light was generated by a Ti:Sapphire oscillator (Coherent Mira 900) pumped at 532 nm. The oscillator produces ultrashort 90 fs linearly polarized pulses at a 76 MHz repetition rate. The pulses had a spectral width of 12 nm and were centered at a 810 nm wavelength. The polarization azimuthal angle θ of the excitation (fundamental) beam was rotated using a λ/2 wave-plate, in order to study the dependence of the SHG signal on the polarization direction of the fundamental light. The beam, with a 3 mm diameter, was focused onto the sample by means of an L1 = 50 mm focal length lens, down to a 17 μm diameter on the sample surface, resulting in incident peak irradiances of up to 6 GW/cm². Given this spot size, only about 30 unit cells of the array are illuminated during the SHG measurements, so long-range spatial variations of the structure are not important. The SHG signal generated in the forward direction was collected using a second lens of L2 = 30 mm focal length to image the point spread function at the monochromator entrance. Finally, the signal was detected using a photomultiplier tube (PMT) connected to a current/voltage pre-amplifier circuit and a digital oscilloscope. A polarizer cube (PBS) is located between the λ/2 wave-plate and the first lens to measure the dependence of the SHG on the incident power.

Numerical modelling. Numerical analysis of the interaction of light with the nanoprism array, in both the linear and nonlinear regimes, was performed in the frequency domain using a finite element method (COMSOL Multiphysics software). The nanostructure was illuminated by a plane electromagnetic wave at various angles with respect to the surface normal (the z axis in Fig. 2), also varying the azimuthal polarization angle θ. The infinite honeycomb structure of the arrays was modelled by calculating the full vectorial structure of the fields in the corresponding rhomboidal unit cell containing two nanoprisms, using Bloch boundary conditions on its sides.
Setting the proper field phase delays on the pairs of opposite boundaries allowed modelling various incidence and azimuthal angles of the incident wave. In the case of the generated second harmonic spectral component, the phase delay was set to twice that of the fundamental wave. To avoid field singularities at the sharp edges, which are particularly undesirable in modelling nonlinear optical effects, the corners and edges of the nanoprisms were rounded with a radius of 10 nm. The top and bottom boundaries of the simulation domain were terminated by perfectly matched layers to ensure the absence of back reflections. The refractive index of silica was considered constant in the studied frequency range (n = 1.46), while for gold it was taken from [23].

The SHG from the gold nanoprism array was numerically simulated within an undepleted pump approximation (weak nonlinearity) using a two-step model [24,25]. In the first step, the interaction of the fundamental incident wave with the nanostructures was modeled to determine the local electromagnetic fields at the nanoprism surface. In the second step, the local distribution of the obtained fundamental field was used to calculate the nonlinear polarizability of the nanoprisms [24,25]. Gold is a centrosymmetric material, so the second-order nonlinear response, which requires symmetry breaking, occurs at the nanostructure boundaries and is related to the anharmonic dynamics of the free electron gas in the field gradients and to the screening electron concentration gradients at the nanostructure surface. In general, such a process can be described by the hydrodynamic model [6], which in the first approximation leads to the treatment of the nonlinear response in the framework of a surface nonlinearity [26], with the leading term of the second harmonic polarization usually considered to be normal to the surface [27]:

P⊥(2ω) = χ⊥⊥⊥ E⊥(ω) E⊥(ω),   (1)

where χ⊥⊥⊥ is the corresponding surface nonlinear susceptibility component and E⊥(ω) is the fundamental electric field component perpendicular to the nanoprism surface. In order to model the surface metallic nonlinearity, a very thin gold surface layer at the boundary of the nanoprisms was considered and the generated nonlinear polarization was implemented in it using Eq. (1). Then, in the second simulation step, this nonlinear polarization acted as the source of the generated second harmonic fields.

The polarization dependence of the SHG was measured, with the polarization azimuthal angle defined in relation to the laboratory reference frame as shown in Fig. 1b, i.e. the 0° angle corresponds to the polarization direction along the x-axis. Figure 3b shows a polar chart of the measured polarization dependence of the SHG signal, taken at an average fundamental power of 88 mW. A clear six-fold symmetry is seen, with SHG maxima every 60° starting from the 30° angle, coinciding with the geometry of the sample. The average ratio between subsequent minima and maxima was found to be approximately 1.21. Another test performed was to measure the dependence of the signal on the input irradiance. It was done at two different polarization angles, shown by the red and black arrows in Fig. 3b, corresponding to an SHG maximum and an SHG minimum, respectively. Figure 3c shows the former case, corresponding to excitation polarized along the bow-tie nanoprism structure. With the data displayed on a log-log scale, it fits very well to a linear relationship with slope 2 for average powers lower than 100 mW (corresponding to Epulse = 1.3 nJ).
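The slope-2 criterion used here is easy to reproduce numerically. In the sketch below the input and output powers are synthetic stand-ins for the measured data, generated with an assumed quadratic law plus a little noise; only the log-log fitting procedure itself reflects the analysis described above.

```python
# A quick check of the quadratic power dependence: on a log-log scale an SHG
# signal P_2w ~ P_w^2 fits a straight line of slope 2.
import numpy as np

P_in = np.array([10, 20, 40, 60, 80, 100])   # average input power (mW), synthetic
noise = 1 + 0.03 * np.random.default_rng(2).standard_normal(P_in.size)
P_shg = 3e-5 * P_in**2 * noise               # synthetic SHG signal, arbitrary units

slope, intercept = np.polyfit(np.log10(P_in), np.log10(P_shg), 1)
print(f"log-log slope = {slope:.2f}  (2.0 expected for SHG)")
```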
Such quadratic scaling indicates that we indeed observe a second-order nonlinear process, as expected for SHG. At average powers higher than 100 mW, a deviation from this behavior can be seen. This could indicate either a 'saturation' of the signal, due to a high density of surface electrons that depletes the harmonic plasmon oscillation, according to Sugita et al. [15], or the onset of damage in the sample [14], but no conclusive evidence of either could be established. Figure 3d shows the case of the other polarization direction, producing a minimum in the SHG signal. For this angle, the data at low powers has a slope close to 1, indicating the presence of linearly scattered light in the detected signal. At powers higher than 100 mW the signal is well fitted by a line with slope 2, indicating a clear SHG signal.

In the numerical model, the nanoprism geometrical parameters determined from SEM and AFM, given above, were further optimized to obtain the best possible match to the experimental absorption spectrum. The simulation was performed considering plane-wave illumination at normal incidence with polarization aligned along the nanoprism symmetry axis, as shown in the inset of Fig. 4. The time-averaged power transmission coefficient Tp is obtained via the S-parameter, Tp = |S21|². Using this value it is possible to calculate the absorbance A through the relation Tp = 10^(-A); the simulated spectrum is presented in Fig. 4. The fitting parameters used for comparing experimental and theoretical values were L = 170 nm and d = 300 nm, keeping the height of the nanoprisms fixed at 34 nm. The spectrum shows characteristic absorbance peaks corresponding to the dipolar and quadrupolar resonances of the nanoprisms at 980 and 620 nm wavelengths, respectively. The agreement between the experimentally measured and calculated peak positions is quite good, considering that only a selected set of geometrical parameters was used to reproduce both resonances in the simulations, and these parameters are very close to those experimentally observed, without drastically changing the original geometry. In addition, the calculations exhibit narrower resonance widths than the experimentally measured ones. This is probably due to the variability of the nanoprism geometrical parameters across the actual fabricated array, such as the gaps separating prisms and the apex shapes. In order to study the effect of the polarization angle of the incident light on the absorbance spectrum, a set of simulations was carried out for various azimuthal angles. The results show that the shape of the absorption spectrum is independent of the polarization, consistent with previously published results [19].

The calculated peak occurs at λ = 980 nm, while the experimental peak is at the slightly longer wavelength of 1030 nm, at the same time being considerably broader (Fig. 1c). In order to take this discrepancy into account in the SHG simulations, and considering that the experiments were not conducted exactly at resonance but rather above it, the wavelength at which the simulations were conducted was corrected. This was done by keeping in the simulations the same absorbance ratio A(λpeak)/A(λlaser) measured experimentally, which was A(1030 nm)/A(810 nm) = 1.46. The same condition for the simulated absorbance was fulfilled by A(λsim) = A(980 nm)/1.46; thus the simulations were performed at λsim = 960 nm.
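This wavelength-matching step can be made concrete with a short numerical sketch. The Lorentzian line below is a synthetic stand-in for the simulated FEM spectrum (its width is an assumption of the example), so the selected wavelength only approximates the 960 nm quoted above.

```python
# A sketch of the absorbance bookkeeping: A = -log10(|S21|^2), and the
# simulation wavelength is chosen on the short-wavelength side of the peak so
# that A(lam_peak)/A(lam_sim) equals the measured ratio 1.46.
import numpy as np

lam = np.linspace(700, 1200, 2001)             # wavelength grid (nm)
A_sim = 0.5 / (1 + ((lam - 980) / 60)**2)      # toy dipolar LSPR line at 980 nm

target = A_sim.max() / 1.46                    # absorbance ratio condition
short_side = lam < 980                         # experiment was blue of resonance
lam_sim = lam[short_side][np.argmin(np.abs(A_sim[short_side] - target))]
print(f"simulation wavelength ~ {lam_sim:.0f} nm")
```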
SHG simulations were performed for normal incidence onto the sample at different polarization angles. To analyse the SHG response of the structure as a function of the polarization angle of the incident field, the forward-radiated SHG output power was calculated using the surface integral of the time-averaged power outflow over the surface below the structure. Figure 5 shows the results, with an SHG signal at normal incidence that does not depend on the polarization angle θ (red line). However, when the SHG signal is calculated for an incidence angle different from 0°, a 6-fold polarization angular response is observed, which coincides with the 6-fold symmetry observed in the experimental results (Fig. 3b). Figure 5 shows the results for angles of incidence of 5° and 10°, which indicate that the depth of modulation increases with the incidence angle. One important point to notice here is that the simulations assume plane-wave illumination, while the experiments used a weakly focused beam, with a distribution of incidence angles. Thus, the facts that no polarization-direction dependence is theoretically seen at normal incidence and that the experiments coincide well with the simulation at a 5° incidence angle suggest that the observed 6-fold variation is due to the presence of non-normal electric field components in the focused beam and to the complex polarization structure produced in the focal plane. For the beam diameter and focusing conditions employed, the light fills a cone with a 3.4° convergence angle.

The simulated electric field maps corresponding to three different polarization angles of the fundamental light incident obliquely at 5° with respect to the surface normal are presented in Fig. 6. The peak SHG signals at the 30° and 90° polarization angles observed in the experiment (Fig. 3b) are consistent with the electric field enhancement observed in the simulated field maps in Fig. 6(b,c). In fact, each time the fundamental light polarization is aligned along the symmetry axis of any prism pair in the array, a field enhancement is produced. By contrast, illumination at any other polarization angle produces a smaller field enhancement, as exemplified in Fig. 6(a) for θ = 0°. The fact that the experimental angular response is somewhat elongated along the 120°-300° direction can be due either to experimental error or to the presence of a smaller component of the response with a different symmetry dependence, such as a quadrupolar contribution, but further investigation would be required to clarify this point. To analyse the nonlinear response at an incidence angle of 5°, the SHG electric field distribution is plotted for two different polarization angles, 0° and 30° (Fig. 7). As can be seen from the figure, the SHG signal is stronger at a polarization angle of 30°, confirming that the high confinement of the fundamental field produced every 60 degrees results in an enhancement of the SHG signal.

Given that we observe a 6-fold modulation in the polarization dependence of the SHG signal with a focused beam even at normal incidence, we decided to explore its dependence on the incidence angle β. The resulting polarization dependence can be compared with that of Fig. 3(b): the data shows again the same 6-fold modulation, with a deeper modulation contrast, in good agreement with the simulation results shown in Fig. 5.
We then kept the polarization angle θ fixed at a value for which a maximum is observed, marked as b) in Fig. 8(a), and varied the incidence angle β for the same input average power. Figure 8(b) shows that the signal increases with β, reaches a maximum around 10°, and then starts decreasing. A simulation conducted for the same conditions is also shown in Fig. 8(b), and qualitatively coincides with the observed behaviour, showing a well-defined maximum, albeit at a larger β value of 24°. This discrepancy again probably stems from the fact that the simulations consider a plane-wave input beam, while a focused beam is employed in the experiments. Nevertheless, these results at oblique incidence are consistent with the fact that we observe a modulation of the signal with the polarization angle when using a focused beam, even at normal incidence.

It should be noted that, in addition to the LSPR-related resonant enhancement, nonresonant excitation also leads to some field enhancement at the sharp tips of the nanoprisms. Both effects contribute to the local field distribution that ultimately determines the SHG efficiency. Because of this, we conducted simulations of the SHG process for different wavelengths across the dipolar and quadrupolar resonances, for oblique incidence at 5°. Figure 9(a) shows the SH signals calculated for polarizations at 0°, 30°, 60°, and 90° at wavelengths around the dipolar LSPR, which display in most cases the expected 6-fold symmetry, and a lack of modulation for wavelengths away from resonance, even where non-vanishing signals remain. Figure 9(b) plots the SH power maxima-to-minima modulation depth, (PSH(30°) - PSH(0°))/PSH(0°), calculated from the simulation results, demonstrating the importance of the LSPR for the polarization modulation of the SHG signal. It is interesting to notice that the maximum modulation depth seems to occur at wavelengths slightly shorter than the actual LSPR peak, situated at 980 nm in the simulation. Exploration of wavelengths around the quadrupolar resonance at 620 nm showed a stronger signal with no appreciable variation with polarization, even under the same oblique incidence conditions.

Conclusions

In conclusion, we have studied SHG in a honeycomb gold nanoprism array and its dependence on the input polarization. For such a thin metasurface, a reasonably strong signal having a well-defined 6-fold symmetry was observed, which is closely related to the microscopic structure of the material. Electromagnetic simulations showed excellent agreement with the experimental observations, revealing the corresponding enhancement of the fundamental and related SHG fields for polarization angles aligned with the 6-fold symmetry of the nanoprism array structure. Furthermore, they established that the non-normal components present in the angular distribution of the incident light are of key importance for the observation of the effect.
Dynamical Spectral Function From Numerical Renormalization Group: A Full Excitation Approach

For a given quantum impurity model, Wilson's numerical renormalization group (NRG) naturally defines an NRG Hamiltonian whose exact eigenstates and eigenenergies are obtainable. We give exact expressions for the free energy, static, as well as dynamical quantities of the NRG Hamiltonian. The dynamical spectral function from this approach contains the full excitations, including intra- and inter-shell excitations. For the spin-boson model, we compare the spectral function obtained from the present method and the full density matrix (FDM) method, showing that while both guarantee the rigorous sum rule, the full excitation approach avoids the causality problem of the FDM method.

I. INTRODUCTION

Wilson's numerical renormalization group (NRG) method [1,2] is powerful for studying quantum impurity models. Since its invention, NRG has witnessed a series of developments, including z-averaging to mitigate the discretization error [3], improvements in the logarithmic discretization [4,5], extension to bosonic systems [6], the full density matrix algorithm [7], extension to time dependence [8], and merging with matrix product states [9-11] and tensor networks [12], etc. Today, both the sophistication and the applicability of NRG have advanced significantly compared to Wilson's original work.

The calculation of the spectral function of a quantum impurity model from the NRG-produced eigenstates and eigenenergies is an important problem. The patching method [13] combines the spectral functions from successively lower energy shells to produce a full spectral function, which does not guarantee the exact sum rule. Using the reduced density matrix of the full system to combine the spectral functions of different energy shells, Hofstetter [14] developed the density-matrix NRG, which can take into account the influence of the low-energy states on the high-frequency spectral function. In the full density matrix (FDM) NRG method [7], the Lehmann representation of the spectral function is treated with a complete set of eigenstates and simplified by the NRG approximation. FDM NRG fulfils the sum rule rigorously and accurately describes the spectral features at energies below the temperature. FDM is now the most widely used method for producing spectral functions of quantum impurity models within NRG.

Although the FDM method is highly accurate and efficient in general, in this paper we illustrate that FDM has a causality problem which, in certain situations, leads to negative spectral functions. This problem arises from the approximate treatment of the unitary time evolution of operators in the Green's function by the NRG approximation used in FDM. With this approximation, the excitations between different NRG shells (inter-shell excitations) are approximated by the kept-discarded excitations within each NRG shell (intra-shell excitations). We demonstrate this problem using the spin-boson model (SBM) in the parameter regime of strong coupling and finite bias.

To circumvent this problem of FDM NRG, we first point out that the NRG algorithm naturally defines an effective projected Hamiltonian $\tilde{H}_N$, dubbed the NRG Hamiltonian. The complete basis proposed by Anders et al. [8] is the set of exact eigenstates of $\tilde{H}_N$, with their eigenenergies being generated by the NRG calculation.
Then, we propose an algorithm to calculate the exact free energy, static, as well as dynamical quantities of $\tilde{H}_N$, which constitute well-controlled approximations to those of the original impurity model. The obtained spectral function contains both the intra- and the inter-shell excitations. It satisfies the rigorous sum rule and positivity. Hereafter this new algorithm is called the full excitation (FE) NRG method.

II. FE FORMALISM

In this section, we derive the formalism of the FE method for general quantum impurity models. The Hamiltonian of a generic quantum impurity model reads $H = H_{imp} + H_{bath} + H_c$. A small quantum system described by $H_{imp}$ is coupled through $H_c$ to a continuous non-interacting reservoir described by $H_{bath}$. Here, $c_i^\dagger$ creates a particle (fermion or boson) with energy $\epsilon_i$; indices such as spin and orbital are included in $i$. The impurity is coupled directly to the local bath degrees of freedom.

The NRG algorithm consists of three steps [1,2]. (i) The continuous bath degrees of freedom are discretized into bath sites with exponentially descending energies $\omega_n \sim \Lambda^{-n}$, where $\Lambda > 1$ is the logarithmic discretization parameter. This step introduces the logarithmic discretization error, which diminishes as $\Lambda$ decreases to unity. (ii) The discretized Hamiltonian is canonically transformed into a semi-infinite chain $H_N(\Lambda)$ (truncated to length $N$ and neglecting possible indices of spin, orbital, etc.). Here, $A$ is an impurity operator, and both the on-site energies $\epsilon_n$ and the hoppings $t_n$ decay as $\Lambda^{-n/2}$ for a fermionic bath ($\Lambda^{-n}$ for a bosonic bath). (iii) The chain Hamiltonian is diagonalized iteratively. Starting from the shortest chain $H_{n_0}$, for which all eigenstates can be kept, we add one bath site and diagonalize the enlarged system. This is done iteratively until all chain sites have been added and diagonalized. To handle the exponential growth of the Hilbert space in this process, after diagonalizing $H_n$ only the $M$ eigenstates with lowest eigenenergies are kept. The matrix of $H_{n+1}$ is built in the product space of these kept states and the bare states of the newly added site. The truncation error introduced in this step diminishes in the limit $M = \infty$.

The NRG calculation generates many eigenstates $|s\rangle_n$ and eigenenergies $E_{ns}$ [15] ($n \in [n_0, N]$, $s \in [1, D_n]$). Here $D_n$ is the number of eigenstates produced by diagonalizing $H_n$. For each $n$, the lowest $M$ states are kept and the higher $D_n - M$ ones are discarded; they are denoted $|s\rangle_n^K$ and $|s\rangle_n^D$, respectively. For the last shell $n = N$, all states are regarded as discarded.

Let us analyse the structure of the eigenspectrum generated by NRG. Suppose $H_{n+1} = H_n + \Delta H_n$ ($n = 0, 1, \ldots, N-1$). $\Delta H_n$ contains the on-site energy of the newly added bath site $n+1$ and the hopping between sites $n+1$ and $n$. If $\Delta H_n$ is not considered, adding bath site $n+1$ increases the degeneracy of each eigenstate of $H_n$ by a factor of $d$. If $\Delta H_n$ is fully added, all these degeneracies are lifted. In an NRG calculation, $\Delta H_n$ is added only partly: the matrix of $\Delta H_n$ is constructed in the space of the kept states of $H_n$ multiplied by the bare states of bath site $n+1$. Therefore, the degeneracies in the extended spectrum of $\tilde{H}_n$ are only partly lifted. The resulting Hamiltonian matrix corresponds to the Hamiltonian

$\tilde{H}_{n+1} = \tilde{H}_n + P_n \Delta H_n P_n$   (2)

instead of to the theoretical $H_{n+1} = H_n + \Delta H_n$. Here $P_n$ is the projection operator onto the kept space of $\tilde{H}_n$.
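To make the projected recursion of Eq. (2) concrete, the following toy sketch iteratively diagonalizes a spinless tight-binding Wilson chain, keeping only the lowest M states at each step. The single "impurity" level, the omission of fermionic (Jordan-Wigner) signs and of the usual energy rescaling, and all numerical values are simplifying assumptions for illustration only.

```python
# Toy iterative diagonalization with truncation, mimicking Eq. (2):
# at each step, enlarge by one site, diagonalize, and project onto the
# lowest-M eigenstates (the action of P_n ... P_n).
import numpy as np

LAMBDA, M, N = 2.0, 16, 20
c = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilation operator of one site (d = 2)

H = np.zeros((2, 2))                      # toy "impurity": one level at zero energy
c_edge = c.copy()                         # annihilator at the current chain end

for n in range(N):
    t_n = LAMBDA ** (-n / 2)              # hopping decaying as on the Wilson chain
    dim = H.shape[0]
    # Enlarged Hamiltonian: H (x) 1 + t_n (c_edge^dag (x) c + h.c.)
    H_new = (np.kron(H, np.eye(2))
             + t_n * (np.kron(c_edge.T, c) + np.kron(c_edge, c.T)))
    E, U = np.linalg.eigh(H_new)
    keep = min(M, len(E))
    P = U[:, :keep]                       # projector onto the kept states
    H = np.diag(E[:keep])                 # kept block in its own eigenbasis
    c_edge = P.T @ np.kron(np.eye(dim), c) @ P   # new edge operator, projected

print("lowest kept energies:", np.round(np.diag(H)[:5], 4))
```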
The full spectrum of $\tilde H_{n+1}$ (rectangular boxes in Fig.1 for $n = 1$ and 2) is composed of those low energy eigenstates obtained from lifting the degeneracies of $\tilde H_n$ by $P_n \Delta H_n P_n$ (red horizontal levels in Fig.1), and those high energy eigenstates generated by multiplying new bath states onto previous eigenstates while maintaining degeneracies (green and blue horizontal levels in Fig.1). A schematic picture is shown in Fig.1 for illustration of the above process, using the number of kept states $M = 4$, Hilbert space dimension of the bath site $d = 4$, and chain length $N = 3$. A detailed explanation is given in the figure caption.
Grouping all the discarded states (extended to include the degeneracies) generated in the calculation for $H_N$ (e.g., the rectangular box of $H_3$ in Fig.1), we obtain not only a complete basis set for $H_N$, 8 but also the exact eigenstates of the NRG Hamiltonian $\tilde H_N$ defined in Eq.(3),
$\tilde H_N = \sum_{n=n_0}^{N} \sum_{s,e} E_{ns}^{D}\, |se\rangle_n^D\, {}_n^D\langle se| .$  (3)
Here, $|se\rangle_n^D = |e\rangle_n \otimes |s\rangle_n^D = |\sigma_N \sigma_{N-1} ... \sigma_{n+1}\rangle \otimes |s\rangle_n^D$ (1), and the eigenvalue relation reads
$\tilde H_N |se\rangle_n^D = E_{ns}^{D} |se\rangle_n^D .$  (4)
$\tilde H_N$ approximates $H_N$ with the control parameter $M$, and $H_N$ approximates $H$ with the control parameter $\Lambda$. Thanks to the exponential separation of energy scales due to the logarithmic discretization and the truncation scheme of NRG, $\tilde H_N$ has very accurate low energy states. Note that the extended kept states $|se\rangle_n^K$ ($n = 0, 1, ..., N-1$) are not exact eigenstates of $\tilde H_N$.
We now consider how to produce the exact physical quantities of $\tilde H_N$ from the obtained eigenstates $\{|s\rangle_n\}$ and eigenenergies $\{E_{ns}\}$. The partition function $Z$ at temperature $T$ reads $Z = \sum_{n=n_0}^{N} \sum_{s} d^{N-n} e^{-\beta E_{ns}^{D}}$. The exact free energy of $\tilde H_N$ is $F = -T \ln Z$. The statistical average of an impurity operator $\hat O$ reads $\langle \hat O \rangle = Z^{-1} \sum_{n=n_0}^{N} \sum_{s} d^{N-n} e^{-\beta E_{ns}^{D}}\, {}_n^D\langle s|\hat O|s\rangle_n^D$. The above expressions were already employed in the FDM method, which treats the density matrix exactly. 7,12
FE and FDM differ in their formalisms for dynamical quantities. Consider, for example, the time correlation function $\langle A(t)B \rangle = {\rm Tr}[\rho A(t) B]$ of two impurity operators $A$ and $B$. The density operator reads $\rho = e^{-\beta H}/Z$. Once the exact relation Eq.(4) is used to evaluate the matrix elements of $A(t)$, excitations of the form $E_{ns}^{D} - E_{n's'}^{D}$ will be generated, which include both inter-shell ($n \neq n'$) and intra-shell ($n = n'$) excitations.
Before we present the FE formalism, we first give a brief analysis of the FDM method. 7 To obtain the formalism of FDM, we first reduce Eq.(7) into the single-shell form with the help of the exact relations listed in the Appendix. The exact relation $\rho\, |se\rangle_n^D = (e^{-\beta E_{ns}^{D}}/Z)\, |se\rangle_n^D$ is used for the density operator $\rho$. To calculate the matrix elements of $e^{iHt} A e^{-iHt}$ in the second and third terms, the NRG approximation $H_N |se\rangle_n^K \approx E_{ns}^{K} |se\rangle_n^K$ is used on one side of $A$ and the exact Eq.(4) is used on the other side. The $e^{\pm iHt}$ factors on the two sides of $A$ are hence not treated on an equal footing. The obtained expression contains the matrix elements $\rho^{(n)}_{KK}$ and $O^{KK}$, where $O^{KK}$ is not diagonal. For fixed $n$, $s$ and $s'$, the prefactor of $\exp[i(E_{ns}^{K} - E_{ns'}^{D})t]$ is not guaranteed to be positive unless the same NRG approximation is used for $\rho^{(n)}_{KK}$.
In summary, in the FDM formalism, the density operator $\rho$ is evaluated exactly but the matrix elements of $A(t)$ are treated with the NRG approximation. The inter-shell excitations are approximately replaced by the kept-discarded intra-shell excitations. As a result, albeit the spectral function fulfils the rigorous sum rule, the positiveness of the diagonal spectral function is lost. In contrast, in deriving the FE formalism, we start from Eq.(7) and use Eq.(4) only. The obtained expression for the spectral function is exact for $\tilde H_N$. Naturally, it has no causality problem.
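The claim that an exact treatment of both the density operator and the time evolution yields manifestly non-negative diagonal spectral weights can be checked directly on a small, exactly diagonalizable system. The sketch below uses a generic random Hermitian toy Hamiltonian, not the SBM; all names and sizes are illustrative. It builds the Lehmann poles of a diagonal correlator and verifies positivity together with the corresponding sum rule.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 6
H = rng.normal(size=(D, D)); H = (H + H.T) / 2      # toy Hamiltonian
A = rng.normal(size=(D, D))                          # toy 'impurity' operator

beta = 5.0
E, U = np.linalg.eigh(H)
rho = np.exp(-beta * (E - E[0])); rho /= rho.sum()   # Boltzmann weights
Amat = U.T @ A @ U                                   # elements <s|A|s'>

# Lehmann poles of the anticommutator-type spectral function of A:
# weight w_{ss'} = (rho_s + rho_s') |<s'|A|s>|^2 at energy E_s' - E_s
w, eps_pole = [], []
for s in range(D):
    for sp in range(D):
        w.append((rho[s] + rho[sp]) * Amat[sp, s] ** 2)
        eps_pole.append(E[sp] - E[s])
w = np.array(w)

print("all weights >= 0 :", np.all(w >= 0))          # positivity in exact basis
print("sum of weights   :", w.sum())
print("<{A, A^dag}>     :", np.trace(np.diag(rho) @ (Amat @ Amat.T
                                                     + Amat.T @ Amat)))
```

The last two printed numbers agree, which is the sum rule; in the exact eigenbasis every weight is a product of non-negative factors, so no negative spectral weight can appear.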
Below, we focus on the retarded Green's function (GF) of the impurity operators $A$ and $B$, $G_{A,B}(\omega)$. Inserting the completeness relation of the complete basis, and using Eq.(4) to compute the matrix elements of both $\rho$ and $e^{\pm iHt}$, we obtain the FE formula for the GF. Details of the derivation are summarized in the Appendix. We obtain the GF as a sum over poles [Eq.(10)], with the weight $w$ given by Eq.(11), in which the matrix elements $O^{(m)}$ are defined in Eq.(12). The transition matrix $V$ is calculated recursively through Eq.(13), with the initial value given by Eq.(14). Here, the matrices $U$ are the unitary transformations generated in the iterative diagonalization of the chain.
III. RESULTS AND COMPARISON
Below, we use the spin-boson model (SBM) 17 to demonstrate the FE algorithm and to make a comparison with the patching method and the FDM method. SBM describes a two-level quantum system coupled to a dissipative bosonic bath. It has been widely studied in many contexts ranging from superconducting qubits 18 to photosynthetic biosystems. 19 NRG has played an important role in the understanding of this model. 6,[20][21][22] The Hamiltonian reads
$H = \frac{\epsilon}{2}\sigma_z + \frac{\Delta}{2}\sigma_x + \sum_i \omega_i b_i^{\dagger} b_i + \frac{\sigma_z}{2} \sum_i \lambda_i (b_i + b_i^{\dagger}) .$
The two-level system is described by the Pauli matrices, and the influence of the bath is encoded in the spectral function $J(\omega) = \pi \sum_i \lambda_i^2 \delta(\omega - \omega_i)$, for which we use $J(\omega) = 2\pi\alpha\, \omega^{s} \omega_c^{1-s}$ ($0 < \omega < \omega_c$, $\omega_c = 1.0$) with coupling strength $\alpha$ and exponent $s$. As usual, we truncate the Hilbert space of each boson site to $N_b$ states in the occupation basis. 6 The NRG Hamiltonian then becomes $\tilde H_N(\Lambda, M, N_b)$, and the FE method produces the exact quantities for it. In this paper, we study $C(\omega)$, the Fourier transform of the anti-symmetric dynamical correlation function.
In Fig.2(a), we plot the regular part of $C(\omega)$ 23 obtained from the patching method, the FDM method, and FE for a sub-Ohmic bath $s = 0.3$ at $\alpha > \alpha_c$, $\epsilon > 0$, and a low temperature. In this paper, we use the standard log-Gaussian broadening for the spectral function at all frequencies, which differs from the fermionic case, where a Lorentzian broadening is used instead for $\omega < T$. 13 The broadening is controlled by the width $B$ of the log-Gaussian function. The curve from the FDM method agrees well with that of FE in the frequency regime $\omega/\Delta \gtrsim 10^{-3}$ but becomes negative at lower frequencies. Both curves fulfil the sum rule $\int_{-\infty}^{\infty} C(\omega)\, d\omega = 1.0$ to machine precision. The curve from the patching method is higher in the intermediate regime and matches the FE result in the low frequency regime. It violates the sum rule since the spectral function is obtained by approximately patching up the spectral functions of each energy shell. 7
The Lehmann representation of the FDM-produced $C(\omega)$ can be written as $C(\omega) = \sum_k w_k \delta(\omega - \epsilon_k)$. We separate the positive and negative components as $C(\omega) = C^{(+)}(\omega) + C^{(-)}(\omega)$, where $C^{(\pm)}(\omega)$ collects the poles with positive (negative) weights. Here, $w_k$ and $\epsilon_k$ are the weight and energy of the $k$th pole in $C(\omega)$, respectively. In Fig.2(b), we compare $C^{(+)}(\omega)$, $|C^{(-)}(\omega)|$, and $C(\omega)$. In a wide frequency range, including where FDM agrees well with FE, $C^{(+)}(\omega)$ and $|C^{(-)}(\omega)|$ are larger than $C(\omega)$, showing that a cancellation of errors occurs in the FDM-produced $C(\omega)$. In contrast, FE produces $C^{(-)}(\omega) = 0$ at machine precision for all parameters.
We find that it is easier for the FDM-produced $C(\omega)$ to become negative when a smaller $\Lambda$ and a smaller broadening parameter $B$ are used. In contrast, using larger $\Lambda$ and $B$ can recover a positive $C(\omega)$ which is in quantitative agreement with the FE result, even though $C^{(-)}(\omega)$ is still present. Fig.3 compares the results of $C(\omega)$ from FDM and FE, obtained at the same parameters as in Fig.2 except for using larger $\Lambda = 4.0$ and broadening parameter $B = 1.2$.
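For reference, the log-Gaussian broadening mentioned above can be sketched as follows. We use one common weight-conserving convention for the kernel (conventions differ in the literature), and the pole data below are hypothetical numbers chosen only to exercise the routine.

```python
import numpy as np

def log_gauss_broaden(poles, weights, omega, b=0.7):
    """Broaden discrete poles C(w) = sum_k w_k delta(w - e_k) with a
    weight-conserving log-Gaussian kernel (one common convention):
      delta(w - e) -> exp(-b^2/4) / (b*|e|*sqrt(pi)) * exp(-(ln(w/e)/b)^2).
    Positive and negative poles are broadened on their own half-axes."""
    C = np.zeros_like(omega)
    for e, wk in zip(poles, weights):
        if e == 0.0:
            continue                     # w = 0 poles need separate handling
        side = np.sign(omega) == np.sign(e)
        w_abs = np.abs(omega[side])
        C[side] += (wk * np.exp(-b**2 / 4) / (b * abs(e) * np.sqrt(np.pi))
                    * np.exp(-(np.log(w_abs / abs(e)) / b) ** 2))
    return C

# hypothetical pole list, e.g. from a Lehmann sum as in the previous sketch
poles = np.array([1e-3, 3e-3, 1e-2, -2e-3])
weights = np.array([0.2, 0.5, 0.25, 0.05])
omega = np.concatenate([-np.logspace(-5, -1, 200)[::-1],
                        np.logspace(-5, -1, 200)])
C = log_gauss_broaden(poles, weights, omega)
print(np.trapz(C, omega))   # ~ weights.sum(), up to grid truncation
```

The width parameter b plays the role of B above: a larger b smooths the curve more strongly but introduces a larger broadening error.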
Although the FDM curve still contains a significant negative contribution $C^{(-)}(\omega)$, the full curve $C(\omega)$ becomes positive at all frequencies and agrees quite well with that of FE. This is achieved, however, at the expense of introducing a larger logarithmic discretization error and a larger broadening error.
We explore whether the negative $C(\omega)$ of FDM NRG appears commonly or only accidentally at special NRG parameters. Fig.4(a) shows $|C^{(-)}(\omega)|/C^{(+)}(\omega)$ obtained from the FDM method for $s = 0.8$ at a low temperature $T = 10^{-8}\Delta$, in the localized phase $\alpha > \alpha_c$, and with a series of $M$ values. In the limit of sufficiently large $M$, the FDM method should produce the exact $C(\omega)$ of $\tilde H_N$, so we expect that the negative weight problem in $C(\omega)$ will disappear at sufficiently large $M$. Fig.4(b) shows how the FDM-produced $C(\omega)$ evolves with increasing $M$. The high frequency regime of $C(\omega)$ ($\omega/\Delta > 10^{-4}$) converges already for $M = 60$. With increasing $M$, the frequency regime with converged $C(\omega)$ extends slowly towards lower frequency. The converged part agrees well with the FE curve. However, we find that the integrated negative weight $|W^{(-)}|$ does not decrease with increasing $M$. To understand this observation, using a smaller $N_b = 4$, we show in the inset $|W^{(-)}|$ as a function of chain length $N$ for different $M$. For a fixed $M$, $|W^{(-)}|$ is zero for the short chain whose states are all kept. As $N$ increases further, $|W^{(-)}|$ first increases exponentially and then saturates when $N \gg \ln M / \ln N_b$. For a larger $M$ value, the whole curve shifts to the right, but the saturated value of $|W^{(-)}|$ does not decrease. This means that in FDM, the negative weight vanishes only when $M$ covers the whole Hilbert space of the chain, i.e., when $M \gtrsim N_b^{N}$.
In Fig.5, we explore how the FDM-produced negative weight, $|W^{(-)}| \equiv |\int_0^{+\infty} C^{(-)}(\omega)\, d\omega|$, changes with the physical parameters $\alpha$, $\epsilon$, and $T$. Fig.5(a) and Fig.5(b) show that $|W^{(-)}|$ is largest in the parameter regime $\alpha \gtrsim \alpha_c$ and at intermediate $\epsilon$. For $\alpha \ll \alpha_c$, $|W^{(-)}|$ is larger at intermediate $T$. For $\alpha \gtrsim \alpha_c$, it is larger at low $T$. We also find that the smaller $s$ is, the larger $|W^{(-)}|$ is. For $s = 0.3$ shown in Fig.5(b), $|W^{(-)}|$ can be as large as 0.16, a significant portion of the total weight of the regular part of $C(\omega)$, considering the sum rule $\int_{-\infty}^{\infty} C(\omega)\, d\omega = 1.0$ and that $C(\omega)$ contains $c\,\delta(\omega)$ with $c > 0$ for $\langle\sigma_z\rangle \neq 0$. Note that $|W^{(-)}| > 0$ does not necessarily imply that $C(\omega)$ becomes negative or that it deviates significantly from the FE curve, because the errors in $C^{(-)}(\omega)$ and $C^{(+)}(\omega)$ may cancel each other quite accurately in $C(\omega)$, as demonstrated in Fig.3.
Now we show an example in which the negative $C(\omega)$ obtained from the FDM method hinders the observation of a physical phenomenon. Fig.6 shows $C(\omega)$ curves for several $\alpha > \alpha_c$ values at a finite temperature $T/\Delta = 10^{-5}$. So far, the spectral function of SBM has not been studied in detail in this parameter regime. We plot both FDM and FE results. From the FE curves (solid lines), one can find a narrow frequency window around $\omega \approx T$ where $C(\omega) \sim \omega^s$ occurs (dot-dashed straight eye-guiding lines). Below an $\alpha$-dependent low frequency $\omega_r$, $C(\omega)$ increases sharply. In the range $\omega_r \lesssim \omega \lesssim T$, a pseudo-gap forms. As $T$ decreases (not shown here), the lower boundary of this pseudo-gap range shifts towards lower frequency, forming an extended range with $C(\omega) \sim \omega^s$ behavior. In the limit $T = 0$, the expected Shiba relation for the symmetry-broken phase 24 will be recovered.
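The decomposition used in Figs. 4 and 5 amounts to simple bookkeeping over the pole list; before turning to the FDM curves in this regime, here is a short sketch of that bookkeeping (the pole data are randomly generated stand-ins, not actual FDM output).

```python
import numpy as np

def split_components(pole_w, pole_e, bins):
    """Separate C(w) = sum_k w_k delta(w - e_k) into C(+) and C(-) by
    histogramming positive- and negative-weight poles separately."""
    pos = pole_w > 0
    Cp, _ = np.histogram(pole_e[pos], bins=bins, weights=pole_w[pos])
    Cm, _ = np.histogram(pole_e[~pos], bins=bins, weights=pole_w[~pos])
    width = np.diff(bins)
    return Cp / width, Cm / width        # per-unit-frequency densities

# hypothetical FDM-like pole list containing a few negative weights
rng = np.random.default_rng(1)
pole_e = rng.uniform(0, 1, 500)
pole_w = rng.exponential(1e-3, 500)
pole_w[rng.choice(500, 30, replace=False)] *= -0.3   # inject negative weights

bins = np.logspace(-3, 0, 30)
Cp, Cm = split_components(pole_w, pole_e, bins)

# integrated negative weight |W(-)| = |sum of all negative pole weights|
W_minus = abs(pole_w[pole_w < 0].sum())
print(f"|W(-)| = {W_minus:.4e}, total weight = {pole_w.sum():.4e}")
```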
In contrast, the FDM method produces negative or irregular curves (dashed lines) in the $C(\omega) \sim \omega^s$ range and the pseudo-gap range, failing to give the complete scenario.
Finally, we investigate the computing time of the FDM and FE methods. Fig.7 and Fig.8 show the scaling of the computation time of FE (Fig.7) and FDM (Fig.8) with respect to the NRG parameters $N$, $M$, and $N_b$. They show that FE is more computationally demanding, with a computing time proportional to $M^4 N_b^2 N^2$, while the FDM method is much faster, with the scaling $M^2 N_b^2 N^{1/2}$. This is expected because the FE formalism of the GF includes inter-shell excitations, while the FDM one includes only intra-shell excitations. It is an open question how to modify the FE method to accelerate the computation while keeping its advantage of positiveness. Both the FE and FDM algorithms can be implemented with efficient parallel computing.
IV. DISCUSSION AND SUMMARY
The time-dependent NRG 8 employs the same NRG approximation as FDM does, and the unitary quantum evolution is not treated accurately. Similar to the equilibrium situation, the exact result for $\tilde H_N$ will provide a well-behaved time evolution of $\langle O(t) \rangle$. Therefore, the present FE for the equilibrium state can be extended to time-dependent NRG for studying quantum quench problems. A comparison study will shed light on to what extent the FE method can improve the results for the non-equilibrium time evolution of quantities of interest.
The concept of an exactly solvable effective projected Hamiltonian, such as $\tilde H_N$ in the present work, can also be extended to other algorithms. The energy-based truncation criterion used in the ordinary NRG algorithm does not produce the optimal matrix product eigenstates. By replacing the energy-based truncation criterion of NRG with a density matrix-based criterion, or by using the variational scheme of matrix product states, 10 the NRG algorithm can be improved, and a bridge between NRG and the density matrix renormalization group (DMRG) has been established. [9][10][11] The idea of FE could also be applied to these new NRG algorithms for better precision. For one-dimensional quantum many-body systems with short-range entanglement, it is an interesting open question whether an exactly solvable effective projected Hamiltonian like $\tilde H_N$ can be constructed and an accurate full-spectrum algorithm for the dynamical quantities can be developed.
In summary, we propose the FE algorithm for calculating the dynamical quantities of quantum impurity models in the equilibrium state. This algorithm is based on the exact solution of the projected NRG Hamiltonian and hence circumvents the negative spectral function problem of FDM NRG. We demonstrate the effect of FE and its advantage over the FDM method by a comparison study of $C(\omega)$ for SBM.
V. ACKNOWLEDGMENTS
APPENDIX
According to Ref. 8, a complete orthonormal basis for the full NRG chain Hamiltonian $H_N$ can be constructed from the discarded states $|s\rangle_n^D$ and the environment states $|e\rangle_n$ as $|se\rangle_n^D = |e\rangle_n \otimes |s\rangle_n^D$, which can be written in the matrix product state representation. For the last chain site $n = N$, all the eigenstates of $H_N$ are regarded as discarded. Similarly, one can construct the kept states $\{|se\rangle_n^K\}$, but they do not form a complete orthonormal basis. These states have the following properties. 7 (i) Orthonormal relation. For the same shell $n = m$, ${}_n^X\langle se|s'e'\rangle_n^X = \delta_{ss'}\delta_{ee'}$ ($X = K, D$). For different shells $n < m$, ${}_n^D\langle se|s'e'\rangle_m^X = 0$ ($X = K, D$). (ii) Inner product.
For $n < m$, the inner product involves the factor $\delta_{e>m,\, e'>m}$, which equals unity if $\sigma_N^e ... \sigma_{m+1}^e$ of environment $e$ equals $\sigma_N^{e'} ... \sigma_{m+1}^{e'}$ of environment $e'$, and equals zero otherwise. (iii) Completeness relation. Here $d_0$ is the number of eigenstates of $H_{n_0}$. In this work, we suggest the following exact relation. (iv) Eigenstates of $\tilde H_N$:
$\tilde H_N |se\rangle_n^D = E_{ns}^{D} |se\rangle_n^D .$  (A10)
The NRG Hamiltonian $\tilde H_N$ here is defined in Eq.(3) of the main text. This equation, together with $H_N |se\rangle_n^K \approx E_{ns}^{K} |se\rangle_n^K$, was called the NRG approximation in Ref. 7. In fact, Eq.(A10) is an exact equation, while the corresponding equation for the kept states is an approximation. In the derivation of the FDM formalism, both equations were used. 7 In this work, we only use the exact equation Eq.(A10) for FE.
We start from the Lehmann representation of the Fourier transform of $\langle A(t)B \rangle$ (A13). Using Eq.(A6) and ${}_n^D\langle se|A|\tilde s \tilde e\rangle_n^K = {}_n^D\langle s|A|\tilde s\rangle_n^K\, \delta_{e,\tilde e}$, we further obtain the expression (A14) for $m > n$. Here, we have assumed that $A$ is a local operator defined in the impurity Hilbert space. ${}_m^D\langle s'e'|B|se\rangle_n^D$ ($m > n$) can be obtained similarly. We then obtain the numerator of Eq.(A12) for $m > n$ as Eq.(A15), which involves the matrix elements $O^{KD}$. The sum over the environmental indices $\sigma_i$ ($i = n+1, n+2, ..., m$) contains an exponentially large number of terms. We carry out this summation efficiently using the recursive formulae Eqs.(12)-(14) of the main text. The expression for $m < n$ can be obtained from Eq.(A15) by using the exchange $m \leftrightarrow n$, $s \leftrightarrow s'$, $e \leftrightarrow e'$, $A \leftrightarrow A^{\dagger}$, $B \leftrightarrow B^{\dagger}$ and taking the complex conjugate. We split the summation $\sum_{m,n}$ in Eq.(A12) into the parts with $m = n$, $m > n$, and $m < n$. Inserting the respective expressions and after some simplification, we obtain Eq.(A16). Eq.(A16) gives the particle part of the retarded GF. The hole part $(1/i)\int_0^{\infty} {\rm Tr}[\rho B A(t)]\, e^{i(\omega + i\eta)t}\, dt$ can be obtained similarly. From them, one obtains the full expression for $G_{AB}^{f/b}(\omega)$, i.e., Eqs.(10)-(14) of the main text. One can estimate that the FE computation time for the GF scales as $N^2 M^4$. Parallel computation can be easily implemented for this formalism.
2020-07-01T01:01:34.662Z
2020-06-30T00:00:00.000
{ "year": 2020, "sha1": "5ff6e0869782d5f0d12e8de987473dfbccb1f322", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2006.16488", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "5ff6e0869782d5f0d12e8de987473dfbccb1f322", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
271480360
pes2o/s2orc
v3-fos-license
F1ALA: ultrafast and memory-efficient ancestral lineage annotation applied to the huge SARS-CoV-2 phylogeny Abstract The unprecedentedly large size of the global SARS-CoV-2 phylogeny makes any computation on the tree difficult. Lineage identification (e.g. the PANGO nomenclature for SARS-CoV-2) and assignment are key to track the virus evolution. It requires annotating clade roots of lineages to unlabeled ancestral nodes in a phylogenetic tree. Then the lineage labels of descendant samples under these clade roots can be inferred to be the corresponding lineages. This is the ancestral lineage annotation problem, and matUtils (a package in pUShER) and PastML are commonly used methods. However, their computational tractability is a challenge and their accuracy needs further exploration in huge SARS-CoV-2 phylogenies. We have developed an efficient and accurate method, called “F1ALA”, that utilizes the F1-score to evaluate the confidence with which a specific ancestral node can be annotated as the clade root of a lineage, given the lineage labels of a set of taxa in a rooted tree. Compared to these methods, F1ALA achieved roughly an order of magnitude faster yet with ∼12% of their memory usage when annotating 2277 PANGO lineages in a phylogeny of 5.26 million taxa. F1ALA allows real-time lineage tracking to be performed on a laptop computer. F1ALA outperformed matUtils (pUShER) with statistical significance, and had comparable accuracy to PastML in tests on empirical and simulated data. F1ALA enables a tree refinement by pruning taxa with inconsistent labels to their closest annotation nodes and re-inserting them back to the pruned tree to improve a SARS-CoV-2 phylogeny with both higher log-likelihood and lower parsimony score. Given the ultrafast speed and high accuracy, we anticipated that F1ALA will also be useful for large phylogenies of other viruses. Codes and benchmark datasets are publicly available at https://github.com/id-bioinfo/F1ALA. Introduction Phylogenetics can play an important role in tracing the spread of emerging virus variants by integrating lineage information into a phylogenetic tree.During the COVID-19 pandemic, the Phylogenetic Assignment of Named Global Outbreak Lineages (PANGO lineages) has been widely utilized to categorize SARS-CoV-2 sequences into specific lineages to assist public health control measures (Rambaut et al. 2020).With clade roots of these lineages being annotated at ancestral nodes in the tree, lineage information of descendant samples can be efficiently determined while the ancestral nodes with annotations can provide the evolution history for the virus variants (McBroome et al. 2021).We call the problem of identifying and annotating the clade roots of lineages in a phylogeny to be ancestral lineage annotation (ALA) (Fig. 1). Ancestral character reconstruction (ACR) can be used to infer evolutionary dynamics by estimating the states of ancestral nodes for a character of interest (e.g.ecological, phenotypic, and biogeographic traits) in a phylogenetic tree when character labels are given for some or all taxa (Ishikawa et al. 2019).If lineages are the characters of interest in ALA, an ACR method, e.g.PastML (Ishikawa et al. 2019), would construct ancestral states of lineages, and subtrees with identical lineage states are considered as clusters for the annotation of corresponding lineages. 
Most conventional ACR methods are not suitable for the ALA in a huge SARS-CoV-2 phylogeny.pUShER is currently the default inference pipeline for lineage assignment in SARS-CoV-2 PANGO lineage nomenclature system (pangolin) (O'Toole 2022).pUShER applies its packaged tool "matUtils" to annotate PANGO Figure 1.Illustration of the algorithm for ALA.Given a tree with 5 taxa (Nodes 5-9) and 4 internal nodes (Nodes 1-4) where Nodes 5-7 are labeled as lineage A and Nodes 8-9 are labeled as lineage B, the ALA is computed in three steps.Step1: Extract potential annotation nodes for lineages A (Nodes 1-3 and 5-7) and B (Nodes 1, 3-4, and 8-9) [shown in the headers (black background) of the top two tables].Step2: Determine the order of lineages for ancestral annotation based on the annotation confidence score (the largest F1-score for each lineage, i.e.A = 4/5 and B = 1, marked by underlines in the top two tables).So lineage B is assigned first and then A, as shown by ① and ② in the bottom two tables.Step3: Assign the annotation for B at Node 4 first (middle table), then for A at Node 1.When recalculating F1-scores for potential annotation nodes of lineage A, the taxa at Nodes 8 and 9 are excluded from formulae (1-3) due to Nodes 8 and 9 having been already assigned to the confirmed annotation of Node 4 (bottom table).The F1-score tables for lineage B are the same in Step 2 and Step 3. lineages of SARS-CoV-2 at ancestral nodes in a reference tree (i.e.ALA problem) (McBroome et al. 2021, Turakhia et al. 2021).For ALA, matUtils constructs consensus sequences for all SARS-CoV-2 sequences with the same lineage label on a given phylogenetic tree.It then searches for the optimal node to insert a consensus sequence to the tree for each lineage resulting in the lowest additional parsimony score by the phylogenetic placement method UShER (Turakhia et al. 2021).This optimal node is defined as the clade root of the lineage.However, occasions of multiple optimal nodes for a single consensus sequence in the ALA by matUtils were frequently observed, e.g., 1139 out of 1248 PANGO lineage members in the benchmarking 100K dataset in this scenario ("Materials and Methods" section).Multiple optimal nodes would cause uncertainty in the phylogenetic placement to determine which node should be considered as the clade root of a lineage.At the same time, the quality of consensus sequence inferred for a lineage will be affected by the quality of sequences belonging to this lineage.This observation was verified by our simulation benchmarks that the accuracy of matUtils dropped significantly when the error rate of sequences increased and the number of used sequences for ALA decreased (details in "Results" section).Its runtime and memory usage were still substantial (see Table 1 for details). Here, we present a novel ALA approach (F1ALA) that applies the F1-score (Powers 2008) to evaluate the confidence with which ancestral nodes in a tree can be annotated as the clade roots of lineages.When compared to PastML and matUtils (pUShER) on medium, large, and huge SARS-CoV-2 phylogenies, F1ALA achieved roughly an order of magnitude faster than these methods with ∼12% of their memory usage, which is able be run on a laptop computer even for the ALA in a 5.26M-taxa tree (Table 1).(Turakhia et al. 2021) are constructed by the online tree updating method UShER (Turakhia et al. 
2021), where new SARS-CoV-2 genome sequences are sequentially inserted into a backbone tree. However, as repeated sample insertions do not update the backbone tree, any error in prior insertions cannot be corrected. Hence, tree optimization is required to detect and correct potential mis-insertions using techniques such as nearest-neighbor interchange or subtree-pruning-regrafting (SPR), which remain time-consuming (Price et al. 2010). We propose a new tree refinement method that iteratively removes all taxa whose labels are inconsistent with their closest annotation nodes, as detected by F1ALA, and re-inserts them using online tree updating tools such as UShER and TIPars (Turakhia et al. 2021, Ye et al. 2024). This achieved both a larger tree log-likelihood and a smaller parsimony score for the refined tree.
Algorithm for ancestral lineage annotation
To trace the spread of viral lineages, ALA is to infer the clade roots (as annotation nodes) for these lineages when a set of taxon names is provided for each lineage in a rooted phylogenetic tree. It should ensure that taxa under these annotation nodes remain monophyletic for all lineages (McLennan 2010). Nevertheless, because the provided taxa from pangolin are sometimes non-monophyletic in a given tree (McBroome et al. 2021), simply using the most recent common ancestor does not yield accurate inference of their clade roots. Instead, F1ALA calculates the F1-score for unlabeled ancestral nodes and iteratively assigns a lineage annotation to the ancestral node with the largest F1-score (this ancestral node with a lineage annotation is called an "annotation node"). The F1-score is a metric of predictive performance, being the harmonic mean of the precision and recall. A true positive (TP) is defined as the provided lineage label of a taxon being the same as that of its closest annotation node. Then, the precision is the number of TP taxa divided by the number of all taxa in the subtrees of the annotation nodes, including those identified incorrectly (their given lineage labels differing from those of their closest annotation nodes), and the recall is the number of TP taxa divided by the number of all taxa with provided lineage labels.
In a rooted tree $T$, with taxon nodes $V$, let $\{L_i\}$ be all members of the lineages $L$, and $\{L_{i,j}\}$ be the lineage labels given to a set of taxa $\{V_{i,j}\}$ belonging to the lineage $L_i$, where $i = 1 : |L|$ and $j = 1 : |L_i|$. F1ALA computes ALA in three steps, that is, to determine the clade root $CR_i$ in tree $T$ to annotate lineage $L_i$ ($CR_i$ becomes an annotation node). The lineage label of any internal or external node in tree $T$ is inferred from the lineage of its closest annotation node (Fig. 1).
Step 1. Extract potential annotation nodes. A potential annotation node of a lineage must be among the ancestral nodes of the taxa belonging to this lineage. A recursive function determines all ancestral nodes, $N_i$, for a lineage $L_i$, where $N_i$ are all unique ancestral nodes on the paths from any taxon $V_{i,j}$ in $L_i$ (for $j = 1 : |L_i|$, with lineage label $L_{i,j}$) to the root of tree $T$.
Step 2. Determine the order of lineages for ancestral annotation. For a lineage $L_i$, let the subtree under any potential annotation node $N_{i,k} \in N_i$ be $T_{i,k}$, with taxon set $\{V_{i,j_k}\}$. The F1-score of $N_{i,k}$ is then
Precision: $P_{i,k} = \mathrm{TP} / |\{V_{i,j_k}\}|$, (1)
Recall: $R_{i,k} = \mathrm{TP} / |L_i|$, (2)
F1-score: $F1_{i,k} = 2 P_{i,k} R_{i,k} / (P_{i,k} + R_{i,k})$, (3)
where TP is the number of taxa within subtree $T_{i,k}$ that have the lineage label $L_i$. The highest F1-score, $\max_k F1_{i,k}$, is referred to as the annotation confidence score for lineage $L_i$. A smaller annotation confidence score for a lineage means there is more uncertainty about its potential annotation in tree $T$.
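To make Steps 1 and 2 concrete, here is a minimal Python sketch of the F1-score search over potential annotation nodes on the toy tree of Fig. 1, including the greedy exclusion of already-claimed taxa used in Step 3 (described just below). The tree encoding, tie-breaking, and function names are our own illustrative choices, not F1ALA's implementation.

```python
from collections import defaultdict

# Tree as child -> parent map; leaves carry lineage labels.
# Toy tree mirroring Fig. 1: internal nodes 1-4, taxa 5-9.
parent = {2: 1, 3: 1, 5: 2, 6: 2, 7: 3, 4: 3, 8: 4, 9: 4}
labels = {5: "A", 6: "A", 7: "A", 8: "B", 9: "B"}

def ancestors(node):
    path = [node]
    while node in parent:
        node = parent[node]
        path.append(node)
    return path

subtree = defaultdict(set)           # taxa under each internal node
for taxon in labels:
    for anc in ancestors(taxon):
        subtree[anc].add(taxon)

def best_annotation(lineage, excluded=frozenset()):
    """(best F1, node) over potential annotation nodes of a lineage,
    excluding taxa already claimed by earlier annotations (Step 3)."""
    members = {t for t, l in labels.items() if l == lineage}
    candidates = {a for t in members for a in ancestors(t) if a not in labels}
    best = (0.0, None)
    for node in candidates:
        taxa = subtree[node] - excluded
        tp = len(taxa & members)
        if tp == 0:
            continue
        precision, recall = tp / len(taxa), tp / len(members)
        f1 = 2 * precision * recall / (precision + recall)
        best = max(best, (f1, node))   # ties broken arbitrarily
    return best

# greedy assignment in descending order of annotation confidence (Step 2)
order = sorted(set(labels.values()), key=lambda l: -best_annotation(l)[0])
claimed, annotation = set(), {}
for lineage in order:
    f1, node = best_annotation(lineage, excluded=frozenset(claimed))
    annotation[lineage] = (node, f1)
    claimed |= subtree[node]
print(annotation)
```

On this toy input the sketch reproduces the Fig. 1 result: lineage B is annotated at Node 4 with F1 = 1 first, after which lineage A attains F1 = 1 at Node 1 once the taxa under Node 4 are excluded.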
F1ALA computes the annotation confidence scores for all lineages $L$ and sorts them in descending order.
Step 3. Assign the annotation for each lineage according to the order from Step 2. To compute the annotation for lineage $L_i$, let the taxa in the subtrees of the previously confirmed annotation nodes, for lineages ranking ahead of $L_i$ in the sorted order, be $V_C$. Then F1ALA re-computes the F1-scores for any potential annotation node $N_{i,k} \in N_i$ by excluding $V_C$ from $\{V_{i,j_k}\}$ when calculating formulae (1-3); we have
Precision: $P'_{i,k} = \mathrm{TP}' / |\{x \in \{V_{i,j_k}\} : x \notin V_C\}|$, (4)
Recall: $R'_{i,k} = \mathrm{TP}' / |L_i|$, (5)
F1-score: $F1'_{i,k} = 2 P'_{i,k} R'_{i,k} / (P'_{i,k} + R'_{i,k})$, (6)
where TP' is the number of taxa in $\{x \in \{V_{i,j_k}\} : x \notin V_C\}$ that have the lineage label $L_i$. An example of this re-computation is given in the bottom table of Fig. 1. Lineage $L_i$ is annotated at the node with the highest F1-score, which is the clade root $CR_i$ of lineage $L_i$. Each lineage will only be annotated at one node of tree $T$, as a monophyletic group. This does not guarantee that all taxa in a tree are assigned under the annotation nodes, but the assignments are generally of high quality (Supplementary Table S1). The same holds for matUtils (McBroome et al. 2021) and PastML (Ishikawa et al. 2019).
Algorithm for tree refinement
Given a rooted phylogenetic tree, and lineage labels and sequences for all or a set of taxa, the algorithm uses ancestral annotation information to refine the tree topology. After ancestral lineages are annotated by F1ALA, all taxa with labels different from those of their closest annotation nodes are removed from the tree. The removed taxa are sorted in ascending order by the number of ambiguous nucleotides in their sequences. An online tree updating method (e.g. TIPars or UShER) is used to re-insert them sequentially into the reduced tree. This refinement process is repeated until there is no improvement in the accuracy of ALA or a maximum iteration limit is exceeded.
Since errors in the PANGO lineages labeling SARS-CoV-2 sequences are a well-known problem (O'Toole et al. 2021), 70 757 taxa from the 100K dataset that had identical lineage labels based on the annotations by F1ALA, PastML, and matUtils (Fig. 2e) were considered as a "ground truth." A phylogenetic tree was constructed using these sequences by FastTree2 v2.1.11 (double-precision version) under the GTR GAMMA20 model using hCoV-19/Wuhan/WIV04/2019/EPI_ISL_402124 as the root, and the output binary tree was collapsed to a polytomous tree using the "ape" R package (tolerance = 1.0E-6). The accuracy of ALA was evaluated when wrong lineage labels were artificially introduced into this 70 757-taxa reference tree. PANGO lineage labeling errors (replacement of the original lineage label by a false one) were randomly applied to 5%, 10%, 20%, and 50% of the taxa in the tree, with 100 replicates of these labeling "errors." Independently, lineage labels were masked for 5%, 10%, 20%, and 50% of the taxa in the tree, with 100 replicates.
F1ALA was benchmarked against PastML (1.9.34) and matUtils (pUShER; v0.6.2) using the precision and recall metrics. A TP was defined as the lineage label given to a taxon being the same as that of its closest annotation node; if not, it was a false positive (FP). Then, precision = TP/(TP + FP) (i.e. the fraction of tips correctly classified as a specific lineage out of all tips the model predicted to belong to that lineage), and recall = TP/(total number of labeled taxa) (i.e. the fraction of tips in a lineage that the model correctly classified out of all tips in that lineage). In addition, pairwise single nucleotide polymorphism (SNP) distances between sequences within a lineage and between lineages were also used for evaluation, calculated by snp-dists v0.8.2 (https://github.com/tseemann/snp-dists). A lower mean SNP distance within a lineage indicates a better ALA, and a larger mean SNP distance between lineages indicates a better ALA.
Since PastML may generate multiple clusters for a specific lineage, the biggest cluster was chosen to be annotated as a monophyletic group (McLennan 2010). PastML was run under the DOWNPASS model to minimize changes in ancestral states. matUtils was run using the annotate function with "set-overlap = 0."
Computational performance
The computational performances of F1ALA, PastML, and matUtils (pUShER) were compared on the 100K-, 660K-, and 5.26M-taxa SARS-CoV-2 phylogenies (Table 1). F1ALA annotated 2277 PANGO lineages in the 5.26M-taxa phylogeny in 12 min and 42 s, roughly an order of magnitude faster than the other methods. F1ALA significantly optimized the memory requirement to 3.6 GB, a reduction of around 88% relative to PastML, which allows the ALA of a huge phylogeny to be run on a laptop or general-purpose computer.
Ancestral lineage annotation of PANGO lineages
The accuracy of ALA was evaluated by precision and recall ("Materials and Methods" section) (Fig. 2a and b). F1ALA achieved the highest precision with the 100K-taxa and 660K-taxa phylogenies (higher than PastML by 4.5% and 10.0%, respectively). For the 5.26M-taxa phylogeny, F1ALA ranked second with 99.8% precision, less than 0.2% below PastML. For recall, F1ALA had the best performance on the 660K-taxa phylogeny (higher than PastML by 2.8%) and differed by 0.02% and 0.1%, respectively, from PastML on the 100K-taxa and 5.26M-taxa phylogenies. matUtils (pUShER) showed the worst performance on all benchmarks (Supplementary Table S1).
F1ALA achieved a significantly smaller mean pairwise SNP distance within a lineage and a larger distance between lineages than the other compared methods on the 100K dataset (P-value < 0.01 in a paired t-test; Supplementary Table S2). The calculation of SNP distances for the 660K and 5.26M datasets could not be completed within 96 h using 32 threads on an AMD EPYC 9654 processor, due to the large pairwise computation requirement.
F1ALA can generate an HTML file to allow visualization of the ALA. An example using the 5.26M-taxa phylogeny is presented in Fig. 2c, which by default shows the 50 largest lineages. F1ALA can also output a lineage-collapsed tree (Fig. 2d), where each lineage is represented by its annotation node and the original tree topology is preserved.
On simulated datasets with labeling errors (Fig. 3a and b and Supplementary Table S3), F1ALA achieved high and robust precision and recall values for the different percentages of taxa with lineage labeling errors. The precision of F1ALA is significantly better than that of PastML in all settings (P-value < 0.05). The accuracy of matUtils (pUShER) dropped significantly when the error rate increased. For masked labels (part of the lineage labels of taxa were masked) (Fig.
3c and d and Supplementary Table S3), F1ALA achieved high precision and recall, though those of PastML were significantly better (P-value < 0.05). matUtils (pUShER) performed more stably with masked labels, but still showed significantly lower precision and recall than F1ALA and PastML in all tests. F1ALA performed accurately given both kinds of errors, with over 99.4% precision and recall when error rates were ≤20%. With errors at 50%, precision and recall decreased by less than 1% for labeling errors and by less than 2% for masked labels, relative to the results with 5% of taxa having errors.
We performed linear regression of the Gamma20 log-likelihood and parsimony scores against precision (Fig. 4c) and recall (Fig. 4d) for the original tree and 4 iterations of tree refinement using TIPars and UShER (data from Fig. 4a and b). Precision and recall explained 99.3% and 89.1% of the variance in the tree parsimony score, respectively, showing that their usage as evaluation metrics can reflect the tree parsimony score.
matOptimize (v0.6.2) (Ye et al. 2022) is currently applied to optimize the huge SARS-CoV-2 phylogenetic trees in GISAID and Genome Browser, and uses fast subtree pruning and regrafting (SPR) moves. Compared to our proposed method (using F1ALA for ALA and TIPars or UShER for online tree updating; denoted as "F1ALA + TIPars" and "F1ALA + UShER" in Fig. 4e and f), the 100K-taxa tree refined by matOptimize achieved the highest recall in ALA (Fig. 4e) and the smallest tree parsimony score, but the lowest tree log-likelihood [even lower than that without refinement (the reference tree) by 0.9%] (Fig. 4f). "F1ALA + TIPars" improved the tree with the best log-likelihood, by 0.5%.
Discussion
ALA, particularly for pathogens affecting public health, has become a more pressing challenge given the extent of sequence data that can now be obtained. This is demonstrated by the need for annotation of PANGO lineages in the huge SARS-CoV-2 phylogenies. We present a novel and practical method, F1ALA, to achieve this, which was demonstrated to be highly efficient, in runtime and memory usage, on an extremely large phylogeny (Table 1) and to have high accuracy on empirical and simulated SARS-CoV-2 datasets (Figs 2 and 3).
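Operationally, the refinement loop evaluated in Fig. 4 can be written as a thin driver around the annotation and placement steps. The sketch below is our own schematic rendering under stated assumptions: `annotate`, `prune`, and `insert` stand in for F1ALA and an online updating tool such as TIPars or UShER, whose real command-line interfaces are not reproduced here.

```python
def refine_tree(tree, labels, seqs, annotate, prune, insert, max_iter=4):
    """Iterative tree refinement (a sketch; `annotate`, `prune` and
    `insert` are placeholders for F1ALA and an online updating tool)."""
    best_acc = -1.0
    for _ in range(max_iter):
        ann = annotate(tree, labels)   # lineage of closest annotation node
        wrong = [t for t in labels if ann.get(t) != labels[t]]
        acc = 1.0 - len(wrong) / len(labels)
        if acc <= best_acc:            # stop when ALA accuracy stalls
            break
        best_acc = acc
        tree = prune(tree, wrong)
        # re-insert cleanest sequences first: fewest ambiguous nucleotides
        wrong.sort(key=lambda t: sum(c not in "ACGT-" for c in seqs[t].upper()))
        for taxon in wrong:
            tree = insert(tree, taxon, seqs[taxon])
    return tree
```

Sorting the pruned taxa by their number of ambiguous nucleotides before re-insertion mirrors the rule stated in the Methods, so the best-resolved genomes shape the refined topology first.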
Lineage assignment can be seen as a multi-class classification problem, where precision and recall are two metrics that measure the quality of model predictions and how well the model recovers the actual observations. Notably, a higher precision may come with a lower recall. For example, a model may return only highly confident predictions, so that the precision is high but the recall is low (only a small proportion of instances is reported). The F1-score is a trade-off between precision and recall. F1ALA applies the F1-score to evaluate the confidence with which an ancestral node can be annotated as the clade root of a lineage, which allows it to emphasize one specific lineage, since F1ALA determines the annotations of lineages one at a time, even if there are imbalanced classes/lineages, which is the real situation for SARS-CoV-2. matUtils is based on a parsimony-based phylogenetic placement (UShER) that places the consensus sequence of each lineage into the tree, where the placed node is the clade root. PastML is a conventional ancestral state reconstruction method that can use either a parsimony or a maximum likelihood method.
We acknowledged there may be a bias toward F1ALA because the F1-score, the harmonic mean of the precision and recall, is also used as the metric of ALA performance. To eliminate this potential bias, pairwise SNP distances between sequences within a lineage and between lineages were also used for evaluating ALA performance. The results were consistent with the performance using precision and recall: F1ALA achieved a significantly smaller mean pairwise SNP distance within a lineage and a larger distance between lineages (Supplementary Table S2). On the other hand, the regression analysis in Fig. 4c and d shows that precision and recall explained 99.3% and 89.1% of the variance in the tree parsimony score, respectively, suggesting that their usage as evaluation metrics is practical.
Errors or omissions in the lineage labels assigned to taxa may introduce bias and affect the accuracy of ALA (Fig. 3). F1ALA performed robustly in these cases. PANGO nomenclature labeling errors were introduced and labels were masked to simulate missing data, which are frequent in the real SARS-CoV-2 sequence data (Shu and McCauley 2017, McBroome et al. 2021, O'Toole et al. 2021). F1ALA and PastML performed well and comparably on these tests, but matUtils (pUShER) was worse, particularly for labeling errors (Fig. 3a and b). ALA in matUtils (pUShER) relies on the consensus sequence of each lineage, so labeling errors or omissions lead to an incorrect or inadequately specified consensus sequence that might cause inaccurate phylogenetic placements (Turakhia et al. 2021).
F1ALA, PastML, and matUtils (pUShER) had higher precision and recall on the 5.26M dataset than on the 100K and 660K datasets. A possible reason is the different versions of pangolin downloaded for the three datasets, according to the time point at which each was generated. The PANGO nomenclature system has utilized two inference pipelines for lineage assignment, pangoLEARN (default in pangolin versions 1 to 3) (O'Toole et al.
2021) and pUShER (default in v4, which was released in April 2022) (O'Toole 2022). pangoLEARN is a machine learning method, while pUShER is based on phylogenetic placement. The PANGO lineage labels in the 100K and 660K datasets come from pangolin v2 (downloaded in January 2021) and v3 (downloaded on 6 September 2021), respectively, while those in the 5.26M dataset are from v4 (downloaded on 19 February 2023). Pangolin v2 and v3 are based on a machine learning method for lineage assignment (pangoLEARN), while v4 is based on a phylogenetic placement method (pUShER). The ALAs in F1ALA, PastML, and matUtils are all based on tree topology rather than machine learning, and are therefore expected to be more consistent with pangolin v4 than with v2 and v3. A recent study (de Bernardi Schneider et al. 2024) demonstrated only 82.13% and 84.68% concordance between pangoLEARN and pUShER in pangolin v3.1.13, but 97.28% and 97.35% in pangolin v4.0.2, in their two testing datasets, which is consistent with our results in Fig. 2. As a double check, we also applied the latest pangolin version, v4.3.1, to the 100K and 660K datasets, and both F1ALA and PastML achieved significantly higher precision and recall (Supplementary Table S4).
We have proposed a tree refinement method that utilizes the annotations from F1ALA in conjunction with online tree updating software (e.g. TIPars and UShER) to optimize a phylogenetic topology, increasing its log-likelihood and decreasing its parsimony score (Fig. 4a and b). In particular, the optimized tree using TIPars for tree updating achieved a larger Gamma20 log-likelihood than that of UShER [−1 944 123 (TIPars) versus −1 950 256 (UShER)]. However, the tree parsimony score of UShER was smaller [184 487 (TIPars) versus 183 762 (UShER)]. matOptimize, the commonly used method for tree refinement in huge SARS-CoV-2 phylogenies (Ye et al. 2022), produced the tree with the smallest parsimony score compared to our proposed method (F1ALA + TIPars or F1ALA + UShER) but the lowest log-likelihood (even lower than the reference tree) (Fig. 4f). This can be explained by UShER and matOptimize being fully parsimony-based methods that give limited consideration to the tree log-likelihood.
The improvement from tree refinement is mostly observed in the first iteration, which suggests that a small number of iterations is sufficient (Fig. 4a and b). Updating a tree by TIPars or UShER takes about 21 or 2 s, respectively, to insert 100 SARS-CoV-2 genomes into a 100K-taxa phylogeny (Ye et al. 2024). This makes the proposed tree refinement approach feasible for large trees.
After refinement of the 100K-taxa phylogeny, the precision and recall of ALA were approximately 95% (Fig. 4). Further investigation is needed to determine whether the remaining 5% of inconsistently annotated taxa are positioned incorrectly in the phylogeny due to the tree-building method, an error in ALA, or their PANGO lineages being inaccurately labeled.
With the rapid advancement of high-throughput sequencing technology and increasing recognition of the utility of genomic information in studying viruses, a substantial increase in the generation of new genomic sequences for various viruses is expected. When confronted with the huge phylogenetic trees resulting from this vast amount of genomic sequences, our method, F1ALA, is anticipated to be useful in providing efficient and accurate ALA. For example, ALA by F1ALA can be used to infer lineage labels for query samples and trace the virus evolution through the visualization of a lineage-collapsed tree (Fig.
2c and d), given a dataset with reference sequences and customized query samples, and the reconstructed phylogenetic tree. The detection of tips with a potentially mislabeled lineage in the phylogeny of one gene or genome segment, using the lineage labels defined from the phylogeny of another gene or segment, may provide evidence for reassortment or recombination.
Figure 2. Accuracy of ALA for the 100K-, 660K-, and 5.26M-taxa SARS-CoV-2 phylogenies. (a) Precision and (b) recall for ALA by F1ALA, PastML, and matUtils (pUShER). (c) The ALA using the 5.26M-taxa SARS-CoV-2 phylogeny by F1ALA, showing the top 50 lineages by the number of assigned taxa, with the Omicron lineage highlighted (branch lengths are not to scale to allow differentiation of the lineages). Annotation information (annotation node, distance to tree root, F1-score, and number of TPs) is shown when the mouse hovers over the nodes displayed in a browser. (d) The collapsed tree of 2277 PANGO lineages from the 5.26M-taxa SARS-CoV-2 phylogeny. Each lineage is represented by its annotation node in the tree. Branch length shows the number of mutations (instead of substitution rate) (McBroome et al. 2021), and Omicron sublineages (BA.1, BA.2, BA.5, and XBB.1.5) are highlighted. (e) Venn diagram showing the number of individual and shared TPs (proportions over all taxa) for the annotations by F1ALA, PastML, and matUtils.
Figure 3. Accuracy of ALA for the datasets with simulated errors. (a) Precision and (b) recall when introducing PANGO lineage labeling errors to 5%, 10%, 20%, and 50% of taxa in the tree (100 replicates). (c) Precision and (d) recall when lineage labels were masked for 5%, 10%, 20%, and 50% of taxa in the tree (100 replicates). Paired t-tests were statistically significant (P-value < 0.05) for all pair-wise comparisons among F1ALA, PastML, and matUtils (pUShER). The whiskers represent the minimum and maximum values, while the box shows the lower and upper quartiles with the median crossing the box.
Figure 4. Accuracy of tree refinement. (a) Precision and recall for four iterations of tree refinement. Each iteration contains an ALA by F1ALA and online updating of the tree by TIPars or UShER. (b) Gamma20 log-likelihood and parsimony score for four iterations of tree refinement. Gamma20 log-likelihood was calculated by FastTree2 (re-optimizing the branch lengths with a fixed topology). The tree parsimony score was calculated by UShER. (c) and (d) Linear regression of Gamma20 log-likelihood and parsimony score against precision (c) and recall (d) using trees generated in 0-4 iterations of tree refinement with F1ALA + TIPars and F1ALA + UShER. All regressions, except log-likelihood on precision, are statistically significant (P-value < 0.05). The dashed area shows the 95% confidence interval of the regression. Large R² values (>0.85) are marked in red. The differences between some points are too small to present in the graphs (overlapping), especially those from the third and fourth iterations of tree refinement. (e) Lineage annotation accuracy after tree refinement. "Reference" is the tree built by IQTREE2. Only one iteration of refinement by "F1ALA+X" (TIPars or UShER) is reported. (f) Gamma20 log-likelihood and parsimony score after tree refinement; other details as in (e).
Table 1. Runtime and memory used for ALA.
2024-07-27T15:11:23.831Z
2024-07-25T00:00:00.000
{ "year": 2024, "sha1": "8a7db9bfa906c5f9309f0ea63cf9d432bbc62c34", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/ve/advance-article-pdf/doi/10.1093/ve/veae056/58644976/veae056.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bebfed3edc3e01eaaf2ab893d8635bc0190c1445", "s2fieldsofstudy": [ "Computer Science", "Medicine" ], "extfieldsofstudy": [] }
8078635
pes2o/s2orc
v3-fos-license
A disposable biosensor based on immobilization of laccase with silica spheres on the MWCNTs-doped screen-printed electrode
Background
Biosensors have attracted increasing attention as reliable analytical instruments in in situ monitoring of public health and environmental pollution. For enzyme-based biosensors, the stabilization of enzymatic activity on the biological recognition element is of great importance. It is generally acknowledged that an effective immobilization technique is a key step to achieve the construction quality of biosensors.
Results
A novel disposable biosensor was constructed by immobilizing laccase (Lac) with silica spheres on the surface of multi-walled carbon nanotubes (MWCNTs)-doped screen-printed electrode (SPE). Then, it was characterized in morphology and electrochemical properties by scanning electron microscopy (SEM) and cyclic voltammetry (CV). The characterization results indicated that a high loading of Lac and a good electrocatalytic activity could be obtained, attributing to the porous structure, large specific area and good biocompatibility of silica spheres and MWCNTs. Furthermore, the electrochemical sensing properties of the constructed biosensor were investigated by choosing dopamine (DA) as the typical model of phenolic compounds. It was shown that the biosensor displays a good linearity in the range from 1.3 to 85.5 μM with a detection limit of 0.42 μM (S/N = 3), and the Michaelis-Menten constant (K_m^app) was calculated to be 3.78 μM.
Conclusion
The immobilization of Lac was successfully achieved with silica spheres to construct a disposable biosensor on the MWCNTs-doped SPE (MWCNTs/SPE). This biosensor could determine DA based on a non-oxidative mechanism in a rapid, selective and sensitive way. Besides, the developed biosensor could retain high enzymatic activity and possess good stability without cross-linking reagents. The proposed immobilization approach and the constructed biosensor offer a great potential for the fabrication of the enzyme-based biosensors and the analysis of phenolic compounds.
Background
Laccase (Lac) has been widely used to construct electrochemical biosensors for phenolic compounds and their derivatives, because it can catalyze the oxidation of phenolic compounds accompanied by the reduction of oxygen [1,2]. The high stability and enzymatic activity of the bioelectrochemical interface play a crucial role in the construction of Lac-based biosensors. The immobilization of enzymes on solid supports is one of the effective strategies, as it allows the enzyme to be recovered and reused for several reaction cycles [3,4]. There is intense interest in the construction of Lac-based biosensors using nanomaterials, due to their unique and particular properties [5]. Silica materials, which can accommodate enzymes of different dimensions without affecting their biological activity, can be considered suitable hosts for enzyme immobilization [6,7]. For example, functionalized SBA-15 mesoporous silica was applied to immobilize Lac for the oxidation of a mixture of four phenolic compounds [8]. In another work, Lac was encapsulated into a thin silicate film deposited on an Au electrode [9]. Moreover, magnetic mesoporous silica spheres were prepared to immobilize Lac as a promising support [10].
Multi-walled carbon nanotubes (MWCNTs), with their high surface area and excellent biocompatibility, are also a promising candidate as the matrix material to incorporate enzymes and construct enzyme-based biosensors [11,12]. Importantly, because of the low overvoltage and rapid electrode kinetics, MWCNTs have the ability to facilitate electron transfer between the enzyme and the electrode [13]. Therefore, MWCNTs have been employed as supporting materials for Lac, such as the matrix based on a MWCNTs-chitosan composite film [2], a polyazetidine prepolymer-MWCNTs integrated system [14], and a copper nanoparticles/chitosan/carboxylated MWCNT/polyaniline composite [15]. Recently, researchers have been committed to developing MWCNTs/silica nanocomposites as immobilization materials for Lac, because they possess the excellent properties of low toxicity and good electrocatalytic activity, and can provide a stabilizing microenvironment for Lac [16][17][18].
A screen-printed electrode (SPE) is a kind of planar sensor device with various substrates that are coated with layers of electroconductive and insulating inks at controlled thickness [19]. Several works related to biosensor construction have been reported based on the immobilization of Lac on SPEs, which can be incorporated into portable systems as an alternative detection method for direct in-situ analysis [20][21][22]. It is notable that the most common way to construct electrochemical interfaces is to drop conductive substrates onto the electrode surfaces [23,24]. However, it is difficult to produce thin (<1 mm) layers and to control the consistency of detection [25]. In contrast, printing technology provides a convenient route to produce electrochemical sensors with consistent chemical performance based on the modification of functional conductive materials, such as conducting polymers [19], ionic liquids [25], and enzyme-doped conductive materials [26,27].
Herein, our goal is to construct a disposable electrochemical biosensor by immobilizing Lac on a MWCNTs-doped SPE using silica spheres as the immobilization matrix (Lac/Si/MWCNTs/SPE) without cross-linking reagents.
The morphology and the electrochemical properties of the constructed biosensor were characterized. Moreover, its electrochemical sensing properties were evaluated by selective measurements of dopamine (DA). Figure 1 depicts the procedures used for constructing the disposable biosensor and the mechanism for the determination of DA. Scanning electron microscopy (SEM) results were obtained by using a Zeiss utra 55 field-emission SEM instrument (Zeiss, Germany). All the electrochemical measurements were performed with a CHI-1211A portable electrochemical workstation (Chenhua Instruments Co. Ltd., Shanghai, China). The measurements were performed at room temperature (~15°C). Fabrication of MWCNTs/SPE As the base electrodes for the printing process, SPEs with a standard three-electrode system and a 3.1 mm 2 working area for each were fabricated according to the process described by our previous work [28,29] with an AT-25P screen-printing machine (ATMA CHAMPENT. Corp., China). Compared to our previous publications, the working electrodes were printed using different mass proportions of MWCNT/carbon paste, drying at 100°C. The prepared MWCNTs/SPEs were then stored at 4°C until required. Construction of the disposable biosensor Silica spheres were synthesized according to the Stöber's method [30]. Typically, a solution of 5 mL of 33% ammonia solution was mixed with 50 mL of dry ethanol. After 3.14 mL of TEOS and 1.8 g of Milli-Q water was added in sequence, the solution was stirred to hydrolyze TEOS. After 12 h of stirring, a colloidal solution of silica spheres about 100 nm in diameter were obtained. Before modification, the bare SPEs were pretreated in pH 7.0 potassium phosphate buffer solution (PBS) by applying an anodic potential of 2.00 V for 300 s. The synthesized silica spheres colloidal suspension was mixed with Lac (10.0 mg mL -1 , prepared in 0.10 M pH 5.0 PBS) stock solution thoroughly in a volume ratio 2:3 for 24 hours. Then, 2.5 μL mixed solution of silica-Lac was coated onto the surface of the MWCNTs/SPE to form Lac/Si/MWCNTs/SPE as the disposable biosensor. After the solvent evaporated, the constructed biosensor was washed with deionized water to remove excess Lac. For comparison, Lac modified SPE (Lac/SPE), Lac modified MWCNTs/SPE (Lac/MWCNTs/SPE), and silica spheres modified MWCNTs/SPE (Si/MWCNTs/SPE) were fabricated with the similar steps. All of the modified electrodes were stored at 4°C. Morphology characterization of the disposable biosensor The typical SEM images of the disposable biosensor at different preparation stages are displayed in Figure 2. It can be seen that the surface of the bare SPE is covered by a layer of carbon particles (Figure 2A). On the contrary, on the MWCNTs/SPE, twisted MWCNTs distribute among the carbon particles to form a threedimensional structure ( Figure 2B). While, some of them are flat and embedded into the carbon particles, which may be due to the pressure during the screening process. After casting the silica solution loaded with Lac onto the MWCNTs/SPE, silica spheres can be found embedding into the Lac and connecting with each other Figure 1 Schematic illustration of the screen-printed configuration, the procedures used in the process of MWCNTs-doped SPE fabrication (step 1~step 4), and the detection procedures with the mechanism for the determination of DA at the disposable Lac/Si/ MWCNTs/SPE (reaction 1~reaction 4). ( Figure 2C), which presented that Lac has been immobilized onto silica spheres successfully. 
For comparison, the SEM image of silica spheres is presented in Figure 2D. Electrochemical properties of the disposable biosensor The electrochemical sensing properties of the disposable biosensor were investigated by choosing DA as the typical model of phenolic compounds. In Figure 3, the electrocatalytic properties of the Lac/Si/MWCNTs/ SPE ( Figure 3A), Lac/SPE ( Figure 3B), Lac/MWCNTs/ SPE ( Figure 3C), and Si/MWCNTs/SPE ( Figure 3D) for DA were compared by performing cyclic voltammetry (CV) experiments in PBS solution (pH 5.0). As we expected, no catalytic current responses are shown in the absence of DA (black curves in Figure 3A, B and C). In contrast, upon the presence of DA in solution, a pair of well-defined redox peaks is obtained (red curves in Figure 3B and C, and blue curve in Figure 3D). However, the voltammetric feature of the Lac/Si/MWCNTs/SPE (red curve in Figure 3A) differs significantly, because a cathodic peak at around −0.158 V vs. Ag/AgCl appears. The height of this cathodic peak is sensitive to the change of the DA concentration in PBS (not shown here). The possible reason is essentially based on the demonstrated non-oxidative electrochemical approach [31] by taking advantage of the chemical properties of DA and the catalytic activity of Lac, as shown in Figure 1 (reaction 1-4). DA can be oxidized into its quinonoid form (Figure 1, reaction 1) either through a reversible electrochemical method or an irreversible chemical method under the catalysis of Lac, which can be described simply as follows [7]: The quinones formed in reaction (1) are usually electrochemically active and subsequently re-reduced on the surface of the electrode at the appropriate potentials. DA, as a typical model of o-benzenediol, follows this reaction mechanism. Then, Lac is used to initialize the sequential intramolecular cyclization reactions of DA, including a deprotonation reaction (Figure 1, reaction 2), an intramolecular cyclization process (reaction 3), and a disproportionation reaction and/or oxidation (reaction 4). The finally formed 5,6-dihydroxyindoline quinone is readily electrochemically reduced at SPE [32]. On the basis of these reaction properties of DA, the nonoxidative electrochemical approach can be proposed for the determination of DA by measuring the cathodic current of 5,6-dihydroxyindoline quinone at a negative potential (−0.158 V). The results of Figure 3B and C show that symmetrical redox couple of DA at the Lac/SPE and the Lac/ MWCNTs/SPE with the potential difference between anodic and cathodic peaks (ΔE p ) are 0.051 V and 0.044 V, characteristic of a two-electron and twoproton quasi-reversible redox process of DA at both SPEs [32]. No cathodic peak at around −0.150 V is found at these SPEs. The results demonstrate that the Lac/SPE or the Lac/MWCNTs/SPE does not have any appreciable electrocatalytic activity to DA based on a non-oxidative electrochemical approach, implying the direct immobilization of Lac on bare SPE or MWCNTs/SPE is not successful. Moreover, the electrocatalytic features of the Si/MWCNTs/SPE to DA are similar to those of the Lac/SPE and the Lac/MWCNTs/ SPE in terms of the anodic (0.057 V) and cathodic (0.015 V) peak potentials (blue curve in Figure 3D), and there is still no cathodic peak at around −0.150 V appearing at this Si/MWCNTs/SPE. However, after immobilizing Lac on the surface of the MWCNTs/SPE with silica spheres, the cathodic peak caused by the enzymatic oxidation of DA appears at around −0.158 V (red curve in Figure 3D). 
Obviously, this process is ascribed to the two-electron, two-proton quasi-reversible redox process of 5,6-dihydroxyindoline quinone, and it implies that Lac has been stably immobilized on the Lac/Si/MWCNTs/SPE with good biocatalytic activity [2]. Furthermore, the introduction of Lac enables the measurement of DA through the cathodic current at a negative potential (around −0.150 V), which avoids the interference from other electroactive species whose oxidation potentials are very close to that of DA when the oxidation current of DA is measured at a positive potential. These phenomena demonstrate that the disposable biosensor shows an excellent electrocatalytic activity toward DA based on this non-oxidative electrochemical approach. On the one hand, the biosensor can retain the bioactivity of Lac to a large extent by immobilizing Lac with silica spheres on the MWCNTs/SPE. On the other hand, silica spheres and MWCNTs can both provide a large loading area for Lac owing to their high specific surface area. The above results also imply that the Lac immobilized on the surface of the Si/MWCNTs/SPE might undergo a conformational change that favors the active sites of the enzyme approaching the SPE. However, if cross-linking reagents were used to immobilize Lac, this might promote a high degree of reticulation of Lac that blocks the process [2].

The effect of pH on the electrochemical properties of the disposable biosensor

Since protons participate in the electrochemical reaction, the pH value of the supporting electrolyte is considered an important parameter affecting the electrochemical behavior of the biosensor [28]. The current responses to DA of the disposable biosensor in the pH range from 4.0 to 7.0 were evaluated. The results are shown in Figure 4, where the cathodic currents of DA at the Lac/Si/MWCNTs/SPE are expressed as the percentage of the maximum response obtained at the optimum pH. The optimum response is obtained at about pH 5.0, just below neutral pH, which is consistent with the reported work [7]. It is noteworthy that although soluble Lac has an optimum pH value of around 3.0-4.0 for retaining its bioactivity [33], immobilizing Lac with silica spheres on the MWCNTs/SPE shifts the effective pH range to 4.0-6.0. This advantage makes the Lac/Si/MWCNTs/SPE suitable for broad application fields. Therefore, a pH value of 5.0 was selected for the subsequent experiments.

The effect of the amount of MWCNTs on the electrochemical properties of the disposable biosensor

Another important parameter affecting the responses of the target analyte at the disposable biosensor is the loading amount of MWCNTs. SPEs doped with different mass proportions of carbon paste and MWCNTs were investigated in order to choose an optimum MWCNT loading. As shown in Figure 5, as the mass proportion of MWCNTs to carbon paste changes from 1:50 to 2:5, the cathodic current response (expressed as the percentage of the maximum response) of DA at the Lac/Si/MWCNTs/SPE increases, reaching its maximum at 3:10. Therefore, the proportion of 3:10 was chosen for the fabrication of the subsequent disposable Lac/Si/MWCNTs/SPEs.

Determination of DA using the disposable biosensor

The use of Lac enables DA determination with differential pulse voltammetry (DPV), which offers better resolution and a higher signal-to-noise ratio compared with CV [31].
As displayed in Figure 6, typical DPVs at the Lac/Si/MWCNTs/SPE with different concentrations of DA were obtained. It can be observed that the cathodic peak current recorded at around −0.177 V increases with the concentration of DA in solution, and the current is linear with the DA concentration from 1.3 to 85.5 μM (I (μA) = −0.069 C_DA (μM) − 2.091, R = 0.9908) (inset in Figure 6). The detection limit was calculated to be 0.42 μM (S/N = 3). Furthermore, the relative standard deviation (R.S.D.), sensitivity, and apparent Michaelis-Menten constant (K_m^app) of the disposable biosensor were evaluated (Table 1). Among them, the K_m^app value, which provides information on the Lac-substrate kinetics, is calculated according to the Michaelis-Menten equation [34]:

I_s = \frac{I_{\max}\, C}{K_m^{app} + C},

where I_s refers to the steady-state catalytic current, I_max is the maximum current measured under saturated conditions, and C is the concentration of DA. The K_m^app value was estimated to be 3.78 μM. This small K_m^app means that the Lac immobilized with silica spheres on the MWCNTs/SPE possesses very high enzymatic activity for the determination of DA. These results indicate that the disposable biosensor has a good analytical performance for DA.

The selectivity of the disposable biosensor toward DA determination was also investigated. AA was added to a solution containing 85.5 μM DA (dashed curves 1-3 in Figure 7). As can be seen from Figure 7, the introduction of AA into the DA solution does not lead to an obvious change in the cathodic peak current responses, indicating that DA can be selectively detected using the Lac/Si/MWCNTs/SPE without interference from AA. These results suggest that the disposable biosensor could be used for practical measurements of DA based on the non-oxidative mechanism.

Stability, reproducibility and repeatability of the disposable biosensor

One of the most critical issues in constructing a biosensor is preventing the enzyme immobilized on the electrode surface from leaking into the solution. In this study, the stability of the Lac/Si/MWCNTs/SPE was investigated by recording the cathodic peak current responses of the Lac/Si/MWCNTs/SPE at different sweep segments in 0.10 M PBS (pH 5.0) containing 60.0 μM DA. It was found that the current responses stayed at the same level after 20 sweep segments, indicating that Lac was immobilized stably on the SPE. The results further prove that the proposed immobilization method is effective and that it is not necessary to use any other cross-linking reagents. The possible reason may be the large loading area, good biocompatibility and stabilizing properties provided by the silica spheres and MWCNTs, attributable to their porous and three-dimensional architecture. The nanosized pores on the silica spheres and MWCNTs could act as small cages surrounding the Lac, consequently offering a protective chemical microenvironment similar to that near enzymes in biological cells. Furthermore, the interconnected pores and well-defined three-dimensional network of the proposed immobilization matrix can prevent Lac from leaching into the solution while allowing free diffusion of substrate and product molecules to/from the catalytic active sites. Therefore, even in the absence of cross-linking reagents, the developed biosensor can still show good stability during detection [35,36].
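To make the calibration and kinetic arithmetic reported above concrete, the following Python sketch reproduces the figures of merit from the calibration line I (μA) = −0.069 C_DA − 2.091 and fits the Michaelis-Menten equation. The blank noise level and the steady-state current data are hypothetical placeholders chosen only to illustrate the procedure; they are not values taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Calibration line reported in the text: I (uA) = -0.069 * C_DA (uM) - 2.091
slope, intercept = -0.069, -2.091

# Detection limit at S/N = 3: LOD = 3 * sigma_blank / |slope|.
# sigma_blank is a hypothetical blank standard deviation chosen so that
# the LOD matches the reported 0.42 uM; it is not given in the paper.
sigma_blank = 0.42 * abs(slope) / 3.0
lod = 3.0 * sigma_blank / abs(slope)
print(f"LOD = {lod:.2f} uM")  # 0.42 uM

# Michaelis-Menten kinetics: I_s = I_max * C / (Km_app + C)
def michaelis_menten(c, i_max, km_app):
    return i_max * c / (km_app + c)

# Hypothetical steady-state currents (uA) vs DA concentration (uM),
# generated here only to illustrate the fitting step.
rng = np.random.default_rng(0)
c = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
i_s = michaelis_menten(c, 10.0, 3.78) + rng.normal(0, 0.05, c.size)

popt, _ = curve_fit(michaelis_menten, c, i_s, p0=[5.0, 1.0])
print(f"I_max = {popt[0]:.2f} uA, Km_app = {popt[1]:.2f} uM")
```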
To verify the reproducibility of the disposable biosensor, five different Lac/Si/MWCNTs/SPEs, fabricated independently by the same steps, were chosen randomly from 50 stored SPEs. The R.S.D. of the cathodic peak current responses to 60.0 μM DA was 6.5%, meaning that the construction procedure was reliable and the modified SPEs had good reproducibility. In addition, the same Lac/Si/MWCNTs/SPE was used to detect DA five times successively. The resulting R.S.D. value was 4.7%, showing good repeatability. The storage stability of the disposable biosensor was also investigated. After storage at 4°C for 10 and 30 days, the current response of the Lac/Si/MWCNTs/SPE in the 60.0 μM DA solution retained 91.0% and 86.0% of the initial response, respectively. The good stability may be ascribed to the effective protection of the bioactivity of Lac by the consistent stability of the silica spheres and the biocompatible microenvironment provided by the silica spheres and MWCNTs.

Conclusions

A novel disposable biosensor has been successfully constructed on a MWCNTs/SPE by immobilizing Lac with silica spheres. Owing to the large specific surface area and excellent biocompatibility of MWCNTs and silica spheres, the biosensor effectively provides a suitable microenvironment for the immobilization of Lac and exhibits a good electrocatalytic performance for DA. In addition, based on a non-oxidative electrochemical mechanism, the biosensor enables the in situ determination of DA with good sensitivity, selectivity and reproducibility. In summary, the proposed approach to enzyme immobilization shows great potential for the construction of biosensors without using cross-linking reagents, and the constructed biosensor displays an excellent analytical performance for phenolic compounds in a rapid and cost-effective way.
Exclusive $\rho^0$ meson electroproduction from hydrogen at CLAS

The longitudinal and transverse components of the cross section for the $ep \to e^\prime p \rho^0$ reaction were measured in Hall B at Jefferson Laboratory using the CLAS detector. The data were taken with a 4.247 GeV electron beam and were analyzed in a range of $x_B$ from 0.2 to 0.6 and of $Q^2$ from 1.5 to 3.0 GeV$^2$. The data are compared to a Regge model based on effective hadronic degrees of freedom and to a calculation based on Generalized Parton Distributions. It is found that the transverse part of the cross section is well described by the former approach while the longitudinal part can be reproduced by the latter.

Understanding the precise nature of the confinement of quarks and gluons inside hadrons has been an ongoing problem since the advent, about 30 years ago, of the theory that governs their interactions, quantum chromodynamics (QCD). In particular, the transition between the high energy (small distance) domain, where quarks are quasi-free, and the low energy (large distance) regime, where they form bound states and are confined in hadrons, is still not well understood. The analysis of elementary processes, such as the exclusive electroproduction of a meson or a photon on the nucleon in the few GeV range, allows one to study this transition. In the case of exclusive meson electroproduction, the longitudinal and transverse polarizations of the (virtual) photon mediating the interaction provide two qualitatively different pieces of information about the nucleon structure. Longitudinal photons, whose transverse size is inversely proportional to their virtuality, truly act as a microscope. At sufficiently large Q², small distances are probed, and the asymptotic freedom of QCD justifies the understanding of the process in terms of partonic degrees of freedom and the use of perturbative QCD (pQCD) techniques. In particular, it has recently been shown [1,2] that the non-perturbative information can be factorized in reactions such as exclusive vector meson electroproduction. Here the process can be described in terms of perturbative quark or gluon exchanges whose momentum, flavor, and spin distributions inside the nucleon are parametrized in terms of the recently introduced Generalized Parton Distributions (GPD's) [3,4,5]. This is the so-called "handbag" diagram mechanism, which is depicted in Fig. 1 (right diagram). At higher γ*p center-of-mass energies, W, than considered in this letter, 2-gluon exchange processes also intervene [2,6]. At low virtuality, Q², of the photon, hadronic degrees of freedom are more relevant and, above the nucleon resonance region, the process is adequately described in terms of meson exchanges (Fig. 1, left diagram).

For transverse photons, however, this description in terms of quarks and gluons is not valid. A factorization into a hard and soft part does not hold [1,2] and, even at large Q², there is no dominance of a "handbag" mechanism as in Fig. 1.
"Soft" (non-perturbative) and "hard" (perturbative) physics compete over a wider range of Q 2 , and in practice it is necessary to take into account non-perturbative effects using hadron degrees of freedom.In order to access the fundamental partonic information when studying meson electroproduction processes, it is therefore highly desirable to isolate the longitudinal part of the cross section, which lends itself, at least at sufficiently high Q 2 , to pQCD techniques and interpretation.In this approach, however, several questions remain to be answered.What is the lowest Q 2 where a perturbative treatment is valid?What corrections need to be applied to extend its validity to lower The mechanisms for ρ 0 electroproduction at intermediate energies: at low Q 2 (left diagram) through the exchange of mesons, and at high Q 2 (right diagram) through the quark exchange "handbag" mechanism (valid for longitudinal photons) where H and E are the unpolarized GPD's. The aim of this letter is to address these questions using the recent measurement of the longitudinal and transverse cross sections of the ep → e ′ pρ 0 reaction, carried out at Jefferson Laboratory using the CEBAF Large Acceptance Spectrometer (CLAS) [7] in Hall B. This elementary process is one of the exclusive reactions on the nucleon which has the highest cross section, and for which the extraction of the longitudinal and transverse parts of the cross section can be accomplished using the ρ 0 decay angular distribution.On the theoretical side, formalisms and numerical estimates for both hadronic and partonic descriptions of the reaction have been developed, which can be compared to the transverse and longitudinal components of the cross section, respectively. In the following, we will present the analysis results of the ep → e ′ pρ 0 reaction.Data were taken with an electron beam energy of 4.247 GeV impinging on an unpolarized liquid-hydrogen target.The integrated luminosity of this data set was about 1.5 fb −1 .The kinematic domain of the selected sample corresponds to Q 2 from 1.5 GeV 2 to 3.0 GeV 2 .We analyzed data for W greater than 1.75 GeV, which corresponds to a range of x B from 0.21 to 0.62.Our final data sample included about 2 × 10 4 e ′ pπ + π − events. The ρ 0 meson decay to π + π − was used to identify the reaction of interest.We identified the ep → e ′ pπ + π − reaction using the missing mass technique by detecting the scattered electron, the recoil proton, and the positive pion.The electron was identified as a negative track with reconstructed energy deposition in the calorimeter which was consistent with the momenta determined from magnetic analysis, in combination with a signal in the Cerenkov counter.The proton and pion were identified as positive tracks, whose combination of flight times and momenta corresponded to their mass.Figure 2 (left plot) shows a typical missing mass distribution for ep → e ′ pπ + X events.Events were selected by the missing mass cut -0.03 < M 2 X < 0.06 GeV 2 , consistent with a missing π − .Fig. 
Fig. 2 (center) shows the resulting π⁺π⁻ invariant mass spectrum. The ρ⁰ peak is clearly visible, sitting on a large non-resonant π⁺π⁻ background.

FIG. 2: Left plot: an example of a squared missing mass M²_X (ep → e′pπ⁺X) spectrum (for a scattered electron momentum between 1.9 and 2.2 GeV). Points with error bars show the experimental data and the lines represent the results of simulations for the channels e′pπ⁺π⁻ (dashed line), e′pπ⁺π⁻π⁰ (dotted line) and the sum of the two (solid line). The vertical dashed line is located at the missing mass squared of a pion. Central and right plots: an example of the π⁺π⁻ and pπ⁺ invariant masses, respectively (for the interval 1.63 < Q² < 1.76 GeV² and 0.28 < x_B < 0.35). Points with error bars show the experimental data and the lines correspond to the results of fits for the channels ep → e′pρ⁰ (dashed line), ep → e′Δ⁺⁺π⁻ (dash-dotted line), non-resonant ep → e′pπ⁺π⁻ (dotted line) and the sum of the three processes (solid line).

The unpolarized ep → e′pπ⁺π⁻ reaction is fully defined by seven independent kinematical variables, which we have chosen as: Q² and x_B, which define the virtual photon kinematics; t, the invariant squared momentum transfer between the virtual photon and the final pion pair (i.e. the ρ⁰ meson when this particle is produced); M_{π⁺π⁻}, the invariant mass of the π⁺π⁻ system; θ_hel and φ_hel, the π⁺ decay angles in the π⁺π⁻ rest frame; and Φ, the azimuthal angle between the hadronic and leptonic planes. The CLAS acceptance and efficiency were calculated for each of these 7-dimensional bins using a GEANT-based simulation of several hundred million events. The event distributions were generated according to Ref. [8], which includes the three main contributions above the resonance region to the e′pπ⁺π⁻ final state: diffractive ep → e′pρ⁰, t-channel ep → e′Δ⁺⁺π⁻, and non-resonant (phase space) ep → e′pπ⁺π⁻. Each of these contributions to the event generator was matched to the world's data on differential and total cross sections, and then extrapolated to our kinematical domain. We were then able to extract a total cross section for the ep → e′pπ⁺π⁻ channel in good agreement with the world's data where the kinematics overlapped. The event generator also includes radiative effects following the Mo and Tsai prescription [9], so that radiative corrections could be applied in each (Q², x_B) bin.

The main difficulty in determining the ρ⁰ yield stems from its large width (Γ_ρ⁰ ∼ 150 MeV), which does not allow for a unique determination of the separate contributions of resonant ρ⁰ production and non-resonant π⁺π⁻ pairs. We simultaneously fitted the two 3-fold differential cross sections d³σ/dQ²dx_B dM_{π⁺π⁻} and d³σ/dQ²dx_B dM_{pπ⁺} to determine the weights of the three channels mentioned earlier leading to the e′pπ⁺π⁻ final state (see Fig. 2, central and right plots). The mass spectra of the ρ⁰ and Δ⁺⁺ are generated according to standard Breit-Wigner distributions and the non-resonant pπ⁺π⁻ final state according to phase space. This background estimation procedure, along with the CLAS acceptance modeling, is one of the dominant sources of systematic uncertainty which, in total, ranges from 10% to 25%. More sophisticated shapes for the ρ⁰ mass spectrum were also investigated but led to consistent numbers of ρ⁰'s within these error bars.
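As a simplified, one-dimensional stand-in for the simultaneous two-spectrum fit described above, the Python sketch below fits a toy π⁺π⁻ invariant mass spectrum with a Breit-Wigner ρ⁰ peak plus a phase-space-like background. The background parametrization and all data points are synthetic assumptions made only to illustrate the yield-extraction idea; the actual analysis fits both M_{π⁺π⁻} and M_{pπ⁺} distributions with three physics channels.

```python
import numpy as np
from scipy.optimize import curve_fit

# Breit-Wigner line shape for the rho0 (M ~ 0.770 GeV, Gamma ~ 0.150 GeV)
def breit_wigner(m, m0, gamma):
    return m0 * gamma / ((m**2 - m0**2) ** 2 + (m0 * gamma) ** 2)

# Toy phase-space-like background (hypothetical parametrization):
# threshold factor above 2*m_pi times a falling exponential
def background(m, a, b):
    return a * np.sqrt(np.clip(m - 2 * 0.1396, 0, None)) * np.exp(-b * m)

def model(m, n_rho, m0, gamma, a, b):
    return n_rho * breit_wigner(m, m0, gamma) + background(m, a, b)

# Hypothetical binned M(pi+pi-) spectrum standing in for the data
rng = np.random.default_rng(0)
m = np.linspace(0.3, 1.4, 60)
counts = model(m, 50.0, 0.770, 0.150, 300.0, 2.0) + rng.normal(0, 2.0, m.size)

popt, _ = curve_fit(model, m, counts, p0=[30.0, 0.77, 0.15, 200.0, 1.5])
print(f"fitted rho0 mass = {popt[1]:.3f} GeV, width = {popt[2]:.3f} GeV")
```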
The final step of the analysis consisted of separating the longitudinal and transverse parts of the ep → e′pρ⁰ cross section. The determination of these two contributions was accomplished under the assumption of s-channel helicity conservation (SCHC) [10]. This hypothesis states, in simple terms, that the helicity of the virtual photon is directly transferred to the vector meson. The SCHC hypothesis originates from the vector meson dominance model, which identifies vector meson electromagnetic production as an elastic process without spin transfer.

The validity of the SCHC hypothesis, which is only applicable at small momentum transfer t, can be tested experimentally through the analysis of the azimuthal angular distribution. We found that the r⁰⁴₁₋₁ ρ⁰ decay matrix element [11], which can be extracted from the φ_hel dependence, was compatible with zero at the 1.7 sigma level. We also found that the σ_TT and σ_TL cross sections, which can be extracted from the Φ dependence, were, respectively, 10.6% ± 11.8% and 0.4% ± 5.4% of the total cross section. They are therefore consistent with zero, as they should be if SCHC is valid, and, in any case, do not represent potentially large violations of SCHC. Let us also note that all previous experiments on electromagnetic production of ρ⁰ on the nucleon are consistent with the dominance of s-channel helicity conserving amplitudes (the helicity-flip amplitudes which have been reported [12,13,14,15] never exceeded 10-20% of the helicity non-flip amplitudes). We can therefore safely rely on SCHC for our analysis.

The decay angular distribution of the π⁺ in the ρ⁰ rest frame can be written as [11]:

W(\cos\theta_{hel}) = \frac{3}{4}\left[(1 - r^{04}_{00}) + (3r^{04}_{00} - 1)\cos^2\theta_{hel}\right], (1)

where r⁰⁴₀₀ represents the degree of longitudinal polarization of the ρ meson. Under the assumption of SCHC, the ratio of longitudinal to transverse cross sections is:

R_\rho = \frac{\sigma_L}{\sigma_T} = \frac{1}{\epsilon}\,\frac{r^{04}_{00}}{1 - r^{04}_{00}}, (2)

where ε is the virtual photon transverse polarization. r⁰⁴₀₀ was extracted from the fit of the background-subtracted cos θ_hel distributions following Eq. 1, as illustrated in the insert in Fig. 3, and was used in Eq. 2 to determine R_ρ. Due to limited statistics in the CLAS data, this procedure could be performed only for the two Q² points which are shown in Fig. 3, where our points are found to be compatible with the existing world's data. We then fitted the Q² dependence of R_ρ including, in order to take into account a potential W dependence of the ratio R, only the world's data in the W domain close to ours (W ≈ 2.1 GeV) [12,16]. The following parametrization, whose power form is motivated by the pQCD prediction that σ_T is power suppressed with respect to σ_L, was found:

R_\rho = (0.75 \pm 0.08) \times (Q^2)^{1.09 \pm 0.14}. (3)

It is customary to define the reduced cross section for ρ meson production as the electroproduction cross section divided by the flux Γ_V of virtual photons. In this notation, and in Fig. 4, the longitudinal and transverse cross sections σ_T and σ_L are integrated over t, Φ, θ_hel, and φ_hel. The t dependence of σ_T + εσ_L can be parametrized by e^{−b|t−t_min|} (−t_min < −t < 1 GeV²), where −t_min is the smallest value of momentum transfer for a given kinematic bin. We measured the exponential slope b to range from 1.19 to 1.74 GeV⁻² for x_B between 0.31 and 0.52.
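The sketch below illustrates the longitudinal/transverse separation just described: it fits Eq. (1) to a toy background-subtracted cos θ_hel distribution, converts the extracted r⁰⁴₀₀ into R via Eq. (2), and evaluates the central values of the parametrization (3). The yields, the noise level, and the assumed virtual photon polarization ε = 0.7 are placeholders, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Eq. (1): decay angular distribution of the pi+ in the rho0 rest frame
def w_dist(cos_theta, r00, norm_):
    return norm_ * 0.75 * ((1.0 - r00) + (3.0 * r00 - 1.0) * cos_theta**2)

# Hypothetical background-subtracted cos(theta_hel) yields (toy data)
cos_theta = np.linspace(-0.9, 0.9, 10)
yields = w_dist(cos_theta, 0.6, 100.0) + rng.normal(0, 2.0, 10)

(r00, _norm), _ = curve_fit(w_dist, cos_theta, yields, p0=[0.5, 80.0])

epsilon = 0.7  # virtual photon transverse polarization (assumed value)
R = r00 / (epsilon * (1.0 - r00))  # Eq. (2), under SCHC
print(f"r00 = {r00:.3f} -> R = sigma_L/sigma_T = {R:.2f}")

# Eq. (3): fitted Q^2 dependence of R_rho (central values only)
q2 = 2.0  # GeV^2
print(f"R_rho(Q^2 = {q2} GeV^2) = {0.75 * q2**1.09:.2f}")
```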
The longitudinal and transverse cross sections are plotted in Fig. 4 as a function of Q² for four bins centered at x_B values of 0.31, 0.38, 0.48, and 0.52. These values correspond to W values of 2.2, 2.0, 1.9, and 1.85 GeV, respectively. The data are compared to two theoretical approaches. The first one is based on hadronic degrees of freedom with meson Regge trajectory exchanges in the t-channel (as illustrated in Fig. 1, left graph). This approach has been successful in describing, with very few free parameters, essentially all of the available observables for a series of forward exclusive reactions in photo- and electroproduction of pseudoscalar mesons (π^{0,±}, K⁺ [24], η, η′ [25]) above the resonance region. For the ρ⁰, ω, φ vector mesons, as well as for Compton scattering, such an approach has recently been developed in Refs. [21,22,26]. In the case of ρ⁰ electroproduction, the contributing meson trajectories are the σ, f₂, and Pomeron, the latter being negligible in the W region investigated in this experiment. This Regge model was normalized by adjusting the σ and f₂ meson-nucleon couplings to reproduce existing photoproduction data (see, for instance, Refs. [27]). There is little freedom in the choice of parameters when one uses data from all three ρ⁰, ω, and φ channels, which together constrain all photoproduction parameters. The only remaining free parameters for the electroproduction case are the squared mass scales of the meson monopole form factors at the electromagnetic vertices for the diagrams of Fig. 1 (left plot). They have been determined from the Q² dependence of the world's data, in particular from the Cornell [16] and HERMES [28] experiments, to be approximately 0.5 GeV², in accordance with known meson form factor mass scales.

As shown in Fig. 4, this Regge model provides a fair description of the transverse and longitudinal cross sections. There is some discrepancy at large values of x_B, but at those values some s-channel nucleonic resonances decaying into ρ⁰p may contribute, a process which is not taken into account in this Regge t-channel approach and which might explain the missing strength in this particular kinematical domain. The calculation was also done for the Cornell [16] and HERMES [28] data, where general agreement is found as well (the longitudinal cross section is also overestimated as one goes to smaller x_B, as in our x_B = 0.31 bin).

We now turn to the handbag diagram approach (Fig. 1, right plot), which is based on the QCD factorization between a "hard" process (the interaction between a quark of the nucleon and the virtual photon, along with a one-gluon exchange for the formation of the final meson) and a "soft" process (the parametrization of the partonic structure of the nucleon in terms of GPD's). As mentioned in the introduction, this approach is only valid at sufficiently large Q², when the longitudinal cross section dominates the QCD expansion in powers of 1/Q². Unfortunately, the value of Q² at which the "handbag" mechanism becomes valid is unknown and, especially for meson electroproduction, it must be determined experimentally.

In the case of ρ⁰ production, only the unpolarized GPD's H and E contribute to the amplitude of the reaction.
In the calculation shown in Fig. 4, we neglect the contribution of the GPD E because it is proportional to the 4-momentum transfer between the incoming virtual photon and the outgoing meson, and our data cover small momentum transfers. For the GPD H we use the parametrization of Refs. [6,23]. The other ingredient entering the (leading order) calculation of the handbag diagram is the treatment of the strong coupling constant α_s between the quarks and the gluon. It has been "frozen" to a value of 0.56, as determined by QCD sum rules [29]. The freezing of the strong coupling constant α_s is an effective way to average out non-perturbative effects at low Q² and is supported by jet-shape analysis of the infrared coupling [30].

As mentioned earlier, the handbag diagram calculation can only be compared with the longitudinal part of the cross section. Figure 4 shows a good agreement between the calculation and the data at the low x_B values. As for the Regge model discussed above, the two highest x_B bins might contain some additional nucleonic resonance "contamination", which is not included in the "handbag" approach. Variations of the parameters entering the GPD's over reasonable ranges were studied, and the results were found to be stable at the 50% level. This provides confidence in the stability, reliability, and validity of the calculation based on the prescription of a "frozen" α_s. Let us also note that this calculation reproduces reasonably well the HERMES data [28], which were taken at neighboring kinematics.

A signature of the handbag mechanism is that, independent of the particular GPD parametrization adopted, the (reduced) cross sections should follow a 1/Q⁶ dependence at fixed t and x_B. In this analysis, due to the lack of statistics, σ_L is integrated over t, which means that it depends on t_min, this latter variable changing as a function of Q². This 1/Q⁶ scaling behavior at fixed t and x_B can therefore not be directly observed in our data, as it is modified by the (trivial) kinematical Q² dependence of t_min. Nevertheless, the agreement between the data and the GPD calculation, which also contains this trivial t_min dependence, should be interpreted as a confirmation of the leading order prediction based on the "handbag" diagram.

In conclusion, we have presented here a first exploration of exclusive vector meson electroproduction on the nucleon in a region of Q² between 1.5 and 3.0 GeV² and x_B between 0.2 and 0.6, a kinematical domain barely explored before. The Regge model, based on "economical" hadronic degrees of freedom, is able to describe the transverse cross section data, along with the other existing vector meson photo- and electroproduction data. Furthermore, the more fundamental "handbag" approach, with a standard parametrization of the GPD H and an extrapolation to low Q² through an effective freezing of α_s, provides a fair description of the longitudinal part of the cross section. It therefore seems possible to understand the longitudinal part of the ρ meson production cross section in a pQCD framework, which potentially gives access to GPD's. The transverse cross section, on the other hand, for which no factorization between soft and hard physics exists, can be described in terms of meson exchanges. These tentative conclusions need, of course, to be confirmed by a more extensive and thorough exploration of the (x_B, Q²) phase space, which is currently under way with a much larger data set [31].
We would like to acknowledge the outstanding efforts of the staff of the Accelerator and Physics Divisions at Jefferson Lab that made this experiment possible. This work was supported in part by the Istituto Nazionale di Fisica Nucleare, the French Centre National de la Recherche Scientifique, the French Commissariat à l'Energie Atomique, the U.S. Department of Energy, the National Science Foundation, an Emmy Noether grant from the Deutsche Forschungsgemeinschaft, and the Korean Science and Engineering Foundation. The Southeastern Universities Research Association (SURA) operates the Thomas Jefferson National Accelerator Facility for the United States Department of Energy under contract DE-AC05-84ER40150.

FIG. 4: Cross sections σ_L (left) and σ_T (right) for ep → e′pρ⁰ as a function of Q², as measured in this experiment. The dotted line represents the Regge model of Refs. [21,22] while the solid line describes the GPD model of Refs. [6,23]. The systematic error is indicated by the shaded zones at the bottom of the plots.
Communication by Means of Thermal Noise: Towards Networks with Extremely Low Power Consumption

In this paper, the paradigm of thermal noise communication (TherCom) is put forward for future wired/wireless networks with extremely low power consumption. Taking backscatter communication (BackCom) and reconfigurable intelligent surface (RIS)-based radio frequency chain-free transmitters one step further, a thermal noise-driven transmitter might enable zero-signal-power transmission by simply indexing resistors or other noise sources according to information bits. This preliminary paper aims to shed light on the theoretical foundations, transceiver designs, and error performance derivations as well as optimizations of two emerging TherCom solutions: Kirchhoff-law-Johnson-noise (KLJN) secure bit exchange and wireless thermal noise modulation (TherMod) schemes. Our theoretical and computer simulation findings reveal that noise variance detection, supported by sample variance estimation with carefully optimized decision thresholds, is a reliable way of extracting the embedded information from noise modulated signals, even with a limited number of noise samples.

... a couple of microwatts [3]. However, even ambient BackCom systems rely on existing signals, such as TV or Wi-Fi signals, to transmit information. As an alternative to BackCom, communication by means of modulated Johnson (thermal) noise has recently been put forward in [7] to transmit information with extremely low power consumption and without requiring pre-existing RF signals. In this context, noise-based communication might further reduce the transmitter complexity and power consumption of RIS-based and BackCom systems.

Despite still not being fully perceived by our community, the roots of the concept of communication by means of modulated thermal noise date back to the early 2000s. In this paper, we refer to this general paradigm as thermal noise communication (TherCom). In the seminal work of Kish [8], the concept of zero-signal-power (stealth) communication was put forward by representing an information bit by the choice of two different resistance levels (impedance bandwidths) and the resulting two thermal noise spectra. In the same study, a wireless transmission system with two parabolic antennas is also envisioned to realize the concept of reflection modulation (by indexing a resistor to embed information). Taking stealth communication one step further, the Kirchhoff-law-Johnson-noise (KLJN) secure key exchange scheme was proposed by the same author in 2006 to achieve unconditionally secure communication by utilizing the pure laws of physics: Kirchhoff's law and the thermal noises of two pairs of resistors [9], [10]. In simple terms, two communicating partners, Alice and Bob, first select their resistors according to information bits and then connect them to a wire channel in each transmission interval, where bit 0 and bit 1 are represented by the selection of low- and high-valued resistors, respectively. A secure bit exchange takes place when the bit values at the two ends are different, which results in an intermediate mean-square noise voltage level on the line. Despite the fact that this intermediate level can be detected by an eavesdropper (Eve), the specific contributions of Alice and Bob cannot be determined (Alice→0 and Bob→1, or vice versa), which ensures unconditional security at the level of quantum secrecy.
From a wider perspective, the KLJN scheme has conceptual similarities with the well-known index modulation (IM) concept [11], [12], which performs indexing of certain system entities to embed information. Specifically, a KLJN communicator performs a sort of IM over the available two resistors and, in return, over two noise voltage power spectral densities. In [13], the performance of KLJN communicators is evaluated through practical experiments over a model line at several distances and data rates, and a bit error rate (BER) of 0.02% is reported. The effect of wire resistance on the noise voltage and current is investigated in [14], where the information leak is reported to be not significant. Additionally, it has been reported in [15] that Johnson-like noise (either generated naturally or externally) must be used for secure key exchange in KLJN systems, while a generalized KLJN scheme is proposed in [16] using arbitrary (four different) resistors. Nevertheless, in the recent study of [17], this generalized scheme is shown to be less secure than the original KLJN scheme under realistic conditions. The first attempt to quantify the bit error probability (BEP) of the KLJN scheme was made in [18], where an exponentially decaying error probability with respect to the bit duration is reported. In the follow-up study of [19], the combination of voltage and current measurements is introduced for further BEP reduction. However, in both studies, only the performance of securely exchanged bits is evaluated, and approximations based on Rice's formula for the threshold crossing frequency are used. A more systematic approach is used in [20] to reveal the effect of the distances between noise variances on the BEP performance using statistical hypothesis testing. However, to the best of our knowledge, a general theoretical framework on the receiver design and BEP optimization of the KLJN scheme has not been presented in the literature.

Against this background, the major contributions of this article are summarized as follows:

• By revisiting the KLJN scheme from a communication engineering perspective, we introduce a new framework for the calculation of its BEP for general system parameters. We also introduce an effective threshold-based detection method that considers sample variance estimation from the taken noise samples.

• We propose two novel KLJN detectors by exploiting joint voltage and current measurements to further reduce the BEP. While the first detector raises an error flag when the voltage and current bit interpretations are different, the second one adaptively considers either voltage or current measurements according to the user's information bits.

• By taking the KLJN scheme one step further, we propose the scheme of wireless thermal noise modulation (TherMod), which performs a sort of IM over the available two resistors at the transmitter to convey information. We formulate its generic signal model and evaluate the BEP of its proposed detector.

• Finally, extensive numerical and computer simulation results are presented to assess the potential of KLJN and TherMod systems under a diverse set of parameters. We reveal that the TherCom paradigm might be a remedy for future wired/wireless networks with extremely low or almost zero power consumption.

This article is organized as follows. Sections II and III are devoted to the theoretical foundations, receiver designs, and BEP optimizations of the KLJN and TherMod schemes, respectively.
We provide our numerical results in Section IV and conclude the paper in Section V.

II. WIRED INFORMATION TRANSFER WITH KLJN SECURE BIT EXCHANGE

In this section, we first describe the fundamental aspects of the KLJN secure bit exchange scheme from a communication engineering perspective and then introduce a new framework for the calculation of its BEP under general system parameters. Furthermore, we propose two novel detectors to further improve the BEP performance of the KLJN scheme.

A. Fundamentals of the KLJN Secure Bit Exchange Scheme

We begin our discussion by reviewing the KLJN secure bit exchange scheme given in Fig. 1, which finds its roots in the early works of Kish from 2005-2006 [9], [10]. This scheme is based on the Johnson-Nyquist noise (thermal noise, Johnson noise, or Nyquist noise) voltages generated by two terminals, Alice and Bob, which are connected by a wire channel. Here, for each bit duration (bit exchange period) of T_b seconds, according to their information bits, Alice and Bob each select one of their resistors with either R_L or R_H ohms. The resistors selected by Alice and Bob are represented by R_A and R_B, respectively, that is, R_A, R_B ∈ {R_L, R_H}. Namely, bit 0 and bit 1 are represented by the resistors with low resistance (R_L) and high resistance (R_H), respectively. Here, we can consider the relationship R_H = αR_L, where typical α values can range from 5 to 50. From this perspective, the KLJN scheme can be considered as an instance of IM, where, according to the incoming information bits, the index of a resistor is selected at both sides of the link simultaneously. Alternatively, the bit loading process of the KLJN scheme can be considered as the modulation of the noise fluctuations over the channel, where one performs a sort of IM over the noise power spectral density level, that is, over the mean-square noise voltage. Another important aspect of the KLJN communicator is that, as in IM schemes, indexing does not consume power, and using background noises only, zero-signal-power communication might be possible without ambient signal sources.

The above resistance selection process is repeated every T_b seconds, during which Alice and Bob perform either voltage or current measurements, or both, simply taking samples from the randomly fluctuating voltage/current waveforms over the wire at discrete-time instances for further processing. The mean-square value of the thermal noise voltage v_R(t), which is in fact a Gaussian random process appearing across the terminals of the selected resistor, measured in a bandwidth of Δf Hz, is given by [21]

\langle v_R^2(t) \rangle = 4kTR\,\Delta f, (1)

where k is the Boltzmann constant, which is 1.38 × 10⁻²³ joules per degree Kelvin, T is the temperature in degrees Kelvin, and R ∈ {R_L, R_H} is the selected resistance in ohms. Accordingly, both Alice and Bob rely on voltage (v(t)) and/or current (i(t)) measurements on the wire line to decode the bits of their partner. Here, considering Kirchhoff's law, the power density spectrum of the resulting noise voltage on the line with the two parallel resistors will be

S_v(f) = 4kT\,\frac{R_A R_B}{R_A + R_B}. (2)

Similarly, using Ohm's law, the power density spectrum of the noise current flowing through the loop is given by

S_i(f) = \frac{4kT}{R_A + R_B}. (3)

For the sake of simplicity, we first put our emphasis on voltage measurements, while a generalization will be provided in Subsection II.C.
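As a quick numerical illustration of Eqs. (1)-(3), the Python sketch below evaluates the line noise voltage variance for the four possible resistor pairings. The temperature, bandwidth, and R_L values are arbitrary assumptions chosen only to expose the 1 : 2α/(1+α) : α ratio structure discussed next.

```python
import itertools

K_BOLTZMANN = 1.38e-23  # J/K
T = 300.0               # temperature in Kelvin (assumed)
DELTA_F = 1e3           # measurement bandwidth in Hz (assumed)
R_L = 1e3               # low resistance in ohms (assumed)
ALPHA = 10.0
R_H = ALPHA * R_L

def line_voltage_variance(r_a, r_b):
    # Eq. (2) integrated over the bandwidth: 4kT * (R_A || R_B) * delta_f
    return 4 * K_BOLTZMANN * T * (r_a * r_b / (r_a + r_b)) * DELTA_F

variances = {}
for bits, (r_a, r_b) in zip(
    ["00", "01", "10", "11"],
    itertools.product([R_L, R_H], repeat=2),
):
    variances[bits] = line_voltage_variance(r_a, r_b)

ref = variances["00"]
for bits, var in variances.items():
    print(f"{bits}: sigma^2 = {var:.3e} V^2, ratio = {var / ref:.4f}")
# ratios: 1, 1.8182, 1.8182, 10 for alpha = 10
```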
We also note that external noise generators with a relatively high effective temperature (beyond a trillion Kelvin) and limited bandwidth, producing much stronger noise signals with the same linear scaling between the resistance and the noise voltage spectrum, can be used to further boost the system against background noise effects and attenuation. However, the original thermal noise generation concept can be utilized to realize networks with extremely low power consumption as well as to achieve stealth communication. In other words, the use of background noise signals might be a powerful tool to hide information signals. In both cases, however, the bandwidth of the wire limits the data rate for reliable communication. Finally, it is worth noting that, due to the thermal equilibrium temperature T at both ends of the link, the mean power flow between the parallel resistors is zero, and voltage and current samples taken at any time instant are statistically independent [9].

Against this background, for the following four cases of selected Alice/Bob bits, namely 00, 01, 10, and 11, where the first and second bits stand for the bits selected by Alice and Bob, respectively, the samples taken from the voltage waveform on the line will be Gaussian distributed with the following variance values:

\sigma^2_{00} = 2kTR_L\,\Delta f, \quad \sigma^2_{01} = \sigma^2_{10} = 4kT\,\frac{R_L R_H}{R_L + R_H}\,\Delta f, \quad \sigma^2_{11} = 2kTR_H\,\Delta f. (4)

An interesting interpretation of (4) is given as follows [22]. As illustrated in Fig. 1, if both Alice and Bob select the large resistance R_H, the fluctuations on the line will be high. If both select the small resistance R_L, they will be small. And if one user selects the large one while the other selects the small one, the noise variance takes an intermediate value. According to (4), the relative variance ratios are obtained as 1 : 2α/(1+α) : α; that is, it can easily be shown that for σ²_00 = σ², we obtain σ²_01 = σ²_10 = (2α/(1+α))σ² and σ²_11 = ασ². For instance, for α = 10, which is the value considered in our numerical results, we obtain the variance ratios 1 : 1.8182 : 10, which results in a non-uniform distribution of the observed noise variances. Particularly, as we will discuss later, these noise variance ratios play an important role in distinguishing the selected Alice/Bob bit combinations from a limited number of measurements (noise samples) and directly affect the overall BEP.

In order to provide unconditional security, the KLJN scheme relies on basic laws of physics. Specifically, the KLJN scheme provides a bold answer to the following question: Can Eve find out the selected resistance values at both ends of the link from her own voltage/current measurements? The answer to this question is even more interesting. The eavesdropper can estimate the noise variance on the link from her limited number of samples, while the cases of 01 and 10 impose a serious challenge. Specifically, for the cases of 00 and 11, which correspond to the lowest and highest noise fluctuations on the link, an eavesdropper can easily identify the Alice/Bob bits from her measurements, and these bits will undoubtedly be identical. This case is regarded as non-secure bit exchange. Nevertheless, since the cases of 01 and 10 produce equivalent noise variances under ideal conditions, that is, with zero wire resistance, the specific locations of the 0 and 1 bits cannot be determined from measurements taken in-between the terminals. This provides a sort of very powerful and simple encryption, and ensures absolute security [23].
Furthermore, the KLJN scheme has been shown to be highly resistant to many attacks, including man-in-the-middle, current injection, wire resistance, and cable capacitance attacks. In this sense, from a security performance perspective, one would expect a very high secrecy capacity from the KLJN scheme. On the other hand, Alice and Bob can determine their partner's selected bit in all four cases by exploiting their voltage and/or current samples, thanks to the knowledge of their own bit. In other words, a user's own bit behaves as a decryption mechanism, revealing information about the contribution of the selected resistor at the other end of the link. From this perspective, the KLJN scheme also performs a sort of telecloning of a user's bits, say Alice's, allowing Bob to perform detection without physically accessing Alice's bits or transmitted waveforms in his own decision device, and vice versa.

We conclude this subsection by summarizing the major features of the KLJN secure bit exchange scheme: i) it allows the simultaneous exchange of Alice's and Bob's bits in a single bit period; ii) it is unconditionally secure when the selected bits of Alice and Bob are different, that is, for the cases of 01 and 10, which occur 50% of the time for uniform bit probabilities, and as a result, it can be used to exchange the secure keys of two partners under certain protocols; iii) the use of background thermal noise allows stealth and extremely low power consuming communication, which might be critical for future wired/wireless systems. A more detailed discussion and a complete historical perspective on the KLJN scheme can be found in [23].

B. A New Theoretical Framework on the BEP of KLJN

The KLJN scheme described in the previous subsection has random bit errors due to the limited number of samples taken during the specified bit duration. Particularly, Alice and Bob are subject to certain bit errors depending on the selected bit of their partner. As a result, BEP calculation, as well as its optimization, is not a straightforward task. Here, we provide a unified view of the BEP behavior of the KLJN scheme.

In Table I, we provide all possible error events for Alice and Bob considering the four possible resistance selection scenarios: low-low, low-high, high-low, and high-high, which stand for the bit sequences 00, 01, 10, and 11, respectively. Here, the probabilities of the corresponding error events for Alice and Bob are denoted by P_A(·) and P_B(·), respectively. We consider the probabilities of all error events, not only those involving secure bit exchange, for the following two reasons. First, under stealth and ultra-low power communication, we are also interested in non-secure bit exchange; that is, the cases of 00 and 11 might be exploited as well. From an encryption perspective, under general conditions, it would still be difficult to decrypt messages with 50% compromised bits for long enough keys, such as the 256-bit keys that are widely used in standards and protocols. Second, since a non-secure combination (00 or 11) can be mistaken for a secure combination (01 or 10), and vice versa, focusing only on 00/11 → 01/10 error events, as in [18], might be misleading in terms of the overall BEP. Nevertheless, by ignoring 01/10 → 00/11 error events, one can easily obtain a valid BEP expression for the secure bit exchange protocol using our framework as well.
In light of Table I, and also considering the bit error symmetries for Alice and Bob, the BEP of the KLJN scheme can be simply expressed as

P_b = \frac{1}{4}\left[ P_A(00 \to 01) + P_A(01 \to 00) + P_A(10 \to 11) + P_A(11 \to 10) \right], (5)

where it is assumed that 0s and 1s are generated uniformly, that is, each selected bit combination in Table I has a probability of 1/4. (As will be shown next, we have P_A(00 → 01) = P_B(00 → 10), P_A(11 → 10) = P_B(11 → 01), P_A(01 → 00) = P_B(10 → 00), and P_A(10 → 11) = P_B(01 → 11); as a result, Alice and Bob will have the same overall BEP. For notational simplicity, condition terms are not shown in the probability expressions of (5).) However, the BEP values for bit 0 and bit 1 might differ in the general case. In what follows, we will investigate these four error event cases using noise variance estimation.

We assume that during each bit duration, both Alice and Bob take samples from the thermal noise voltage on the wire to determine its variance at both sides of the link. Denoting the kth independent noise sample by x_k, which follows a Gaussian distribution with zero mean and variance σ²_i, where σ²_i ∈ {σ²_00, σ²_01, σ²_11}, that is, x_k ∼ N(0, σ²_i), the noise variance can be estimated as

\hat{\sigma}^2 = \frac{1}{N} \sum_{k=1}^{N} x_k^2. (6)

Here, N stands for the number of noise samples per bit. Assuming the knowledge of the zero mean of the thermal noise, it can easily be shown that the variance estimator of (6) is unbiased, that is, E[\hat{\sigma}^2] = \sigma^2_i, where E[·] stands for the expectation. Ideally, \hat{\sigma}^2 would follow a (scaled) chi-square distribution; however, for large enough N, due to the central limit theorem (CLT), we obtain \hat{\sigma}^2 ∼ N(σ²_i, 2σ⁴_i/N). We note that even for N = 50, \hat{\sigma}^2 approximately fits a Gaussian distribution, and increasing the number of noise samples might improve the quality of the variance estimate significantly.

The band-limited nature of the noise, which can be caused either by the use of external noise generators and/or by the band-limited wire channel, puts a hard limit on the number of samples N that can be taken from the channel by Alice and Bob. Assuming a noise bandwidth of Δf Hz, the Wiener-Khinchin theorem states that a maximum of N = 2T_bΔf samples can be taken per bit to ensure statistically independent samples. As a result, for a fixed bandwidth, increasing N increases T_b and, in return, reduces the bit rate R_b = 1/T_b. We finally note that the above limit on N also applies to Eve, no matter how strong her signal processing capabilities, ensuring solid secrecy.

As shown in Fig. 2, we implement a threshold-based detection of the noise variance, where two threshold values γ₁ and γ₂ are considered. Here, depending on the value of \hat{\sigma}^2, Alice and Bob make decisions on their partner's bit, also exploiting their own bit. Considering statistical decision errors due to the randomness of \hat{\sigma}^2, we obtain the corresponding error event probabilities for the cases of 00, 11, 01, and 10:

P_A(00 \to 01) = P(\hat{\sigma}^2 > \gamma_1) = Q\!\left(\frac{\gamma_1 - \sigma^2_{00}}{\sqrt{2/N}\,\sigma^2_{00}}\right), \quad
P_A(11 \to 10) = P(\hat{\sigma}^2 < \gamma_2) = Q\!\left(\frac{\sigma^2_{11} - \gamma_2}{\sqrt{2/N}\,\sigma^2_{11}}\right),
P_A(01 \to 00) = P(\hat{\sigma}^2 < \gamma_1) = Q\!\left(\frac{\sigma^2_{01} - \gamma_1}{\sqrt{2/N}\,\sigma^2_{01}}\right), \quad
P_A(10 \to 11) = P(\hat{\sigma}^2 > \gamma_2) = Q\!\left(\frac{\gamma_2 - \sigma^2_{01}}{\sqrt{2/N}\,\sigma^2_{01}}\right). (7)

Here, Q(·) denotes the tail probability of the standard Gaussian distribution. Substituting the probability values of (7) in (5), we obtain

P_b = \frac{1}{4}\left[ Q\!\left(\frac{\gamma_1 - \sigma^2_{00}}{\sqrt{2/N}\,\sigma^2_{00}}\right) + Q\!\left(\frac{\sigma^2_{01} - \gamma_1}{\sqrt{2/N}\,\sigma^2_{01}}\right) + Q\!\left(\frac{\gamma_2 - \sigma^2_{01}}{\sqrt{2/N}\,\sigma^2_{01}}\right) + Q\!\left(\frac{\sigma^2_{11} - \gamma_2}{\sqrt{2/N}\,\sigma^2_{11}}\right) \right]. (8)

Let us further simplify (8) by considering the variance ratios introduced in (4). Noting that σ²_00 < γ₁ < σ²_01 < γ₂ < σ²_11, we can simply normalize all these terms with respect to the smallest one, which is σ²_00, by assuming σ²_00 = σ². Further defining γ₁ = βσ² and γ₂ = κσ², we obtain

P_b = \frac{1}{4}\left[ Q\!\left(\sqrt{\tfrac{N}{2}}\,(\beta - 1)\right) + Q\!\left(\sqrt{\tfrac{N}{2}}\,\frac{\frac{2\alpha}{1+\alpha} - \beta}{\frac{2\alpha}{1+\alpha}}\right) + Q\!\left(\sqrt{\tfrac{N}{2}}\,\frac{\kappa - \frac{2\alpha}{1+\alpha}}{\frac{2\alpha}{1+\alpha}}\right) + Q\!\left(\sqrt{\tfrac{N}{2}}\,\frac{\alpha - \kappa}{\alpha}\right) \right], (9)

where 1 < β < 2α/(1+α) < κ < α. It is worth noting that in this ideal transmission scenario, the BEP of (9) does not depend on the individual noise variances but on the ratio α of the two resistance values as well as the two thresholds.
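The following Monte Carlo sketch cross-checks the threshold-based detector against the analytical BEP of (9). The parameter values α = 10 and N = 50 match those discussed in the text, while the threshold choices β = 4/3 and κ = 5 are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
alpha, N, trials = 10.0, 50, 200_000
m = 2 * alpha / (1 + alpha)      # normalized sigma^2_01 (sigma^2_00 = 1)
beta, kappa = 4.0 / 3.0, 5.0     # thresholds gamma1 = beta, gamma2 = kappa

def q(x):  # Gaussian tail probability
    return norm.sf(x)

# Analytical BEP, Eq. (9)
p_analytic = 0.25 * (
    q(np.sqrt(N / 2) * (beta - 1))
    + q(np.sqrt(N / 2) * (m - beta) / m)
    + q(np.sqrt(N / 2) * (kappa - m) / m)
    + q(np.sqrt(N / 2) * (alpha - kappa) / alpha)
)

# Simulation: estimate the variance from N samples and apply the thresholds
errors = 0
for case, var in [("00", 1.0), ("01", m), ("10", m), ("11", alpha)]:
    x = rng.normal(0.0, np.sqrt(var), size=(trials, N))
    est = np.mean(x**2, axis=1)          # Eq. (6)
    if case == "00":
        errors += np.count_nonzero(est > beta)   # 00 -> 01 for Alice
    elif case == "11":
        errors += np.count_nonzero(est < kappa)  # 11 -> 10
    elif case == "01":
        errors += np.count_nonzero(est < beta)   # 01 -> 00
    else:
        errors += np.count_nonzero(est > kappa)  # 10 -> 11
p_sim = errors / (4 * trials)
print(f"analytical {p_analytic:.4e} vs simulated {p_sim:.4e}")
```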
In light of these calculations, the BEP of the KLJN scheme can be minimized with respect to the threshold values, that is, β and κ, for given α and N. At this point, we provide the following remarks to assist our numerical evaluations in Section IV.

Remark 1: Even for moderate N and α values, such as N = 50 and α = 10, we observe that P_A(00 → 01) ≫ P_A(11 → 10) and P_A(01 → 00) ≫ P_A(10 → 11) due to the uneven distribution of the noise variances. In other words, the error events associated with the case of 00 dominate the BEP due to the close proximity of σ²_00 and σ²_01. In this case, (9) can be further simplified as

P_b \approx \frac{1}{4}\left[ Q\!\left(\sqrt{\tfrac{N}{2}}\,(\beta - 1)\right) + Q\!\left(\sqrt{\tfrac{N}{2}}\,\frac{\frac{2\alpha}{1+\alpha} - \beta}{\frac{2\alpha}{1+\alpha}}\right) \right], (10)

which is independent of κ, that is, of the second threshold value. In simple terms, no matter how large α and σ², the case of 00 becomes the bottleneck of the system.

Remark 2: For the case of R_H ≫ R_L, which is highly practical for effective modulation of the noise power density spectrum, we have α ≫ 1 and 2α/(1+α) ≈ 2; as a result, (10) can be further simplified to

P_b \approx \frac{1}{4}\left[ Q\!\left(\sqrt{\tfrac{N}{2}}\,(\beta - 1)\right) + Q\!\left(\sqrt{\tfrac{N}{2}}\,\frac{2 - \beta}{2}\right) \right]. (11)

As seen from (11), only the choice of β and N dictates P_b. Here, N is a measure of the overall quality of the noise variance estimates and directly influences P_b, while β builds a border between the 00 and 01/10 decision regions and affects the error probabilities associated with the case of 00.

Remark 3: An intuitive solution to the minimization problem of (11) with respect to β is β = 4/3, which ensures a uniform error probability for bit 0 and bit 1 by equalizing the two terms in (11). In this case, we obtain

P_b \approx \frac{1}{2}\, Q\!\left(\frac{1}{3}\sqrt{\frac{N}{2}}\right). (12)

(12) reveals an exponentially decaying BEP for the KLJN scheme with respect to N, i.e., P_b ≈ (1/24) exp(−N/36) for large N using the exponential approximation for the Q-function, which might be promising for wired TherCom schemes. We also note that in the asymptotic case for N, the effect of β diminishes and P_b is dominated by N.

C. New KLJN Detectors

In this subsection, we propose two new detectors for the KLJN scheme using joint voltage and current measurements. Considering the fact that the current and voltage amplitudes are independent due to the second law of thermodynamics [19], the BEP of the KLJN scheme can be further improved with these detectors by jointly processing voltage and current samples. We first provide the basics of current-based noise variance estimation and then present the two new detectors.

Considering the power density spectrum of the loop noise current from (3), we obtain the following noise variances for the cases of 00, 01/10, and 11, respectively:

s^2_{00} = \frac{2kT\,\Delta f}{R_L}, \quad s^2_{01} = s^2_{10} = \frac{4kT\,\Delta f}{(1+\alpha)R_L}, \quad s^2_{11} = \frac{2kT\,\Delta f}{\alpha R_L}. (13)

Here, the noise variance ratios are obtained as 1 : 2/(1+α) : 1/α, which is the opposite of the case for the voltage variances; that is, we observe the largest fluctuations for the case of 00. As a result, we consider the threshold-based modified detection scheme in Fig. 3 for the case of noise current samples, where \hat{s}^2 stands for the unbiased noise variance estimate, defined in the same way as in (6). Following a similar methodology as for the voltage measurements, the BEP for the case of current measurements is obtained as

\tilde{P}_b = \frac{1}{4}\left[ \tilde{P}_A(00 \to 01) + \tilde{P}_A(01 \to 00) + \tilde{P}_A(10 \to 11) + \tilde{P}_A(11 \to 10) \right], (14)

where \tilde{P}(·) is used here to distinguish these from the probabilities of the voltage-based error events.
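Before moving to the current-based thresholds, a short numerical check of Remark 3: the sketch below sweeps β in (11) and shows that the equal-error-probability choice β = 4/3 lies close to the numerical minimum, at which point the approximation P_b ≈ (1/2)Q(√(N/2)/3) of (12) applies. The value N = 100 is illustrative.

```python
import numpy as np
from scipy.stats import norm

N = 100
beta = np.linspace(1.01, 1.99, 981)

# Eq. (11): large-alpha approximation of the voltage-based BEP
p_b = 0.25 * (norm.sf(np.sqrt(N / 2) * (beta - 1))
              + norm.sf(np.sqrt(N / 2) * (2 - beta) / 2))

print(f"numerically optimal beta ~ {beta[np.argmin(p_b)]:.3f}")
print(f"equal-error choice beta = 4/3 gives P_b = "
      f"{0.5 * norm.sf(np.sqrt(N / 2) / 3):.3e}, grid minimum = {p_b.min():.3e}")
```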
Defining γ₃ = ηs² and γ₄ = ξs² with s² = s²_00, considering \hat{s}^2 ∼ N(s²_i, 2s⁴_i/N) for s²_i ∈ {s²_00, s²_01, s²_11}, and following steps similar to those of the previous subsection, we obtain

\tilde{P}_b = \frac{1}{4}\left[ Q\!\left(\sqrt{\tfrac{N}{2}}\,(1 - \xi)\right) + Q\!\left(\sqrt{\tfrac{N}{2}}\left(\tfrac{\xi(1+\alpha)}{2} - 1\right)\right) + Q\!\left(\sqrt{\tfrac{N}{2}}\left(1 - \tfrac{\eta(1+\alpha)}{2}\right)\right) + Q\!\left(\sqrt{\tfrac{N}{2}}\,(\eta\alpha - 1)\right) \right], (15)

where 1/α < η < 2/(1+α) < ξ < 1. For large enough α, and considering the dominance of the 11 error events due to the close proximity of s²_11 and s²_01 in this case, (15) can be simplified as

\tilde{P}_b \approx \frac{1}{4}\left[ Q\!\left(\sqrt{\tfrac{N}{2}}\,(\eta\alpha - 1)\right) + Q\!\left(\sqrt{\tfrac{N}{2}}\left(1 - \tfrac{\eta(1+\alpha)}{2}\right)\right) \right]. (16)

To ensure a uniform error probability for bit 0 and bit 1, we can set η = 4/(3α) in (16), which yields the same result as (12). Consequently, due to the symmetry in their variance ratios, we observe that voltage and current measurements provide an identical BEP. Nevertheless, by exploiting their independence, we introduce the following two detectors to further improve the error performance.

KLJN New Detector I (ND-I): Taking samples from both the voltage and current noise waveforms, this detector raises an error flag when the voltage and current bit decisions are different. It is worth noting that a similar detector is considered in [19], which performs secure bit exchange when both the current and voltage bit interpretations are secure (01/10). For ND-I, the correct symbol detection probability of Alice can be obtained as follows by considering the same decisions from voltage and current measurements, respectively, for 00, 11, 01, and 10 (conditioning on each transmitted bit pair is implied, as before):

P_c = \frac{1}{4}\Big[ P(\hat{\sigma}^2 < \gamma_1)\,\tilde{P}(\hat{s}^2 > \gamma_4) + P(\hat{\sigma}^2 > \gamma_2)\,\tilde{P}(\hat{s}^2 < \gamma_3) + P(\hat{\sigma}^2 > \gamma_1)\,\tilde{P}(\hat{s}^2 < \gamma_4) + P(\hat{\sigma}^2 < \gamma_2)\,\tilde{P}(\hat{s}^2 > \gamma_3) \Big]. (17)

Substituting the corresponding probabilities in (17), we obtain

P_c = \frac{1}{4}\Big[ \big(1 - Q\big(\sqrt{\tfrac{N}{2}}(\beta - 1)\big)\big)\big(1 - Q\big(\sqrt{\tfrac{N}{2}}(1 - \xi)\big)\big) + \big(1 - Q\big(\sqrt{\tfrac{N}{2}}\tfrac{\alpha - \kappa}{\alpha}\big)\big)\big(1 - Q\big(\sqrt{\tfrac{N}{2}}(\eta\alpha - 1)\big)\big) + \big(1 - Q\big(\sqrt{\tfrac{N}{2}}\tfrac{\frac{2\alpha}{1+\alpha} - \beta}{\frac{2\alpha}{1+\alpha}}\big)\big)\big(1 - Q\big(\sqrt{\tfrac{N}{2}}(\tfrac{\xi(1+\alpha)}{2} - 1)\big)\big) + \big(1 - Q\big(\sqrt{\tfrac{N}{2}}\tfrac{\kappa - \frac{2\alpha}{1+\alpha}}{\frac{2\alpha}{1+\alpha}}\big)\big)\big(1 - Q\big(\sqrt{\tfrac{N}{2}}(1 - \tfrac{\eta(1+\alpha)}{2})\big)\big) \Big]. (18)

Finally, the BEP of this detector is obtained as P_b^I = 0.5(1 − P_c), where the factor of 0.5 stands for the ratio of bit errors to symbol errors of the KLJN scheme. We observe that the BEP of this detector is a function of β, ξ, κ, and η, while a straightforward analytical solution for the optimum set of these parameters that minimizes P_b^I cannot be obtained. As a result, for given α and N, we minimize P_b^I using numerical methods in Section IV. An important aspect of ND-I is its ability to detect errors, which occurs when the voltage and current bit interpretations are different. We will show later by numerical results that this detector is extremely robust to bit errors when erroneously detected symbols are discarded.

KLJN New Detector II (ND-II): Taking the previous detector one step further, ND-II exploits the fact that the 00 and 11 error events are the dominant ones for voltage and current measurements, respectively. Consequently, this detector allows Alice and Bob to select their measurement types depending on their own bits. In other words, Alice (or Bob) considers current measurements if her own bit is 0, and uses voltage measurements if her own bit is 1, since the 00 ↔ 01 and 10 ↔ 11 error events are less likely for the current and voltage measurements of Alice, respectively. In light of this information, Alice and Bob make their decisions according to the procedures in Table II (C: current measurement, V: voltage measurement). It is worth noting that, for this detector, Alice and Bob can make different decisions for the cases of 01 and 10, but this is very unlikely. Accordingly, the BEP of ND-II is obtained as

P_b^{II} = \frac{1}{4}\left[ \tilde{P}_A(00 \to 01) + \tilde{P}_A(01 \to 00) + P_A(10 \to 11) + P_A(11 \to 10) \right]. (19)

Substituting the corresponding probabilities in (19), we obtain

P_b^{II} = \frac{1}{4}\left[ Q\!\left(\sqrt{\tfrac{N}{2}}\,(1 - \xi)\right) + Q\!\left(\sqrt{\tfrac{N}{2}}\left(\tfrac{\xi(1+\alpha)}{2} - 1\right)\right) + Q\!\left(\sqrt{\tfrac{N}{2}}\,\frac{\kappa - \frac{2\alpha}{1+\alpha}}{\frac{2\alpha}{1+\alpha}}\right) + Q\!\left(\sqrt{\tfrac{N}{2}}\,\frac{\alpha - \kappa}{\alpha}\right) \right]. (20)

As seen from (20), P_b^{II} does not depend on the fragile thresholds β and η, and it only consists of the weaker probability terms that were neglected earlier in the BEP calculations. However, we again resort to numerical tests to determine the optimum ξ and κ values that minimize P_b^{II}.
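To illustrate how the remaining two thresholds of ND-II can be tuned, the sketch below performs a brute-force grid search over (ξ, κ) in (20); the values α = 10 and N = 100 are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

alpha, N = 10.0, 100
m = 2 * alpha / (1 + alpha)  # normalized sigma^2_01 (voltage) level
c = np.sqrt(N / 2)

# Search ranges for the two thresholds that remain in Eq. (20):
# 2/(1+alpha) < xi < 1 (current side), 2a/(1+a) < kappa < alpha (voltage side)
xi = np.linspace(2 / (1 + alpha) + 0.01, 0.99, 200)
kappa = np.linspace(m + 0.05, alpha - 0.05, 200)
XI, KAPPA = np.meshgrid(xi, kappa)

p_II = 0.25 * (norm.sf(c * (1 - XI))
               + norm.sf(c * (XI * (1 + alpha) / 2 - 1))
               + norm.sf(c * (KAPPA - m) / m)
               + norm.sf(c * (alpha - KAPPA) / alpha))

idx = np.unravel_index(np.argmin(p_II), p_II.shape)
print(f"optimal xi ~ {XI[idx]:.3f}, kappa ~ {KAPPA[idx]:.2f}, "
      f"min P_b^II = {p_II[idx]:.3e}")
```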
That being said, the search space can be reduced by assuming ξ = κ/α, which provides a uniform error probability for bit 0 and bit 1 by ensuring P̃_A(00 → 01) = P_A(11 → 10) and P̃_A(01 → 00) = P_A(10 → 11).

III. WIRELESS INFORMATION TRANSFER WITH THERMAL NOISE MODULATION

In this section, taking the KLJN scheme one step further, we introduce the TherMod scheme, which modulates the thermal noise level to transmit information over wireless channels.

A. Fundamentals of Thermal Noise Modulation (TherMod)

In Fig. 4, we present the generic block diagram of the TherMod transceiver. Here, similar to the KLJN scheme, the incoming information bits at the transmitter determine the index of the selected resistor and, in turn, the power spectral density of the generated thermal noise waveform. However, unlike the KLJN scheme, only one of the terminals performs this resistor index selection, and an unguided transmission medium is considered. Due to the extremely low noise power levels at the transmitter, only short-range and line-of-sight (LOS)-dominated communication might be possible, with the aid of high-gain antennas such as horn antennas, as experimentally demonstrated in [7] for the first time. Here, circuit sensitivity might be one of the most critical practical challenges for this system, since the received noise power should be strong enough to be detected. Alternatively, external noise generators can be used to extend the coverage, at the cost of higher transmitter complexity and power consumption. Another way of boosting the noise level over the channel might be employing a power amplifier prior to transmission, which also increases the power consumption. In what follows, we put our emphasis on the basic model of Fig. 4. We also note that the TherMod scheme neither needs nor modulates ambient RF signals as in BackCom, and it differs from classical communication systems that rely on carrier signal modulation.

Denoting the random voltages stemming from the thermal noise of the low- and high-valued resistors by v_L(t) and v_H(t), respectively, the transmitted signal can be expressed as

s(t) = Σ_n v_{i_n}(t) g(t − nT_b),   (21)

where i_n ∈ {L, H} according to the incoming bits. Here, T_b is the bit duration as defined before, and g(t) stands for the unit-gain rectangular pulse shape with a duration of T_b seconds. A realization of this waveform, which is also a Gaussian random process with two variance levels, is illustrated in Fig. 4. Focusing on a single bit duration, the noise samples taken from this process will be independent and identically distributed Gaussian random variables; however, their variance carries information, as in the KLJN scheme. Although the thermal noise generated by the resistors is white and can be observed at any frequency, the considered antennas as well as the receiver equipment (sampling rate) limit the bandwidth of the observed noise signal. Considering a pure LOS link between the transmitter and the receiver without any multipath components and ignoring time delays, according to the Friis transmission equation [24], the received signal is obtained as

s_RF(t) = √(G_t G_r) (λ/(4πd)) s(t),   (22)

where d is the distance between the terminals, G_t/G_r are the transmit/receive antenna gains, and λ is the wavelength. A software-defined radio (SDR)-based receiver with carrier frequency f_c and sampling rate f_s can be used to obtain complex baseband samples for further processing.
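To give a feel for the orders of magnitude involved, a rough link-budget sketch follows. All link parameters here are illustrative assumptions, not from the paper; since the text gives only proportionalities (σ²_r ∼ 4kTR_iBP_G and σ²_w ∼ kTB), an assumed reference impedance R₀ is needed to turn their ratio into a dimensionless δ.

```python
import numpy as np

# Back-of-envelope TherMod link budget with made-up values.
fc  = 26e9                   # Hz, assumed carrier frequency
lam = 3e8 / fc               # wavelength (m)
d   = 1.0                    # m, assumed short-range LOS link
Gt = Gr = 10**(20/10)        # assumed 20 dBi horn antennas
R_L, R0, alpha = 1e3, 50.0, 10.0   # ohms; R0 is an assumed reference impedance

P_G = (lam / (4*np.pi*d))**2 * Gt * Gr          # overall path gain
delta = 4 * (R_L / R0) * P_G                    # delta = sigma_0^2 / sigma_w^2
print(f"path gain P_G = {10*np.log10(P_G):.1f} dB")
print(f"delta = {delta:.2f}, alpha*delta = {alpha*delta:.2f}")
```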
In light of this information, the received complex baseband signal is obtained as s_B(t) = LPF{s_RF(t)e^{−j2πf_c t}}, and its sampled version is denoted by s_B[n] (or simply by s_n). Here, LPF stands for low-pass filtering after downconversion. Filtering operations within the SDR ensure a band-limited white noise process at the output with independent in-phase and quadrature components. As a result, the nth sample in the complex baseband can be represented as

s_n = r_n + w_n,   (23)

where r_n stands for the samples of the received useful signal stemming from the thermal noise generated by the transmitter, while w_n is the additive white Gaussian noise (AWGN) sample introduced by the receiver circuitry (amplification, filtering, and downconversion). In other words, the received complex baseband samples include not only the information-carrying noise terms but also the disruptive noise terms added on top of them. In simple terms, the variance of w_n is fixed and proportional to σ²_w ∼ kTB, where B ≈ f_s according to the Nyquist theorem. On the other hand, the variance of r_n changes with respect to the selected resistor at the transmitter and is proportional to σ²_r ∼ 4kTR_iBP_G for i ∈ {L, H}, where P_G stands for the overall path gain including the free-space propagation loss and antenna gains, i.e., P_G ∝ (λ/(4πd))² G_t G_r. As shown in Fig. 4, the primary task of the receiver is to make a decision on the transmitted bit by processing the received samples (s_n) accordingly. Considering this basic communication model, in the following subsection, we provide a theoretical framework to assess the BEP performance of the TherMod scheme.

B. A Theoretical Perspective on TherMod

In this subsection, considering the statistics of the received noise samples as well as the AWGN terms at the receiver, we derive the theoretical BEP of the TherMod scheme. As in the KLJN scheme, rather than the individual noise variances, their ratio dictates the overall BEP. Denoting the variance of r_n for bit 0 (R_L) and bit 1 (R_H) by σ²₀ and σ²₁, respectively, we have σ²₁ = ασ²₀, where α is as defined before and stands for the ratio of the two resistance values. To assess the BEP, we define a quality metric, similar to the signal-to-noise ratio (SNR) in traditional communication systems, by δ = σ²₀/σ²_w. Here, δ relates the useful (information-carrying) noise variance to the receiver (disruptive) noise variance and directly affects the error performance. In light of this information and considering (23), we obtain s_n ∼ CN(0, σ̃²₀) and s_n ∼ CN(0, σ̃²₁) for bit 0 and bit 1, respectively, where

σ̃²₀ = (1 + δ)σ²_w   and   σ̃²₁ = (1 + αδ)σ²_w.   (24)

Here, CN(0, σ²) stands for the complex Gaussian distribution with zero mean and variance σ². Similar to the KLJN scheme, we consider sample variance calculations using N complex baseband samples for each bit duration; as a result, a total of f_s = N/T_b complex samples are processed per second. Considering the complex nature of s_n, its variance can be estimated as

σ̂²_s = (1/N) Σ_{n=1}^{N} |s_n|².   (25)

Here, for large enough N, the distribution of σ̂²_s can be approximated as Gaussian and follows N(σ̃²ᵢ, σ̃⁴ᵢ/N) for i ∈ {0, 1}.
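A short check (ours, not from the paper) of this complex-sample model: generating s_n with the total variance of (24) for bit 0 and applying the estimator of (25) reproduces the Gaussian approximation N(σ̃²ᵢ, σ̃⁴ᵢ/N).

```python
import numpy as np

# Verify the mean and variance of the complex sample-variance estimator (25).
rng = np.random.default_rng(3)
sigma2_w, delta, N, trials = 1.0, 0.1, 200, 100_000

var0 = (1 + delta) * sigma2_w                    # total variance for bit 0, per (24)
s = (rng.normal(0, np.sqrt(var0/2), (trials, N))
     + 1j * rng.normal(0, np.sqrt(var0/2), (trials, N)))
var_hat = np.mean(np.abs(s)**2, axis=1)          # estimator (25)

print(f"mean: {var_hat.mean():.4f} (theory {var0:.4f})")
print(f"var : {var_hat.var():.6f} (theory {var0**2/N:.6f})")
```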
Accordingly, the BEP of the TherMod scheme is obtained by considering the threshold-based variance detection shown in Fig. 4, where a decision is made on the transmitted bit by comparing σ̂²_s with a predetermined threshold γ. We obtain

P_b = (1/2)[ Q(√N (γ − σ̃²₀)/σ̃²₀) + Q(√N (σ̃²₁ − γ)/σ̃²₁) ].   (26)

Scaling the threshold with respect to σ²_w as γ = χσ²_w for 1 + δ < χ < 1 + αδ, (26) can be re-expressed as

P_b = (1/2)[ Q(√N (χ − 1 − δ)/(1 + δ)) + Q(√N (1 + αδ − χ)/(1 + αδ)) ].   (27)

Here, we observe that the BEP of the TherMod scheme depends on the selected threshold (χ), the ratio of the resistance values (α), and the ratio of the useful and disruptive noise variances (δ). In what follows, we further investigate (27) from different perspectives.

Remark 4: To have uniform error probabilities for bit 0 and bit 1, we can equate the arguments of the Q-functions in (27) by

χ = 2(1 + δ)(1 + αδ)/(2 + (1 + α)δ),   (28)

for which (27) simplifies to

P_b = Q(√N (α − 1)δ/(2 + (1 + α)δ)).   (29)

We note that χ given in (28) is valid for α > 1, which is also a requirement for the operation of TherMod. For the case of α ≫ 1, we obtain

P_b ≈ Q(√N αδ/(2 + αδ)),   (30)

which reveals an exponentially decaying BEP with respect to N. This can also be verified for the special case of αδ ≫ 1, for which one obtains P_b ∼ Q(√N) from (30). On the other hand, for small δ, the gap between σ̃²₀ and σ̃²₁ will shrink, and one obtains P_b ∼ Q(√N αδ/2) for αδ ≪ 1, which results in a degraded BEP performance compared to the earlier case.

IV. NUMERICAL RESULTS

In this section, we provide our computer simulation and numerical results for both the KLJN and TherMod schemes. For these two schemes, we assume that the ratio of the high- and low-valued resistances is α = 10 unless specified otherwise.

A. Results for the KLJN Scheme

In Fig. 5, we plot the BEP performance of the KLJN scheme using voltage-based measurements only, where we assumed β = 4/3, κ = 5 and β = 1.3, κ = 4. Here, we observe that when we model the sample variance directly as σ̂² ∼ N(σ²ᵢ, 2σ⁴ᵢ/N) during Monte Carlo simulations (Gaussian fit), theoretical and simulation results match perfectly. On the other hand, BEP results obtained by generating N Gaussian samples for the sample variance calculation according to (6) deviate slightly from the theoretical ones, due to the imperfect fit of the chi-square distribution to the Gaussian distribution. We also note that this deviation depends on the selected parameters. Although even N = 50 would normally suffice for a Gaussian fit to the chi-square distribution, the overall convergence is not very fast due to its high skewness [25]. Nevertheless, our theoretical derivation in (9) might still be a good indicator of the practical BER performance.

In Fig. 6, we perform a 3D search over β and κ considering the BEP values obtained from (9). Here, we simply vary N from 50 to 400. As seen from Fig. 6, the optimal β that minimizes the BEP lies around 1.3, while the effect of κ is not significant in this specific β region, which is consistent with our discussion under Remark 1. Similar to Fig. 5, we observe a significant BEP improvement with increasing N.

In Fig. 7, we compare the BER performances of our two new detectors with the classical (voltage-based) KLJN detector. For the optimization of the thresholds, we considered N = 100 for all three detectors with a search resolution between 0.05 and 0.001, which provided β = 1.3160, κ = 3.1512 for the classical KLJN detector; β = 1.3150, κ = 3.1532, η = 0.1300, ξ = 0.3168 for ND-I; and κ = 3.1512, ξ = 0.3148 for ND-II. Unsurprisingly, almost the same optimal values are obtained for these detectors thanks to the common probability terms in (9), (18), and (20). We also note that the optimal threshold values change slightly with respect to N; however, the same thresholds given above are used for all considered N values for simplicity.
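The closed forms above are easy to evaluate; the sketch below (ours) computes χ from (28) and the BEP from (27)/(29), reproducing the χ = 1.4194 value quoted later for α = 10 and δ = 0.1.

```python
import math

# Gaussian tail (Q-function) via the complementary error function.
Q = lambda x: 0.5 * math.erfc(x / math.sqrt(2.0))

def uniform_threshold(alpha, delta):
    """chi of (28): equal Q-function arguments in (27)."""
    a, b = 1 + delta, 1 + alpha * delta
    return 2 * a * b / (a + b)

def bep(N, alpha, delta, chi=None):
    """Theoretical TherMod BEP per (27); defaults to the uniform threshold."""
    a, b = 1 + delta, 1 + alpha * delta
    chi = uniform_threshold(alpha, delta) if chi is None else chi
    return 0.5 * (Q(math.sqrt(N) * (chi - a) / a)
                  + Q(math.sqrt(N) * (b - chi) / b))

print(uniform_threshold(10, 0.1))        # -> 1.4193..., matching Fig. 9
for N in (100, 400, 1600):
    print(N, bep(N, 10, 0.1))
```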
As seen from Fig. 7, ND-I achieves a performance very similar to that of the classical detector when random bit errors are counted whenever the voltage and current interpretations differ, that is, when ND-I raises an error flag. On the other hand, if the corresponding bits are discarded when an error is detected, the performance of ND-I improves dramatically. This behavior can be explained by the fact that it is very unlikely to have bit errors for both voltage and current measurements at the same time. Nevertheless, these discarded bits correspond to the loss of 7-3.5% of the transmitted bits for the considered N values in Fig. 7, which are between 50 and 75. ND-II eliminates the need for error monitoring with a slight degradation in BER compared to ND-I. Thanks to its adaptive sample variance calculation among voltage and current samples, a remarkable BER performance is obtained for ND-II. We note that for all three detectors, our theoretical derivations (given by (9), (18), and (20), respectively) are accurate when we fit the sample variance (σ̂² or ŝ²) directly to the Gaussian distribution (Gaussian fit), while a slight gap is observed between the theoretical and computer simulation curves when Gaussian noise samples are used for the sample variance calculation (random samples), due to the imperfect fit of the chi-square distribution to the Gaussian distribution, as discussed earlier. This gap is more visible for ND-II due to its limited number of noise samples and considerably low BEP values. Nevertheless, our theoretical framework based on the Gaussian approximation of the sample variance stands out as a solid baseline not only for the theoretical evaluation but also for the threshold optimization of the KLJN scheme.

B. Results for the TherMod Scheme

In Fig. 8, we provide the BEP performance of the TherMod scheme with respect to the number of complex samples per bit (N), using the threshold value (χ) obtained in (28) for varying δ, where δ ∈ {0.05, 0.1, 0.2, 0.5}, and make comparisons with the results obtained from Monte Carlo simulations. As seen from Fig. 8, the BER performance of the TherMod scheme is highly dependent on the δ value, which has an SNR-like effect on the detection mechanism that is more dominant than increasing N. We note that the theoretical BEP obtained from (29) matches perfectly with the computer simulation results when the sample variance (σ̂²_s) is directly generated as a Gaussian random variable, a phenomenon similar to that reported for the KLJN scheme. However, slight deviations are again observed due to the imperfect fit of the chi-square distribution to the Gaussian distribution, particularly with increasing δ.

In Fig. 9, to observe the effect of the selected threshold on the BEP performance, we search for the minimum BEP using (27) for different N values, where δ = 0.1 is assumed. Here, χ is varied from 1 + δ = 1.1 to 1 + αδ = 2 with increments of 0.001. For reference, the fixed threshold value obtained from (28) is also marked in this figure as a vertical line at χ = 1.4194. As seen from Fig. 9, the χ value that minimizes the BEP changes slightly with respect to N. On the other hand, we observe that the fixed χ value, which ensures a uniform error probability for bit 0 and bit 1, provides approximately the minimum BEP for all cases, which can also be verified by substituting numerical N, α, and δ values in (29).

In Fig. 10, we investigate the effect of the ratio (α) of the two resistance values at the transmitter on the BEP performance.
As discussed earlier, in our basic model we simply assume that α directly scales the variance of the useful noise samples; as a result, it should be sufficiently high for the reliable separation of the two possible noise levels at the receiver. Here, we vary α from 1 to 40 for two different δ values. As seen from Fig. 10, increasing α directly improves the BEP, while the level of saturation is more evident for the case of δ = 0.2, where the effect of the AWGN samples is less severe. We also note that for α values close to unity, the BEP converges to 0.5. Nevertheless, practical issues related to the selection of higher α values, as well as their effect on the receiver side, would be worthy of investigation in the future.

Finally, in Fig. 11, we investigate the effect of δ on the BER performance by also considering impulse noise. Here, the impulse noise is modeled as a Bernoulli-Gaussian process in the complex baseband with an impulse probability of p and an impulse power that is ten times the AWGN power [26]. As seen from Fig. 11, more frequent impulse noise (higher p) causes irreducible error floors by disturbing the threshold-based detection.

V. CONCLUSIONS

In this paper, we have laid the theoretical fundamentals of communication by means of thermal noise, or simply TherCom, for extremely energy-efficient wired/wireless networks of the future. In particular, we have put our emphasis on the KLJN secure bit exchange scheme, whose potential is not yet fully perceived, and then, taking inspiration from it, we have proposed the wireless TherMod scheme. We note that the power consumption of TherCom schemes can be as low as that of BackCom systems when operated in the stealth mode by simply indexing the available resistors, while one can expect relatively higher power requirements for the case of external noise generators. We conclude that this preliminary work and its findings might help to unlock the true potential of communication through noise-like signals. The following topics might be of interest for future research: generalization of TherCom schemes to more than two resistors at the communicating parties to transmit a higher number of bits, derivation of information-theoretic bounds on the data rate and the secrecy capacity, exploration of practical issues such as wire resistances, sampling imperfections, and timing mismatches, and the proposal of potential coding schemes to further improve the error performance.
ALMA observations of a misaligned binary protoplanetary disk system in Orion

We present ALMA observations of a wide binary system in Orion, with a projected separation of 440 AU, in which we detect submillimeter emission from the protoplanetary disks around each star. Both disks appear moderately massive and have strong line emission in CO 3-2, HCO⁺ 4-3, and HCN 4-3. In addition, CS 7-6 is detected in one disk. The line-to-continuum ratios are similar for the two disks in each of the lines. From the resolved velocity gradients across each disk, we constrain the masses of the central stars and show consistency with optical-infrared spectroscopy, both indicative of a high mass ratio, ∼9. The small difference between the systemic velocities indicates that the binary orbital plane is close to face-on. The angle between the projected disk rotation axes is very high, ∼72 degrees, showing that the system did not form from a single massive disk or a rigidly rotating cloud core. This finding, which adds to related evidence from disk geometries in other systems, protostellar outflows, stellar rotation, and similar recent ALMA results, demonstrates that turbulence or dynamical interactions act on small scales, well below that of molecular cores, during the early stages of star formation.

1. INTRODUCTION

About one in two stars of solar mass and greater are born in pairs (Duchêne & Kraus 2013). Because of their common origin and age, young stellar binaries provide useful benchmarks for understanding star formation and evolution, and are extensively studied (Mathieu 1994). Circumstellar disks can exist around the individual stars in wide systems with semi-major axes ≳ 100 AU, with masses and lifetimes that are similar to those around single stars (Jensen et al. 1996; Cieza et al. 2009; Kraus et al. 2012; Harris et al. 2012) and apparently similar planetary end-products (Desidera & Barbieri 2007).

Circumstantial evidence has long suggested that the rotation axes of such wide binaries are mis-aligned. This includes measurements of stellar rotation (Hale 1994) and non-parallel protostellar jets (e.g., Lee et al. 2002; Chen et al. 2008). More recently, detailed modeling of infrared interferometry and spectral energy distributions has indicated non-aligned disk planes in the T Tau (Ratzka et al. 2009) and GV Tau (Roccatagliata et al. 2011) systems. With the advent of the Atacama Large Millimeter/Submillimeter Array (ALMA), the rotation of individual disks in wide binary systems has now been directly measured and shown to be misaligned in HK Tau (Jensen & Akeson 2014) and AS 205 (Salyk et al. 2014). These observations demonstrate that wide binaries do not form in large, co-rotating structures and indicate the importance of stochastic processes during the early phases of star formation, either gas turbulence or dynamical interactions of young protostars. Planetary systems that form in such mis-aligned systems may be subject to secular torques that can affect their orbital evolution (Batygin 2012).

The subject of this paper is V2434 Ori in the M43 HII region of Orion. HST imaging by Smith et al. (2005) reveals this to be a binary system with an angular separation of 1.″1, corresponding to 440 AU at our assumed distance to Orion of 400 pc (Sandstrom et al. 2007; Menten et al. 2007). The optically fainter component is surrounded by a large silhouette disk and drives a jet with associated Herbig-Haro (HH 668) objects.
Following the naming convention for HST-identified protoplanetary disks ("proplyds") proposed by O'Dell & Wen (1994), we refer to the system as 253-1536. The discovery that both binary members have disks was made by Mann & Williams (2009) through submillimeter imaging. The two disks were subsequently detected at 7 mm by Ricci et al. (2011), and the near-blackbody millimeter colors indicate that both harbor a substantial population of large dust grains, characteristic of protoplanetary disks. Following the nomenclature in those papers, we denote the brighter millimeter source as component A, although it is the fainter optical source.

Here we present new ALMA observations that reveal the molecular line emission from the disks. These data allow us to examine disk masses, kinematics, and chemistry, as well as constrain the masses of their central stars. Similar to the aforementioned HK Tau and AS 205 results, we find here that the disks in this system are strongly misaligned. The observations are described in §2, the results are presented in §3, and the implications are discussed in §4.

2. OBSERVATIONS

The data analyzed here come from the fifth field observed in the ALMA Cycle 0 (project 2011.0.00028.S) study of the Orion proplyds by Mann et al. (2014), wherein the details of the data acquisition and reduction can be found. That paper discusses the disk dust masses as inferred from the 856 µm (350.5 GHz) continuum, but here we focus on the molecular lines observed in the same observations: CO 3-2 and CS 7-6 in the lower sideband, and HCO⁺ 4-3 and HCN 4-3 in the upper sideband. The Hanning-smoothed spectral resolution of these data is dν = 488.28 kHz, corresponding to velocity channels of dv ≈ 0.42 km s⁻¹. The full extent of the two disks in the binary system is less than the maximum recoverable scale of 5″, so no emission is resolved out. The resolution of the continuum and line maps, ∼0.″5, is lower than that of the Submillimeter Array (SMA) images of Mann & Williams (2009), but the sensitivity of the ALMA data is much higher, allowing us to detect all four molecular lines and to measure velocity gradients within each disk.

3. RESULTS

3.1. Moment Maps

Maps of the submillimeter continuum, the optical HST image, and the molecular lines are displayed in Figure 1. Both disks are clearly detected in the continuum, CO 3-2, HCO⁺ 4-3, and HCN 4-3. The large silhouette disk around 253-1536A, the brighter millimeter source, is also detected in CS 7-6. The positions of the two continuum peaks are given in Table 1. The mapped area (primary beam) is much larger than the region shown in Figure 1. From inspection of the full maps, we see strong CO emission over a range of size scales from the background molecular cloud but much weaker emission in the other lines. The CO contamination affects the disk morphologies, which show a small offset between line and continuum peaks, and the resulting flux measurements. However, the contamination in the other lines is negligible. The uncontaminated HCO⁺ line is in fact the strongest line in the bandpass. The HCN line is ∼4-5 times weaker, and the CS line > 25 times weaker. The continuum and integrated line fluxes for each disk are given in Table 2. The error due to flux calibration is estimated to be about 10% (Mann et al. 2014).
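As a small cross-check of the quoted channel width (ours, assuming the CO 3-2 rest frequency sets the frequency-to-velocity conversion):

```python
# 488.28 kHz Hanning-smoothed channels correspond to ~0.42 km/s at CO 3-2.
c = 2.99792458e5          # km/s
nu_CO32 = 345.79599e9     # Hz, CO J=3-2 rest frequency
d_nu = 488.28e3           # Hz
print(f"dv = {c * d_nu / nu_CO32:.3f} km/s")   # ~0.423 km/s
```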
The continuum flux density is stronger than that measured by the SMA by Mann & Williams (2009), partly due to the slightly shorter observing wavelength but mostly due to the greater quality (phase coherence and signal-to-noise ratio) of the ALMA data. The CO fluxes are unreliable measures of the true disk emission due to the aforementioned cloud confusion. The main uncertainty in the fluxes of the other lines, estimated to be about 20%, is in separating the overlap between the two disks, as the detectable gas emission extends further than the dust (e.g., Hughes et al. 2008; Andrews et al. 2012).

The first moment (intensity weighted mean velocity) of each line is shown in the right panels of Figure 1 and reveals consistent velocity gradients across both disks. The directions of the gradients in the two disks are very different from each other, which indicates that they do not share the same axis of rotation.

Figure 1. Images of the continuum and line data. The top row shows the ALMA continuum emission at the central observing wavelength of 856 µm, and the HST image in the F658 narrow band (Hα) filter. The HST image shows the two stars and the large silhouette disk around the fainter source, labeled component A on account of its brighter millimeter emission. A faint optical jet is seen perpendicular to the disk, and a diffraction pattern is seen around the optically brighter component B. The ALMA image shows disk dust emission from both binary members. The lower four rows show velocity moment 0 and 1 maps for the four observed lines. Line emission from CO 3-2, HCO⁺ 4-3, and HCN 4-3 is detected toward both disks, and CS 7-6 toward the large silhouette disk around component A. The velocity maps are on the same scale, 7 to 13 km s⁻¹, and show a similar pattern of gradients across each source but oriented in very different directions.

We use the HCO⁺ data to analyze the velocity structure in more detail as they provide the best combination of high signal-to-noise ratio and low cloud confusion.

3.2. Analysis of HCO⁺ data

Channel maps of the HCO⁺ 4-3 emission are plotted in Figure 2. These more clearly show the velocity gradients across each of the two sources and the difference in the angle between them. To measure the size and direction of the velocity gradient, we fit elliptical Gaussians to the channel maps as in Tobin et al. (2012). The centroids of the fits toward the two sources are shown, color-coded by velocity, in Figure 3. The reversal of the centroids at the lowest and highest velocities toward the large silhouette disk around 253-1536A is a clear signature of gravitational motion: the gas moves faster closer to the star. Because of the weaker emission toward 253-1536B, fewer channels were detected and we are unable to see the same signature. Given the strength and small velocity extent of the HCO⁺ and HCN lines, however, we assume that they similarly trace the kinematics of the dusty disk rather than an outflow. Through linear fits to the positions of the centroids, we determine the projected rotational planes of the two disks. (As the data are Hanning smoothed, we binned the velocity channels by 2 to provide independent points for the purpose of making the linear fits.) Following the convention that the position angle (PA) is measured east of north to the redshifted edge of the disk, we find PA_A = 69.7° ± 1.4° and PA_B = 136° ± 15° for components A and B, respectively.
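The sketch below illustrates, on synthetic centroids, the kind of linear fit described above: channel-centroid offsets are fit with a principal direction and converted to a position angle east of north toward the redshifted side. All data values here are made up for illustration.

```python
import numpy as np

# Mock channel centroids for a disk with an assumed PA of 69.7 deg.
rng = np.random.default_rng(0)
v = np.linspace(8.5, 12.5, 9)                    # km/s, channel velocities
pa_true = np.deg2rad(69.7)
r = 0.3 * (v - v.mean())                         # arcsec, offset along the major axis
dra  = r * np.sin(pa_true) + rng.normal(0, 0.02, v.size)   # east offset
ddec = r * np.cos(pa_true) + rng.normal(0, 0.02, v.size)   # north offset

# Total least squares via SVD handles steep gradients gracefully.
xy = np.column_stack([dra - dra.mean(), ddec - ddec.mean()])
_, _, vt = np.linalg.svd(xy, full_matrices=False)
e, n = vt[0]                                      # principal direction (east, north)
if np.dot([e, n], [dra[-1] - dra[0], ddec[-1] - ddec[0]]) < 0:
    e, n = -e, -n                                 # point toward the redshifted end
print(f"PA = {np.degrees(np.arctan2(e, n)) % 360:.1f} deg east of north")
```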
These are projections, and the true angle between the rotational axes of the disks (derived from the dot product of the angular momentum vectors) depends on the inclinations of the two disks, as in Jensen & Akeson (2014). For the resolved silhouette disk, 253-1536A, we determine an inclination from face-on, i_A = 65° ± 5°, based on the 0.″6 × 1.″4 size of the optical shadow (Smith et al. 2005), but the inclination of the unresolved disk, 253-1536B, is unknown and can vary from 0° to 180°, where 90° is edge-on. We plot the angle, ∆, as a function of this inclination in the left panel of Figure 4. For random orientations, the probability of a particular inclination is proportional to the sine of the inclination (i.e., edge-on disks are more common than face-on). Using this as a prior, and without attempting to fit the observations further, we plot the posterior probability distribution for ∆ in the right panel of Figure 4 and derive a mean value and standard deviation, ∆ = 72° ± 20°. Thus we can robustly conclude that the disks are indeed strongly misaligned.

The projected radius from the star is plotted against the channel velocity for the silhouette disk around 253-1536A in Figure 5. The S-shape is due to a combination of resolution, sensitivity, and Keplerian rotation. Because the disk is inclined and not resolved along the minor axis, emission at velocities close to the systemic motion of the star is dominated by the bright emission from the inner regions of the disk that project to low radial velocities (rather than the fainter emission from the slower moving outer disk), and channel maps only show slight offsets with respect to the star. There is less confusion at greater relative velocities but also weaker emission. There are a few low and high velocity channels, however, for which we can detect the increasing rotational velocity closer to the star. These are marked in blue and compared with a Keplerian profile in Figure 5 to provide a rough estimate of the central stellar mass. The resolution and signal-to-noise ratio of the data are insufficient to attempt a more detailed model and more rigorous fitting as in, e.g., Rosenfeld et al. (2012). With a well constrained inclination, i_A = 65°, as discussed above, we infer M* ∼ 3.5 M⊙. The moderately high stellar mass for this optically faint star is consistent with the X-Shooter ultraviolet-optical-infrared spectrum discussed by Ricci et al. (2011). The lack of prominent absorption lines led them to conclude that the star is heavily veiled and of spectral type F or G. For an age range of 1-3 Myr, this corresponds to a mass M* ∼ 2.5-4 M⊙.

We can similarly look at the rotation curve for 253-1536B. The disk emission is weaker, both in the line and continuum, and we do not resolve it spatially. However, we are able to measure a shift in the peak position in different spectral channels, and from this we can study its kinematics. Although we do not detect a Keplerian turnover, the velocity gradient in Figure 3 provides a lower limit to the stellar mass, M* ≳ 0.2 M⊙/sin²i_B. This is consistent with the spectral typing of an M2 star (M* ∼ 0.4 M⊙ for an age range of 1-3 Myr) from the X-Shooter spectrum (Ricci et al. 2011) and the catalog of Hillenbrand (1997), which classifies it as M2.5e (source 767).
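A minimal sketch of the misalignment estimate described above, using the standard spherical relation cos ∆ = cos i_A cos i_B + sin i_A sin i_B cos(PA_A − PA_B), the measured PAs, i_A = 65° ± 5°, and a sin i prior on i_B; the sign ambiguity of i_A is ignored here, so this is illustrative rather than a reproduction of the paper's Figure 4.

```python
import numpy as np

# Monte Carlo posterior for the disk-disk misalignment angle Delta.
rng = np.random.default_rng(42)
n = 200_000
pa_a = np.deg2rad(rng.normal(69.7, 1.4, n))
pa_b = np.deg2rad(rng.normal(136.0, 15.0, n))
i_a  = np.deg2rad(rng.normal(65.0, 5.0, n))
i_b  = np.arccos(rng.uniform(-1.0, 1.0, n))      # p(i) ~ sin(i), i in [0, 180] deg

cos_d = (np.cos(i_a) * np.cos(i_b)
         + np.sin(i_a) * np.sin(i_b) * np.cos(pa_a - pa_b))
delta = np.degrees(np.arccos(np.clip(cos_d, -1, 1)))
print(f"Delta = {delta.mean():.0f} +/- {delta.std():.0f} deg")  # ~72 +/- 20
```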
Both sets of authors note emission lines in the spectrum that are signatures of strong accretion.

The central velocities of 253-1536A and B are v_sys = 10.55 and 10.85 km s⁻¹, respectively, with errors of about 0.1 km s⁻¹. The escape speed of the two stars from one another is ∼2.5 km s⁻¹ at the projected separation of 440 AU, greater than the measured 0.3 km s⁻¹ difference unless the system is ≳10⁴ AU apart and viewed almost edge-on. However, the probability of a chance alignment of the two stars was already known to be low (see the discussion in Mann & Williams 2009), and this additional kinematic information more likely indicates that this is a bound binary system with an orbital plane close to face-on.

4. SUMMARY AND DISCUSSION

The tremendous increase in sensitivity that ALMA provides is transforming our view of protoplanetary disks. Whereas we had known of the existence of the two disks under study here from SMA continuum observations, we can now detect their line emission and study their kinematics. The magnitude of the measured velocity gradients provides bounds on the central masses that are consistent with stellar spectroscopy. The system has a very high mass ratio, ∼9, which makes it a useful laboratory for studying the dependence of disk properties on stellar mass.

From the continuum alone, we estimate disk masses of 0.074 M⊙ and 0.028 M⊙ for sources 253-1536A and B, respectively, assuming canonical values for the temperature and dust opacity and an ISM gas-to-dust ratio of 100 (Williams & Cieza 2011). Under these assumptions, the disk-to-stellar mass ratios are ∼2% and ∼7%, respectively. This is particularly high for the M2 star, 253-1536B, but the dust is likely warmer than the canonical 20 K (possibly in both disks) given the high stellar mass, and therefore luminosity, of component A. Following the expected L^{1/4} dependence from Andrews et al. (2013), we might expect a factor of 2-3 increase in the average dust temperature and a corresponding decrease in the dust masses. These temperatures, masses, and mass ratios can only be estimates given the limited information, but they serve as a comparison with other millimeter wavelength disk observations.

The strong emission in multiple molecular lines suggests significant amounts of moderately warm and dense gas in both disks. The integrated line fluxes are only about a factor of 3-5 weaker in 253-1536B compared to the brighter millimeter component A, despite its much lower stellar luminosity. Radiation from the nearby primary member may be a significant factor in heating both disks and could potentially explain their roughly similar (to within ∼50%) line-to-continuum ratios. To compare the chemistry in the two disks, and also to measure their gas masses and gas-to-dust ratios, requires observations of optically thinner isotopologues (Williams & Best 2014).

Perhaps the most noteworthy result from these observations is that the projected disk rotational axes are highly misaligned, with an angle of 72° ± 20° to each other. As torques from the binary orbit act to align the disks on relatively short timescales (Lubow & Ogilvie 2000), the observed conditions are most likely a signature of their formation. Similar ALMA kinematic measurements of misalignment have recently been found in two other binary disk systems (Jensen & Akeson 2014; Salyk et al. 2014). These observations demonstrate that wide binaries do not form from the same co-rotating structure such as a massive disk or coherent cloud core (Bodenheimer et al. 2000).
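For concreteness, here is a hedged sketch of the optically thin mass estimate behind the quoted disk masses, M = F_ν d²/(κ_ν B_ν(T)). The flux below is a placeholder value (the actual fluxes are in Table 2, not reproduced here), and the opacity follows the common Beckwith et al. (1990) prescription rather than anything stated in the text.

```python
import numpy as np

# Optically thin gas+dust mass from a submillimeter continuum flux (cgs units).
h, k, c = 6.626e-27, 1.381e-16, 2.998e10
Msun, pc, Jy = 1.989e33, 3.086e18, 1.0e-23

nu = 350.5e9                                  # Hz (856 um)
T = 20.0                                      # K, canonical disk temperature
d = 400 * pc                                  # assumed Orion distance
F = 0.16 * Jy                                 # ~160 mJy, illustrative flux only
kappa = 0.1 * (nu / 1e12)                     # cm^2/g of gas+dust (Beckwith)

B = (2 * h * nu**3 / c**2) / np.expm1(h * nu / (k * T))  # Planck function
M = F * d**2 / (kappa * B)
print(f"M_disk ~ {M / Msun:.3f} Msun")        # ~0.07 Msun for these inputs
```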
Signatures of disorder from the early phases of star formation persist. One possibility is the fragmentation of a turbulent core (Offner et al. 2010; Tokuda et al. 2014). Numerical simulations show that binary formation in even weakly turbulent cloud cores is quantitatively different from the purely thermal case (Tsukamoto & Machida 2013). In turbulent cores, the direction of the angular momentum vectors varies with spatial scale, such that disks may form at different angles from each other and from the overall core rotation axis (Bate 2012). Alternatively, dynamical interactions of three or more protostars during the early Class 0-I phases may chaotically scramble the orbital axes. Non-hierarchical triple or small multiple systems with similar interstellar separations rapidly rearrange into hierarchical configurations consisting of a compact binary and distant companions or ejected members (Reipurth & Mikkola 2012; Reipurth et al. 2010). The observed decrease in multiplicity from Class 0 to I to main-sequence stars provides support for such dynamical evolution (Chen et al. 2013). In crowded regions, large disks may also assist in the capture of binaries, as proposed for the massive star system Cepheus A (Cunningham et al. 2009). A similar event is thought to have occurred within the last 500 years in Orion BN/KL (Bally et al. 2011; Goddi et al. 2011).

Stellar rotation and orbital axes are more tightly aligned in closer systems, a ≲ 40 AU (Hale 1994), than in those studied with ALMA to date. This is also seen in numerical simulations of cluster formation (Bate 2012). It would be interesting to search for disk alignment in such close binaries but, because the same proximity that might align the disks also tidally truncates them (Artymowicz & Lubow 1994; Andrews et al. 2010) and lowers their masses (Cieza et al. 2009), higher resolution and signal-to-noise observations than shown here will be required. At larger scales, one can imagine larger studies across star forming complexes providing new information on the velocity dispersion of protostars in different evolutionary states that can constrain timescales and the turbulent properties of the cloud from which they formed. As disk surveys continue in new ALMA observing cycles, including an expanded program of Orion proplyds, we can expect to routinely detect a much broader suite of molecules and transitions than has been observed in all but a handful of the brightest disks to date. This will open up new paths of exploration, not only for studying statistical disk properties such as masses, sizes, gas-to-dust ratios, and chemistry, but also for examining the dynamics of young star forming regions.

We thank the referee for a very thorough report; Kaitlin Kratter, Stella Offner, and Hideko Nomura for comments; and Eric Jensen and Rachel Akeson for communicating their results ahead of publication. J.P.W. is supported by funding from the NSF through grant AST-1208911. D.J. is supported by the National Research Council of Canada and by a Natural Sciences and Engineering Research Council of Canada (NSERC) Discovery Grant. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile. This work made use of Astropy, a community-developed core Python package for Astronomy (Astropy Collaboration et al. 2013).
Clinical features of patients with acute coronary syndrome during the COVID-19 pandemic

Although a reduction in hospital admissions of acute coronary syndrome (ACS) patients has been observed globally during the coronavirus disease 2019 (COVID-19) pandemic, the clinical features of those patients have not been fully investigated. The aim of the present analysis is to investigate the incidence, clinical presentation, and outcomes of patients with ACS during the COVID-19 pandemic. We performed a retrospective analysis of consecutive patients who were admitted for ACS at our institution between March 1 and April 20, 2020, and compared them with the equivalent period in 2019. Admissions for acute myocardial infarction (AMI) were reduced by 39.5% in 2020 compared with the equivalent period in 2019. Owing to the emergency medical services (EMS) of our region, all time components of ST-elevation myocardial infarction care were similar during the COVID-19 outbreak as compared with the previous year's dataset. Among the 106 ACS patients in 2020, 7 patients tested positive for COVID-19. A higher incidence of type 2 myocardial infarction (29% vs. 4%, p = 0.0497) and elevated D-dimer levels (5650 μg/l [interquartile range (IQR) 1905-13,625 μg/l] vs. 400 μg/l [IQR 270-1050 μg/l], p = 0.02) were observed in COVID-19 patients. In sum, a significant reduction in admissions for AMI was observed during the COVID-19 pandemic. COVID-19 patients were characterized by elevated D-dimer levels on admission, reflecting enhanced COVID-19-related thrombogenicity. The prehospital evaluation by EMS may have played an important role in the timely revascularization of STEMI patients. Electronic supplementary material: the online version of this article (10.1007/s11239-020-02340-z) contains supplementary material, which is available to authorized users.

COVID-19 first emerged in China, with further worldwide transmission leading to a pandemic outbreak [1-5]. A large cluster of COVID-19 occurred in northeast France as early as mid-February, and a substantial number of critically ill patients started to be transferred to our institution (Nouvel Hôpital Civil, Strasbourg, France) from the beginning of March. As a way to contain the disease, the government established stringent lockdown measures as of March 17, 2020. Although a drastic reduction in patient admissions for acute coronary syndrome (ACS) during the confinement was noticed by health workers [6,7], the clinical features of those patients have not been fully investigated [8]. In the present study, we aimed to evaluate the incidence, clinical presentation, serial laboratory tests, and clinical outcomes of patients with ACS during the COVID-19 pandemic.

Methods

We conducted a single-center, observational survey to collect data on all consecutive patients who underwent emergency angiography for suspected ACS between March 1 and April 20, 2020, and compared them with the equivalent period in 2019. ACS was defined as ST-segment elevation myocardial infarction (STEMI), non-ST-segment elevation myocardial infarction (NSTEMI), or unstable angina pectoris. Symptom-onset-to-first-medical-contact (FMC) time was defined as the time from the patient-reported onset of chest discomfort to the time of FMC. FMC-to-device time was defined as the time from FMC to the successful wire crossing time during percutaneous coronary intervention. Door-to-device time was defined as the time from hospital arrival to the successful wire crossing time.
Catheterization laboratory (cath lab) arrival-to-device time was defined as the time from patient arrival in the cath lab to the successful wire crossing time. Based on the World Health Organization guidance, a "confirmed case" of COVID-19 was defined by a positive reverse-transcriptase-polymerase-chain-reaction assay of a specimen collected by nasopharyngeal swab. COVID-19 testing was performed at the treating team's discretion. The investigation conforms with the principles outlined in the Declaration of Helsinki.

Statistical analysis

Categorical variables are expressed as numbers (percentages), and continuous variables are expressed as mean ± SD or median and interquartile values. Differences between the two groups were assessed with χ² tests or Fisher's exact tests for categorical variables. The unpaired Student's t-test was used to analyze continuous variables with normal distributions, and the Wilcoxon test was used to analyze continuous variables with skewed distributions. Propensity score matching was used to limit the risk of error and bias between the COVID and non-COVID patients. The 3:1 propensity score model was developed using logistic regression and included the following covariates: age, sex, STEMI, and NSTEMI. A nearest-neighbor algorithm was used to match patients with and without COVID-19 in a 3:1 ratio, with a caliper width equal to 0.2 of the standard deviation of the logit of the propensity score. The propensity-score-matched cohort included a total of 20 patients (5 COVID-19 patients and 15 non-COVID-19 patients). p values < 0.05 were considered to indicate statistical significance. All analyses were performed using JMP 13 software (SAS Institute, Cary, NC) or R version 3.6.3 [9].

Results

A total of 106 patients were enrolled from the 2020 period and compared with the 174 patients from the equivalent period in 2019 (Table 1). The number of patients with acute myocardial infarction (AMI) was dramatically reduced, from 159 in 2019 to 92 in 2020 (a 39.5% reduction), whereas the number of patients with STEMI was similar (Fig. 1). Likewise, the number of patient admissions in the cardiology department was reduced by 42% after the French lockdown (Supplementary Fig. 1).

Discussion

In the present study, the COVID-19 outbreak was associated with a 40% decrease in AMI. Similar findings have been noted in northern Italy and in the United States [10,11]. The identification of the mechanisms leading to the reduction in admissions for AMI is beyond the scope of the present work. Nevertheless, recent reports suggested that the decreased number of ACS patients was due to the fear of exposure to COVID-19-affected subjects at hospital admission, or to a true reduction in the incidence of ACS as the potential result of low physical stress during the social containment [6,12,13]. The delay in STEMI reperfusion during the COVID-19 pandemic has been explained by the fact that the emergency medical system was focused on COVID-19 and a large number of healthcare workers were relocated to manage the pandemic, resulting in increased mortality and complications in STEMI patients [6,14,15]. In contrast, the delay in STEMI care was minimal in our center, suggesting that healthcare system effectiveness was maintained despite the pandemic.
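The following is a hedged sketch of the matching procedure described in the statistical analysis above, run on synthetic data; the covariates and the 3:1 caliper rule follow the text, while everything else (prevalence, sample size) is invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# 3:1 nearest-neighbor propensity matching with a caliper of 0.2 SD of the logit.
rng = np.random.default_rng(0)
n = 280
X = np.column_stack([
    rng.normal(63, 12, n),            # age
    rng.integers(0, 2, n),            # sex
    rng.integers(0, 2, n),            # STEMI
    rng.integers(0, 2, n),            # NSTEMI
])
covid = rng.random(n) < 0.05          # toy treatment indicator

logit = LogisticRegression(max_iter=1000).fit(X, covid).decision_function(X)
caliper = 0.2 * logit.std()

matched, used = {}, set()
for i in np.flatnonzero(covid):
    d = np.abs(logit - logit[i])
    candidates = [j for j in np.argsort(d)
                  if not covid[j] and j not in used and d[j] <= caliper]
    matched[i] = candidates[:3]       # up to 3 controls per case
    used.update(matched[i])
print({i: len(ctrls) for i, ctrls in matched.items()})
```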
Owing to the emergency medical services (EMS) of our region, more than half of …

(Table 1 legend: values are n (%), n/N (%), mean ± SD, or median (interquartile range). Abbreviations: AF, atrial fibrillation; ARDS, acute respiratory distress syndrome; BMI, body mass index; BNP, brain natriuretic peptide; CABG, coronary artery bypass grafting; COPD, chronic obstructive pulmonary disease; COVID-19, coronavirus disease 2019; Cr, creatinine; CRP, C-reactive protein; eGFR, estimated glomerular filtration rate; Hb, hemoglobin; HDL-C, high-density lipoprotein cholesterol; ICU, intensive care unit; LDL-C, low-density lipoprotein cholesterol; LVEF, left ventricular ejection fraction; MI, myocardial infarction; NSTEMI, non-ST-segment elevation myocardial infarction; PCI, percutaneous coronary intervention; POBA, plain old balloon angioplasty; STEMI, ST-segment elevation myocardial infarction; VF, ventricular fibrillation; VT, ventricular tachycardia; WBC, white blood cell.)

In the present study, patients who tested positive for COVID-19 tended to have less chest pain and more type 2 MI. Prior investigations suggested that those patients may have hypoxemia or hypotension along with intense systemic inflammation, which may cause an oxygen supply-demand imbalance in the heart, especially when underlying CAD exists [16-18]. Moreover, COVID-19 patients were clearly characterized by elevated D-dimer levels on admission in the present study, reflecting enhanced COVID-19-related thrombogenicity [19-22]. Although the increased thrombotic susceptibility of COVID-19 patients is far beyond the scope of this study, prior studies underlined that the higher thrombotic burden in the acute phase of COVID-19 relies on pro-inflammatory cytokine/chemokine release [23], increased endothelial dysfunction/damage, and the potential development of sepsis-induced coagulopathy in severe cases, all promoting coagulation activation. A recent observational study including 115 STEMI patients clarified that COVID-19 patients had elevated D-dimer levels as well as a higher thrombotic burden compared to non-COVID patients [24]. Our study described a lower PCI rate and a higher incidence of venous thromboembolism and ARDS in the COVID-19 patients (Table 2), suggesting that those complications may have led COVID-19 patients to present with ACS-like symptoms. These insights may pave the way toward a novel diagnostic perspective for ACS patients with concomitant COVID-19.

Study limitations

We acknowledge the following limitations. First, the analyses were performed on the basis of a single-center data set with uncertain generalizability. Second, owing to the retrospective nature of this study, there were inherent limitations related to confounding by known or unknown factors. Third, the study period was limited to 7 weeks based on the pandemic period in our region; the use of such a limited period for data collection may represent a source of potential bias. Fourth, the effect of the low event rate might have been more pronounced when analyzing small subgroups. Fifth, the incidence of diagnostic testing was only 27% in the non-COVID-19 group, suggesting that certain asymptomatic COVID-19 patients may have been misclassified. Sixth, in a large proportion of COVID-19 patients complicated by AMI, the event was not due to coronary obstruction but to increased thrombogenicity and/or inflammation; the true effects of COVID-19 on the coronary arteries should be identified in a larger cohort.

Conclusions

In conclusion, a significant reduction in admissions for AMI was observed during the COVID-19 pandemic.
The prehospital evaluation by EMS may have played an important role in the timely revascularization of STEMI patients. Further studies are needed to establish a dedicated diagnostic pathway for ACS patients with COVID-19, aimed at minimizing healthcare providers' risk of infection.
Design of a Projectile-Borne Data Recorder Triggered by Overload

The projectile-borne data recorder is used to measure and record the position and attitude data of exterior ballistic flights, which experience high overload during the launching process. The navigation algorithm can be optimized by analyzing the data stored in the recorder. As the primary means of acquiring the navigation data, the micro inertial measurement unit (MIMU) is an inevitable part of the projectile equipment; however, its mechanical structure can hardly bear the high overload during operation. In view of the above problems, a novel projectile-borne data recorder triggered by overload is designed in this paper. The recorder has the navigation system powered only after the projectile leaves the barrel, so that data recording is activated after the high launch overload. Furthermore, the viability of the MIMU under high overload is guaranteed through a specific means of system encapsulation. In the proposed design, overload switch redundancy and power supply redundancy are adopted to improve reliability. The proposed design is tested with practical experiments, and the results show that the proposed recorder can be effectively triggered by overload and that the supply voltage is stable, which helps record reliable data for the projectile.

Introduction

The intelligent projectile is among the guided weapons that are fired by artillery and strike the target accurately through searching, guidance, and control during flight [1]. In flight, it is crucial to measure the attitude angles, namely the heading, pitch, and roll, in real time. Used in the EX-171 extended range guided munition [2], the combination of the micro inertial measurement unit (MIMU) and GPS is a popular scheme for attitude measurement and control [2,3], where the MIMU, consisting of tri-axial accelerometers and gyroscopes, is a sensor unit for measuring acceleration, tilt, impact, vibration, and rotation. As micromechanical inertial sensors have etched movable structures on silicon wafers, working in an impact environment is a great challenge for the MIMU [4]. In particular, the spacing of the comb tooth structure is very small (about 10 µm) in high-precision micro electro-mechanical systems (MEMS) gyroscopes, and the comb tooth structure undergoes resonant vibration with high frequency and large amplitude (1-3 µm) when the power is on, so the device is more likely to fail under a great impact [5]. For the projectile, the impact acceleration is large, with an intensity of more than 15,000 g and a duration of more than 10 ms during the launch phase [6]. In this circumstance, there are two outstanding problems: one is that the MIMU is easily damaged under the big impact, resulting in the failure of the inertial sensors; the other is that the experimental data are needed for post-analysis and system improvement on the basis of proper operation of the sensors. In order to ensure the working reliability of the MIMU, researchers and engineers all over the world have completed various studies to solve the above-mentioned two issues. The first issue can be mitigated through the optimal design of the gyro structure [7-11], such as the optimal design of the folding beam [9,10] and the limit structure design of the mass block [8]. Such measures can only ensure reliable operation in the unpowered state and cannot guarantee normal operation of the device when it is powered [10].
For the second issue, there are three main methods to obtain the flight data of the projectile. One is the external measurement method [12,13], which measures the projectile speed with equipment external to the projectile, such as a high-speed camera [12] or Doppler radar [13]. Since the external measurement method relies on the external equipment of the shooting range, and the rotational speed is also calculated off the projectile, it cannot implement autonomous measurement or guidance for the projectile. The second method is telemetry [14], which refers to the installation of a rate sensor on the projectile, with the flight speed information sent to a ground station over a transmission link. The problem with telemetry [14] is the great cost of equipment installation, debugging, and transmission. The third method is to equip the projectile with a recorder. Compared with external measurement and telemetry, the flight recorder [15-20] uses onboard sensors to measure the flight data instead of external equipment, which is a direct and low-cost way to obtain real-time data.

This paper aims to design a flight recorder for the projectile, to not only record the micro inertial measurement unit (MEMS-IMU) data in real time, but also protect the MEMS-IMU from being damaged under the great impact. The proposed design is a FLASH-based recording device that is enabled by overload so as to delay the power-on of the MIMU and avoid impact damage to the sensors. Furthermore, the proposed recorder features redundancy designs that improve system reliability. On the one hand, a redundant power supply scheme with dual lithium batteries and a parallel farad capacitor is designed to guarantee the delayed power-on function and to alleviate the drop of the lithium battery supply voltage in the projectile firing stage. On the other hand, as the delayed power-on function of the recorder depends on the detection of the overload, the recorder is made redundant with two overload detection schemes, namely, the mechanical switching mode and the electronic switching mode based on an accelerometer, to enhance the overload-detecting reliability. In addition, in order to further protect the system, its strength is increased by means of epoxy resin sealing. Finally, the function of the proposed design is verified by laboratory experiments and practical projectile tests.

The organization of this paper is as follows. The overall design scheme is given in Section 2. Then, Section 3 presents the details of the hardware, including the data recording module, the impact detection module, the power supply module, etc. The laboratory experiments on the recorder's functions and the projectile tests with a detailed analysis are presented in Section 4. Section 5 concludes the paper.

Design Scheme

The block diagram of the designed recorder is shown in Figure 1. It mainly consists of three modules: the overload detection module, the recording module, and the power supply module. The performance of the recorder is verified by the MIMU. The working process among the modules is detailed below. (1) The power supply module supplies only the overload detection module and the recording module at the beginning. If an impact acceleration of a certain magnitude is detected in the overload detection module, a trigger signal is created and goes through the delay circuit to the relay, which then supplies power to the MIMU. The delay time is determined based on specific requirements.
(2) The data recording module consists of the data interface unit, the processor, and the nonvolatile data storage unit. The data interface receives the MIMU measurement through RS422 and sends the data to the processor via the bus. The processor records the sensor data into the nonvolatile storage unit through the serial peripheral interface (SPI) bus. (3) After the experiment, the experimental data is read back from the nonvolatile memory by the processor and sent to the computer through RS422.

Processor STM32

As the core of the projectile-borne recorder, the processor is responsible for data storage and reading. Considering the performance and programming simplicity of the processor, the STM32F411 is selected as the main control unit [21,22]. The processor has the following characteristics:

1. The working frequency is up to 168 MHz;
2. The processing speed is fast;
3. Floating-point calculations can be carried out;
4. An internal integrated digital signal processing (DSP) instruction set, so the development speed is fast;
5. Large storage space, with strong expandability of the storage space;
6. Rich peripherals, with no need to design separate external drivers;
7. An internal integrated hardware debugging function, making it convenient to check the internal status during debugging;
8. Highly integrated units such as a cyclic redundancy check (CRC) computing unit, power monitoring controller, watchdog timer, and clock controller.

Nonvolatile Storage Unit

The memory module is mainly used to record various parameters during the projectile flight [23-30], which requires large storage, a fast storing speed, and stable performance; as such, NOR Flash is selected for storage. Compared with parallel-interface NOR Flash, SPI-interface NOR Flash has the advantages of small size, a simple interface, and ease of use, making it especially suitable for a compact application environment. In this research, Micron Technology's NOR Flash N25Q00AA is employed, with 1 Gb of storage capacity; it supports SPI serial communication as fast as 108 MHz. Furthermore, it can be read or written page by page, at 256 bytes per page, in 0.5-5 ms.

Data Storage Operation

The data storage module reads the data from the MIMU and sends it to the nonvolatile storage unit. For 32-byte data frames at a 1 kHz output rate, a 1 Gb storage capacity can hold about 70 min of experimental data. That is to say, in 1 ms the processor needs to read 32 bytes of data via RS422 and store them in one page of the Flash. If the RS422 rate is 460,800 bps, the read time of the 32-byte data is about 0.7 ms, and the time to store one page in the Flash is 0.5 ms. Performed serially, a full read-and-store cycle would therefore exceed the 1 ms update period and corrupt the data storage. Regarding this problem, interrupt-driven reception using serial-port direct memory access (DMA) and "ping-pong" storage have been adopted in this paper. On the one hand, the microcontroller is set to the serial DMA interrupt mode: when the serial port receives 32 bytes of data, an overflow interrupt is generated, and the microcontroller transfers to the interrupt routine for processing. On the other hand, two 256-byte data cache areas are created, namely buffer A and buffer B. When the serial port receives data, it is first stored in buffer A; when buffer A is full, the microcontroller transfers the data to buffer B, from which the Flash receives it over SPI and writes it into one page, while buffer A continues receiving new data. This approach greatly reduces the load on the CPU, allowing it to complete additional calculations, and also improves the utilization of the Flash storage unit.
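The timing and capacity figures quoted above can be reproduced with simple arithmetic (our check, assuming 10 bits per byte on an 8N1 serial line):

```python
# Sanity-check the quoted UART frame time and total recording capacity.
frame_bytes = 32
baud = 460_800                      # RS422 rate, 10 bits per byte assumed (8N1)
uart_ms = frame_bytes * 10 / baud * 1e3
print(f"UART time per frame : {uart_ms:.2f} ms")      # ~0.69 ms

flash_bits = 1 * 2**30              # 1 Gb N25Q00AA
minutes = flash_bits / 8 / frame_bytes / 1000 / 60    # 1 kHz frame rate
print(f"recording capacity  : {minutes:.0f} min")     # ~70 min
```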
This approach greatly reduces CPU usage, allowing the processor to complete additional calculations, and also improves the utilization of the Flash storage unit.

Impact Detection Module

In this paper, two sensors are used to detect the impact acceleration. One is the mechanical overload switch, which uses the inertia of a mass block to cut a wire during impact and thereby create an on/off signal; the other is the electronic overload switch, built around a wide-range acceleration sensor, which gives an on/off signal when the sensor output exceeds a designed threshold.

Mechanical Overload Switch

The mechanical overload switch adopts a typical "spring-mass" structure, composed of a mass block, spring, guide sleeve, driving mechanism, contacting mechanism and so on. Under an acceleration overload, the mass block moves, generating a displacement and an inertia force (F = ma) that compresses the spring, overcomes the friction force, and actuates the contacting mechanism, completing the conversion of the overload into an output signal. The structure of the mechanical overload switch is shown in Figure 2. In Figure 2, the shell and the central cylindrical structure act as the guide sleeve, which concentrates the impact overload in one direction. The pushing mechanism adopts a shear-pin structure. The output mechanism is a metal wire fixed at both ends; in the normal state the wire conducts, and under the overload impact the wire is cut and the circuit is disconnected. According to the mechanical analysis, the shear force is proportional to the mass of the block and the tensile strength of the spring. The metal wire provides the resistance of the system, and all of the mechanisms work together to ensure the overload switch is not falsely triggered under small impact conditions. In this research, the mass block of the mechanical overload switch is 12 g; the spring has a wire diameter of 1 mm, an outer diameter of 8 mm, 15 turns, and a height of 35 mm; the spring is made of 65Mn steel with an elastic coefficient k of about 2200 N/m; and the output wire is AWG20 copper with a 0.52 mm inner diameter. The overall view of the mechanical overload switch is shown in Figure 3. In brief, the mechanical overload switch makes a state transition from "on" to "off" when an impact is detected.

Electronic Overload Switch

The electronic overload switch senses the impact acceleration with an accelerometer and compares the value with a threshold, outputting a high level if the detected impact acceleration exceeds the threshold and a low level otherwise. In this paper, the ADXL1004 from ADI is employed as the acceleration sensor; its range is -500 to +500 g, and it can survive impacts of up to 10,000 g. When the detected impact acceleration exceeds 1000 g, the switch outputs a high level; otherwise it outputs a low level. Furthermore, against possible larger impacts, a stainless-steel protective case was designed and the system is vacuum sealed; the sealing adhesive has a certain elasticity that acts as a buffer to alleviate the impact on the accelerometer. To sum up, if an impact is detected, the electronic overload switch outputs a pulse signal whose width is the period during which the impact exceeds 1000 g (the detection logic is sketched below).
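As a software illustration of the electronic switch's logic (in the actual design the comparison is performed in analog hardware ahead of the signal conditioner), the sketch below converts an ADC reading to acceleration and tests it against the threshold. The reference voltage, ADC resolution, zero-g bias and sensitivity are assumed, typical values, not figures from the paper.

#include <stdint.h>

#define VREF_MV       3300.0f   /* ADC reference voltage, assumed 3.3 V      */
#define ADC_FULLSCALE 4095.0f   /* 12-bit ADC, assumed                       */
#define ZERO_G_MV     1650.0f   /* mid-supply zero-g output, assumed         */
#define SENS_MV_PER_G 3.8f      /* ADXL1004-class sensitivity, assumed       */
#define THRESHOLD_G   1000.0f   /* trigger level stated in the text          */

/* Return 1 (high level) while |acceleration| exceeds the threshold. */
int overload_level(uint16_t adc_counts)
{
    float mv = adc_counts * VREF_MV / ADC_FULLSCALE;
    float g  = (mv - ZERO_G_MV) / SENS_MV_PER_G;
    return (g > THRESHOLD_G) || (g < -THRESHOLD_G);
}

Sampled fast enough, successive calls to overload_level reproduce the pulse whose width equals the time the impact stays above the threshold.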
Signal Conditioner

The two impact-detection means described above differ in their outputs: the mechanical overload switch provides an on/off signal, while the electronic overload switch gives a pulse signal. A signal conditioning circuit is therefore designed to transform these signals into the control signal for the relay. The relay stays open if no impact is detected, and closes otherwise. Figure 4 shows the circuit diagram of the signal conditioner. The upper part is the mechanical overload conditioning circuit, and the lower part is the electronic overload conditioning circuit. In the upper part, MOS is short for mechanical overload switch; when no overload is detected, it outputs a low-level signal at point A. Otherwise, the overload impact cuts the lead, V supply charges C1 through R1, and the voltage at point A rises from low to high. The Schmitt trigger then reshapes the shifted signal into a digital signal, which forms the mechanical overload switch trigger signal after the D flip-flop. In the lower part, when an overload is detected the switch outputs a high-level signal, which charges C2 through R2; at point B the signal shifts from low level to high level, and the shifted signal is reshaped by the Schmitt trigger as the output signal of the electronic overload switch. The two reshaped signals then pass through the OR gate to create the control signal of the relay: the relay is closed, and the MIMU is powered by V supply. The RC delay circuit is built from the same resistors and capacitors in both the upper and lower circuits, and the delay time is set by the resistance and capacitance values; for a capacitor charging toward V supply, the time to reach the Schmitt-trigger threshold V_th is t = -RC ln(1 - V_th/V supply). To improve the detection capability, an OR operation is applied to the two indicating signals: if an overload is detected by either switch, the relay is closed and the MIMU is powered.

Power Module

Lithium batteries have the advantages of high energy density, small volume, high rated voltage and low self-discharge rate, which makes them suitable for volume-limited applications. We chose a square lithium-ion polymer rechargeable cell to supply power for the recorder. The lithium-ion battery has a nominal voltage of 3.7 V, a maximum charge voltage of 4.2 V, and a nominal capacity of 1800 mAh. When the MIMU is powered, the supply current is about 500 mA, which means one lithium battery can supply power for about 3.6 h (1800 mAh / 500 mA), long enough to cover the experiment reliably. Research on the performance of lithium batteries under impact has shown that a battery struck on its long-wide face is less stable than one struck on its long-high face. Therefore, during installation the height direction of the lithium battery is set opposite to the impact direction. Besides, considering the voltage drop of lithium batteries under impact, a redundant design with two sets of lithium batteries is adopted in this paper. Furthermore, since the voltage of a charged farad capacitor is less affected by impact, despite its low energy density, a farad capacitor is connected in parallel within the lithium-battery redundancy design (a rough holdup estimate is sketched below).
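A back-of-the-envelope check of this choice: the holdup time of the capacitor follows from t = C * dV / I. The 5 F value is the one stated for this design; the load current of the detection and conditioning circuits and the tolerable voltage droop are illustrative assumptions, not figures from the paper.

#include <stdio.h>

int main(void)
{
    double c_farad = 5.0;    /* parallel farad capacitor (per the design)   */
    double droop_v = 1.0;    /* tolerable supply droop, assumed             */
    double load_a  = 0.05;   /* detection/conditioning load, assumed 50 mA  */

    /* t = C * dV / I for an approximately constant-current discharge */
    printf("holdup time = %.0f s\n", c_farad * droop_v / load_a);  /* 100 s */
    return 0;
}

Even under conservative assumptions the holdup time is on the order of 100 s, several orders of magnitude longer than the sub-millisecond firing transient, so a modest farad capacitor comfortably bridges the battery-voltage dip.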
When the lithium battery voltage decreases, the farad capacitor discharges to supply the overload detection module and the signal conditioning module in the case of a voltage-deficiency emergency. The farad capacitance selected in this design is 5 F. The diagram of the redundant battery pack is shown in Figure 5. As shown in Figure 5, the lithium batteries Li1 and Li2 and the capacitor C supply power in parallel, and the Schottky diodes D1, D2 and D3 prevent back-charging currents if a lithium battery is damaged under impact. R is the charging-current limiting resistance of the farad capacitor. In addition, the two sets of lithium batteries are each equipped with a charging chip, making the recorder reusable. V supply is connected to the supply port through a relay, which is controlled by the overload detection module. To adapt to the various supply voltages of MIMUs, a DC-DC conversion module with adjustable output voltage is added between the supply port and the MIMU.

Vacuum Sealing

To further protect the assembled recorder from great impact, the system is vacuum sealed with poured epoxy resin adhesive [31-33], as cavities in the system might lead to device failures such as detachment, collision, fracture or shearing under great impact. The vacuum sealing process is as follows: after screening, the selected systems are placed in the designed stainless-steel cases, the height direction of the lithium batteries is aligned with the impact direction, the cavity of the case is filled with epoxy resin adhesive, and a vacuum operation is carried out to avoid bubbles. Practical tests demonstrate that the vacuum sealing can effectively protect the circuit and battery from damage during impact.

Experiments and Results

In this paper, laboratory experiments and a practical projectile test are carried out to verify the performance of the designed recorder. First, a Machete hammer is used in the laboratory to generate the impact acceleration and simulate the launching environment; these tests validate the stability of the power supply module, the reliability of the impact detection and the effectiveness of the recording function. On this basis, it is shown that overload-triggered delayed power-on effectively protects the MIMU from impact damage. Second, the designed recorder was installed in a projectile and tested in a physical launch; the recorded flight data of the projectile demonstrates the success of the proposed design.

The Machete Hammer Experiment

The Machete hammer is a device that simulates the gun-launch process and produces a large impact [34]. The device is mainly composed of the hammer, handle, auxiliary tools, frame and other parts. Because of the specially designed structure of the Machete hammer, its impact acceleration is related to the number of teeth on the hammer, so the acceleration at strike can be calculated from the tooth count. This method is easy to operate and has a low cost [34]. The Machete hammer used in the test is shown in Figure 6.

Power Parallel Scheme Verification

The power supply system was manually powered on in the laboratory, and the Machete hammer test was carried out with the large capacitor disconnected and connected, respectively. In this test, the parallel capacitor is a farad capacitor with a capacitance of 5 F.
When the farad capacitor is disconnected, the parallel voltage of the two lithium batteries drops from 4.1 V to 2.5 V during the impact produced by the Machete hammer. With the capacitor connected, the waveform was more stable and the voltage drop was only 0.2 V. Because the trigger circuit, the delay circuit and the accelerometer circuit mentioned above are all powered by the parallel circuit, a large impact-induced voltage drop could cause the switch to fail. The test results show that the parallel connection of the lithium batteries and the capacitor maintains a stable voltage and guarantees the reliability of the power supply under high overload.

Reliability Verification of the Overload Switch

In this experiment, the status of the mechanical switch and the accelerometer switch was evaluated under different impact conditions induced by the Machete hammer; different tooth numbers of the Machete hammer correspond to different impact strengths. In the projectile-launch test, the overload is about 15,000 g. In the design of the overload switch, the overload trigger thresholds of the accelerometer switch and the mechanical switch are about 10,000 g and 13,500 g, respectively. To facilitate observation and recording, an LED lamp was added to the circuit to indicate whether the overload switch is closed. Table 1 presents the closure condition of the overload switch under different tooth numbers. As shown in Table 1, the accelerometer switch is triggered more easily than the mechanical switch. Neither type of switch is triggered when the impact acceleration is below 9500 g, which shows that the storage system will not be triggered accidentally by vibration during transportation and that the redundant two-switch design increases reliability. Comparative data analysis thus verifies the reliability of the overload switch design.

Integrated Test

In the integrated test, two MIMU sets were tested, one equipped with the proposed design and the other without; the necessity of the overload trigger is verified through the outputs of the MIMUs. In the set without the recorder, a sealed MIMU was directly fixed on the Machete hammer and powered on at the start of the test. As the number of Machete hammer teeth increased, the impact grew. It was found that the impact of a Machete hammer with 15 teeth caused abnormal MIMU output and even broke the uniaxial gyro. This test proved that a powered-on MIMU cannot survive the high overload, and thus proved the necessity of delayed power-on. In the other set, the tested MIMU was connected to the proposed data recorder to test the delayed power-on function; the delay time was set to 1 s through the RC circuit. Furthermore, in order to quantify the impact magnitude, a charge-type impact sensor was installed on the Machete hammer together with the MIMU and data recorder. Its analog output was collected by a high-speed data acquisition card at 500 kHz, and the sampling of the card and the storage system was synchronized through a sync signal. Figure 7 shows the impact curve with 17 Machete hammer teeth; the width of the impact pulse was 500 µs, and the maximum impact was as high as 30,000 g. Figure 8 shows the synchronization between the NI acquisition card and the data recorder.
It can be seen that the data recorder began to collect the output of the MIMU about 0.95 s after the impact was detected, and that the pulse collected by the impact sensor indicates the occurrence of the impact. The recorder recorded the MIMU sensor data under the impact condition, which proved that the delayed power-on was effective. Furthermore, as shown in Table 2, the MIMU's performance before and after the impact was compared. The MIMU's outputs show no significant difference before and after the impact, which illustrates the effectiveness of the power-on delay in protecting the sensors.

Launch Test

Finally, we completed a practical projectile launch to test the proposed design. The procedure was as follows: 1. Before the test, the MIMU and the data recorder were fixed and sealed in the stainless-steel case to withstand the high overload; the case was then installed in the fuse chamber of the experimental projectile. 2. Before the launch, the overload-triggering circuit and the delay circuit were enabled, but not the MIMU; that is, the overload detection was initiated. 3. The projectile was launched. The designed recorder was expected to power the MIMU 1 s after the projectile left the barrel and to collect the MIMU's output in its storage. 4. The landed projectile was recovered and the data was read from the recorder.

Figure 9 shows the data of the three-axis gyros and accelerometers recorded in the Flash. During the launch, the barrel pressure was 375.1 MPa, and the overload was about 16,000 g. As shown in Figure 9b, the output of the z-axis gyro Gz reached 5000°/s after launch, demonstrating the high-speed rotation of the projectile. In addition, the signal curves in Figure 9b reveal an irregular periodic characteristic in the three axial gyro outputs, which is the typical nutation feature of projectile flight. The recorded data fully validates the effectiveness and superiority of the proposed high-overload data recorder.

Conclusions

This paper presents a projectile-borne data recorder that is triggered by overload. The design uses a mechanical switch and an accelerometer switch to enable the MIMU after the projectile leaves the barrel, so as to protect the inertial sensors. The parallel connection of lithium batteries and a large capacitor ensures a reliable power supply voltage for the system. The reliability of the overload switch and the accuracy of the projectile-borne recorder are verified through the Machete hammer tests and the projectile launch test. The recorded data demonstrates that the design can successfully record the output of the MIMU in a high-overload environment. With the recorded inertial data of the gyros and accelerometers, the dynamic motion and attitude of in-flight projectiles can be analyzed to provide inputs for guidance and system optimization.
Resolving the cosmic X-ray background with a next-generation high-energy X-ray observatory

The cosmic X-ray background (CXB), which peaks at an energy of ~30 keV, is produced primarily by emission from accreting supermassive black holes (SMBHs). The CXB therefore serves as a constraint on the integrated SMBH growth in the Universe and on the accretion physics and obscuration in active galactic nuclei (AGNs). This paper gives an overview of recent progress in understanding the high-energy (>~10 keV) X-ray emission from AGNs and the synthesis of the CXB, with an emphasis on results from NASA's NuSTAR hard X-ray mission. We then discuss remaining challenges and open questions regarding the nature of AGN obscuration and AGN physics. Finally, we highlight the exciting opportunities for a next-generation, high-resolution hard X-ray mission to achieve the long-standing goal of resolving and characterizing the vast majority of the accreting SMBHs that produce the CXB.

Introduction

The cosmic X-ray background (CXB) was first discovered by the earliest X-ray astronomical rocket flights (Giacconi et al. 1962), and over the past 50 years its origin has been a major research area in high-energy astrophysics. The CXB is now known to be primarily composed of emission from individual active galactic nuclei (AGNs; e.g., Hickox & Alexander 2018), whose X-ray emission provides key constraints on the cosmic evolution of supermassive black holes (SMBHs). These SMBHs play an important role in the evolution of galaxies and large-scale structure (see Civano et al. 2019 White Paper). The CXB spectrum peaks at an energy of ≈30 keV, indicating a significant contribution from heavily obscured AGN whose emission is attenuated at lower energies and shows strong signatures of Compton reflection. In recent years, great progress has been made (particularly with the Chandra, XMM-Newton, Swift, and NuSTAR observatories) in understanding the high-energy emission from AGN, with subsequent insights into the process of SMBH accretion, the impact of SMBHs on galaxy evolution, and the ultimate composition of the CXB. However, the challenges of observations at hard (>10 keV) X-ray energies mean that direct knowledge of the AGN that produce the bulk of the CXB has remained elusive. In this White Paper, we present an overview of our current understanding of the origin of the CXB and the nature of hard X-ray emission from AGN, with a focus on results from NuSTAR. We next discuss current challenges and open questions regarding the nature of AGN obscuration and AGN physics. Finally, we present some of the exciting opportunities for progress with a future high-resolution hard X-ray mission.

2 Black hole evolution and the origin of the CXB

Prior to the launch of NuSTAR, our understanding of the >10 keV CXB came almost entirely from wide-field observatories with limited resolving power (see e.g., Gilli, Comastri & Hasinger 2007; Ajello et al. 2008). In soft (<10 keV) X-rays, sensitive, high-resolution observatories, specifically Chandra and XMM, provided detailed measurements of the individual AGN that make up the CXB, resolving up to 80% of the <2 keV CXB in the deepest observations (Hickox & Markevitch 2006; Xue et al. 2012). Chandra and XMM surveys and associated multiwavelength observations yielded constraints on the evolution of both the number counts of AGN (e.g., Luo et al. 2017) and their X-ray luminosity function (XLF; e.g., Aird et al. 2015a).
Together, these observations have informed synthesis models of the CXB, in which an evolving population of AGN is responsible for the integrated CXB spectrum (e.g., Gilli, Comastri & Hasinger 2007; Ueda et al. 2014; Aird et al. 2015b; Jones et al. 2017; Ananna et al. 2019; Figure 1). CXB synthesis models have led to a number of important conclusions about the AGN population, most strikingly that the total CXB spectrum is peaked toward higher energies than most of the individually detected AGN, indicating the presence of a large "hidden" population of heavily obscured (Compton-thick, N_H > 10^24 cm^-2), X-ray hard AGN (e.g., Gilli, Comastri & Hasinger 2007). The major challenge in interpreting CXB synthesis models has come from the limited observational constraints on the hard (>10 keV) X-ray emission of individual AGN. Until 2012, some of the most sensitive observations came from the Swift Burst Alert Telescope (BAT) and INTEGRAL observatories, which are able to obtain detailed spectroscopy at energies between 14 keV and as high as 195 keV and so provided a valuable picture of the high-energy emission of bright, nearby AGN.

[Figure 1 caption: Obscured sources dominate the high-energy peak of the CXB, with a significant contribution from Compton-thick AGN (thick red line). Right: Distributions in redshift and X-ray luminosity for current hard X-ray surveys by Swift/BAT and NuSTAR and the range that could be probed by the planned X-ray telescope HEX-P, assuming a similar shallow-medium-deep survey strategy.]

However, Swift/BAT's and INTEGRAL's limited sensitivity means that they can only detect the very most luminous sources beyond the local Universe (z ~ 0.6). In 2012, the launch of NuSTAR, the first focusing X-ray observatory at energies >10 keV, enabled a >100-fold increase in sensitivity, allowing us to probe a more complete population of AGN as well as its evolution with redshift. To capture this AGN population, NuSTAR has carried out an extragalactic survey program consisting of several blank fields of varying depths and areas that cover well-studied multiwavelength survey regions (e.g., COSMOS, Extended Chandra Deep Field South, Extended Groth Strip, UKIDSS Ultra Deep Survey, Hubble Deep Field North). The widest-area component is the Serendipitous Survey, which searches for sources in the fields of pointed NuSTAR observations and involves dedicated spectroscopic follow-up. Together, these surveys have yielded ~1000 hard-X-ray-selected AGN and have led to the most precise measurements to date of the hard X-ray flux distribution (log N-log S; Harrison et al. 2016) and X-ray luminosity function (Aird et al. 2015a). An ongoing NuSTAR survey program extends this analysis by targeting known X-ray bright sources in wider soft X-ray fields (e.g., XBoötes and Stripe82X). NuSTAR observations have also dramatically advanced our understanding of the connections between obscuration, accretion physics, and the high-energy X-ray emission from AGNs. A targeted survey of nearby obscured systems has led to key constraints on the nature of X-ray reprocessing from the obscuring "torus" (e.g., Gandhi et al. 2014; Marchesi et al. 2018, 2019; Baloković et al. 2018; Lanz et al. 2019), indicating a potentially wide range of covering factors among obscured AGN at a given luminosity. Similar observations have uncovered AGN that appear to be intrinsically X-ray weak (Luo et al. 2014) as well as sources with very complex, multi-phase obscurers (Bauer et al. 2015; Teng et al.
2015) and very heavy (Compton-thick) obscuration (e.g., Koss et al. 2016) that may preferentially be associated with late-stage galaxy mergers (e.g., Ricci et al. 2017a; Lansbury et al. 2017). Finally, NuSTAR has observed higher-redshift (z > 0.1) luminous, obscured AGN that are initially identified at other wavelengths, for example through high-excitation optical lines (Lansbury et al. 2015) and WISE mid-infrared colors (Stern et al. 2014; Yan et al. 2019). NuSTAR has shown that many previously X-ray detected AGN are significantly more obscured than can be determined through soft X-rays alone; including constraints from NuSTAR indicates a large Compton-thick fraction of ~30% or higher (Lansbury et al. 2015). Furthermore, some luminous obscured AGN with no previous X-ray detections are extremely weak or undetected in deep NuSTAR exposures, implying N_H of 10^25 cm^-2 or greater (Stern et al. 2014; Yan et al. 2019). These results all inform the inputs to CXB synthesis models by providing information on the X-ray spectral shapes, N_H distribution, and luminosity evolution of AGN. Recently, we have been able to perform direct tests of CXB synthesis models by stacking the NuSTAR emission from known AGN (Hickox et al. in prep; Figure 2). These results are able to distinguish between different CXB synthesis models and favor the latest prescriptions that include sophisticated handling of the distributions and evolution of spectral parameters (e.g., Ananna et al. 2019). Looking toward the future, we note that NuSTAR surveys have dramatically increased the fraction of the hard CXB that is resolved into individual sources, to ≈30% (Harrison et al. 2017) compared to <1% with Swift/BAT. However, NuSTAR detections are still mainly limited to <16 keV due to instrumental backgrounds, and NuSTAR is approaching a fundamental flux limit for direct detection due to source confusion. Thus the majority of the CXB is still not resolved directly, providing a large discovery space for more sensitive hard X-ray observations. Below, we outline some outstanding challenges in our understanding of the hard X-ray emission from AGN and opportunities with a future sensitive, high-resolution hard X-ray mission.

Current challenges and open questions: AGN obscuration, connections to galaxies, and accretion physics

A major challenge in AGN population studies with Chandra and XMM-Newton is the impact of absorption, which preferentially affects soft (0.5-2 keV) X-rays and can introduce significant uncertainty on obscuration levels and intrinsic luminosities without sensitive high-energy constraints. Thanks to the large sample provided by NuSTAR surveys (Figure 1), it was possible to measure for the first time the hard X-ray luminosity function beyond the local Universe (Aird et al. 2015a) and the evolution of the obscuration distribution to z = 3 (Zappacosta et al. 2018; Del Moro et al. 2017). Nonetheless, the sample is limited in both size and depth. Moreover, most of the sources are not detected in the hardest 8-24 keV band, and their luminosities all probe the brighter range above the "knee" of the luminosity function, missing the bulk of the population contributing to the total accretion density. While NuSTAR continues to survey several fields and the serendipitous sample is growing continuously (now ~1000 sources), improving the depth is a key component.
Measuring sources below the "knee" of the luminosity function at z > 0.5 and probing for the first time the full range of luminosity and N_H at all redshifts would allow us to constrain the total accretion density at the energy peak of the CXB and beyond. A related question is the physical nature of the obscuration, whether it arises from a parsec-scale "torus" around the AGN central engine (e.g., Netzer 2015; Ricci et al. 2017b) or from a range of scales, from broad-line-region clouds to galaxy-scale gas and dust lanes (e.g., Buchner et al. 2017; Hickox & Alexander 2018; Blecha et al. 2018). Soft X-ray observations are often degenerate between obscuration by small-scale clumpy material and smooth, large-scale clouds, but hard X-ray observations can constrain the reflection component and provide important additional constraints on the geometry as well as the metallicity of the material, as the shape of the Compton hump depends on the abundances (e.g., Wilman & Fabian 1999). An additional open question regards the presence of extreme Compton-thick obscuration (N_H > 10^25 cm^-2). While such sources are known locally (e.g., the canonical Seyfert 2 NGC 1068), their general abundance is poorly understood, in part because their X-ray emission, along with many other AGN signatures, is heavily suppressed. The existence of extremely obscured sources can strongly impact estimates of the total AGN power and the global radiative efficiency (e.g., Comastri et al. 2015). With sensitive hard X-ray observations of AGN selected in the optical, infrared, or radio, we can infer the presence of these heavily obscured sources (e.g., Yan et al. 2019; Aalto et al. 2019 White Paper). To this end, NuSTAR is carrying out a dedicated survey of heavily obscured candidate AGN in the local Universe (NuLANDS; Boorman et al. in prep). The characterization of the X-ray emission of AGN is the best tool available to investigate the physical properties of the innermost regions around accreting SMBHs and to measure coronal properties (temperature, optical depth and geometry; see the associated White Paper by Kamraj et al. 2019). X-ray reverberation mapping studies with NuSTAR have provided measurements of the corona radius, while measuring the cut-off energy in X-ray spectra can provide the temperature (e.g., Ricci et al. 2018). Because of its limited bandwidth, NuSTAR cannot tightly constrain cut-off values larger than a few hundred keV in the observed frame, except for sources with large photon statistics (see e.g. NGC 5506; Matt et al. 2015); these studies have therefore so far been limited to a sample of a few tens of (mostly unobscured) sources (e.g., Fabian et al. 2015) at z < 0.1, luminosities <10^45 erg s^-1, and cut-off energies <200 keV. Only recently has the high-energy cut-off been measured for a few bright quasars with 2-10 keV luminosities of 10^46 erg s^-1 at z > 1. These measurements provided an estimate of the coronal temperature, testing runaway pair-production models in high-redshift sources, where higher cut-off energies can be constrained as the spectrum is redshifted to lower energies (Lanzuisi et al. 2019; see magenta points in Fig. 2).

[Figure caption: Directly resolved by NuSTAR vs. only accessible with a next-generation high-resolution hard X-ray observatory. HEX-P can directly resolve up to 80% of the hard CXB, and account for over 90% through stacking of soft X-ray sources.]
Opportunities with a next-generation hard X-ray mission

The high scientific return of NuSTAR and the opportunities described above for sensitive hard X-ray observations demonstrate the motivation for a High Energy X-ray Probe (HEX-P; Madsen et al. 2018), which would be highly complementary to upcoming lower-energy X-ray imaging and spectroscopy missions such as XRISM and Athena and concepts like Lynx and AXIS. The key requirements would be a wider energy bandpass (1 to 200 keV) and higher sensitivity at harder energies, which would completely revolutionize our knowledge of AGN and the origin of the CXB. Improved sensitivity could be achieved by increasing the effective area (through more optics modules and mirror shells), lowering the instrumental background, and, perhaps most importantly, decreasing the size of the PSF to avoid source confusion and reduce background. Such a configuration would increase the number of sources by a factor of ~70 (assuming a survey strategy similar to NuSTAR's) and allow us to probe a factor of 30 deeper in the 8-24 keV band, directly resolving as much as 80% of the CXB, a regime that so far has only been probed in stacking studies (see Figs. 1 and 3). HEX-P would also allow us to move to higher energies (to 50 keV and beyond), where no detections have been made so far by NuSTAR (see Masini et al. 2018). With increased high-energy bandwidth and area (as proposed for HEX-P), a NuSTAR-like survey (Fig. 3) would allow us to accurately measure N_H for even heavily obscured, distant AGN (up to z = 3) and to constrain the redshift evolution of the N_H distribution out to extreme Compton-thick columns. Survey exposures would yield ~2,000 counts for AGN at 0.5 < z < 2 and L_X > 10^43.5 erg s^-1, constraining the high-z cut-off to 30% error (Figure 2). Building on the ground-breaking joint analyses of NuSTAR with Chandra and XMM-Newton, HEX-P would provide excellent synergy with future soft X-ray missions (e.g., XRISM, Athena, Lynx). These joint observations would take particular advantage of the high spectral resolution of soft X-ray calorimeters, providing exquisite measurements of X-ray line complexes that, together with hard X-ray constraints on the Compton hump, are critical for understanding the geometry of obscuration and the physics of accretion.
Comprehensive N-Glycan Profiling of Avian Immunoglobulin Y

Recent exploitation of the avian immune system has highlighted its suitability for the generation of high-quality, high-affinity antibodies to a wide range of antigens for a number of therapeutic and biotechnological applications. The glycosylation profile of potential immunoglobulin therapeutics is species specific and is heavily influenced by the cell line and culture conditions used for production. Hence, knowledge of the carbohydrate moieties present on immunoglobulins is essential, as certain glycan structures can adversely impact their physicochemical and biological properties. This study describes the detailed N-glycan profile of IgY polyclonal antibodies from the serum of leghorn chickens using a fully quantitative, high-throughput N-glycan analysis approach based on ultra-performance liquid chromatography (UPLC) separation of released glycans. Structural assignments revealed serum IgY to contain complex bi-, tri- and tetra-antennary glycans with or without core fucose and bisects, as well as hybrid and high-mannose glycans. High sialic acid content was also observed, including rare sialic acid structures, likely polysialic acids. It is concluded that IgY is heavily decorated with complex glycans; however, no known non-human or immunogenic glycans were identified. Thus, IgY is a potentially promising candidate for immunoglobulin-based therapies for the treatment of various infectious diseases.

Introduction

Antibodies are at the forefront of the field of targeted therapeutics and diagnostics due to their natural high affinity and excellent half-life properties [1]. These molecules can be readily manipulated using standard molecular biology techniques into specialised antibodies that are tailored to perform efficiently in their chosen end-point application [2]. The biopharmaceutical industry has invested heavily in antibody-based therapeutics, which currently represent the largest and fastest growing class of biopharmaceuticals [3]. Polyclonal and recombinant antibodies are developed in many different species. However, a large number of protein targets are highly conserved in mammalian evolution, and commonly used mammalian species, such as rabbits and mice, are thus inclined to produce a somewhat limited immune response due to the immunological tolerance invoked during foetal development [4]. Species more phylogenetically distant from humans, such as chickens, whose genomes diverged from mammalian genomes some 310 million years ago [5], are ideal alternatives for immunisation and selection of antibodies against highly conserved human proteins [4,6]. IgY is the predominant serum immunoglobulin in birds, reptiles and amphibians and is considered to be the evolutionary ancestor of the uniquely mammalian IgG and IgE antibodies [6]. Although IgY has characteristics and functions similar to its mammalian counterpart IgG, with two heavy (67-70 kDa each) and two light (25 kDa each) chains (Fig 1), structural differences exist in the number of constant heavy domains: IgY has an additional constant heavy domain, resulting in its higher molecular mass (180 kDa). Furthermore, IgY lacks a hinge region and has significantly reduced flexibility in comparison to IgG. This limited flexibility derives from proline-glycine-rich regions around the Cυ1-Cυ2 and Cυ2-Cυ3 domains [6]. These structural differences provide IgY with distinct biochemical properties and behaviour (Table 1).
IgY is more heavily glycosylated than its mammalian counterpart, as it contains two potential N-glycosylation sites. One is located in the Cυ2 domain, which is absent in mammalian IgG, and the other is located in the Cυ3 domain, which corresponds to the CH2 (Cγ2) domain of mammalian IgG (Fig 1) [6,8]. Structural characterisation of the N-glycans present on antibody therapeutics is a regulatory requirement, as the nature of these glycans can decisively influence the therapeutic performance of an antibody [9]. The linked carbohydrate moieties of therapeutic antibodies affect both their thermal stability and physicochemical properties, along with other crucial features such as receptor-binding activity, circulating half-life and immunogenicity [10]. N-glycan profiling of therapeutic antibodies with good reproducibility is vital to fulfil the needs of both the biopharmaceutical industry and national regulatory agencies [11]. There is now considerable awareness of the therapeutic value of IgY antibodies with respect to a variety of pathologies including, but not limited to, pulmonary and gastrointestinal infections. For a detailed review of current IgY therapeutic approaches in both animal studies and clinical trials in human cohorts, see Spillner et al., 2012 [1]. The N-glycosylation pattern of avian IgY was previously shown to be more analogous to that of mammalian IgE than IgG, presumably reflecting the structural similarity to mammalian IgE [7]. While previous studies have elucidated the IgY N-glycan profile in detail from serum, egg yolk and various other expression vehicles [8,12,13], in this study the chromatography technique was significantly improved through the use of hydrophilic interaction chromatography ultra-performance liquid chromatography (HILIC UPLC), which allows shorter run times and greatly increased resolution. In addition, the bioinformatics tool GlycoBase was used to greatly assist the analyses. GlycoBase consists of a database with HILIC and mass spectrometry data for over 460 2-AB-labeled N-linked, 68 O-linked, and 71 free glycan structures. This reliable and robust method facilitates detailed analysis of femtomolar quantities of N-linked sugars released from glycoproteins. This study describes the detailed N-glycan profile of IgY polyclonal antibodies from the serum of Leghorn chickens using a fully quantitative high-throughput N-glycan analysis based on ultra-performance liquid chromatography (UPLC) separation of released glycans. [...] the duration of the study. The procedures were classified as "mild", and all procedures were carried out by highly trained, competent personnel.

Immunoglobulin Y purification

Polyclonal IgY antibodies were purified from the serum of an adult female leghorn chicken using the Pierce Thiophilic Adsorption Kit (Thermo Scientific, Ireland). The protocol was carried out as per the manufacturer's guidelines, and the eluted protein fractions were pooled and concentrated in a 10 kDa MWCO Vivaspin column (Sartorius, Ireland). Total protein concentration was measured by spectrophotometry at 280 nm (NanoDrop 1000). The purified IgY polyclonal antibody sample was subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) in a 12% gel and stained with InstantBlue (MyBio, Ireland). The sample was also subjected to SDS-PAGE in a 12% gel and subsequently blotted onto a nitrocellulose membrane using the Pierce G2 Fast Blotter system (Thermo Scientific, Ireland).
The membrane was blocked for 1 hour at room temperature with PBS containing 5% (w/v) skim milk. To detect the heavy and light chains of the IgY, the sample was probed with a horseradish peroxidase (HRP)-labelled donkey anti-IgY (H+L)-specific antibody (Gallus Immunotech, Canada) (1:2,000 dilution) in PBS-Tween 20 (0.05%, v/v)-skim milk (1%, w/v) for 2 hours at room temperature. Specific bands were visualised using liquid TMB as a substrate for HRP.

Ultra-Performance Liquid Chromatography (UPLC)

2AB-derivatized N-glycans were separated by UPLC with fluorescence detection on a Waters Acquity UPLC H-Class instrument consisting of a binary solvent manager, sample manager and fluorescence detector under the control of Empower 3 chromatography workstation software (Waters, Milford, MA, USA) [17]. An injection volume of 10 μL of sample prepared in 70% (v/v) MeCN was used throughout. Samples were maintained at 5°C prior to injection, while separation was carried out at 40°C. The fluorescence detection excitation/emission wavelengths were λ_excitation = 330 nm and λ_emission = 420 nm, respectively. The system was calibrated using an external standard of hydrolysed and 2AB-labeled glucose oligomers to create a dextran ladder, as described previously [14].

Weak Anion-Exchange High-Performance Liquid Chromatography (WAX HPLC)

Weak anion-exchange (WAX) HPLC to separate the N-glycans by charge was carried out as detailed in Royle et al., 2006, with a fetuin N-glycan standard as reference. WAX HPLC was performed using a Vydac 301VHP575 7.5 × 50 mm column (Anachem) on a 2695 Alliance separations module with a 2475 fluorescence detector (Waters), set to detection excitation/emission wavelengths of λ_excitation = 330 nm and λ_emission = 420 nm, respectively [15]. Solvent A was 0.5 M formic acid adjusted to pH 9.0 with ammonia solution, and solvent B was 10% (v/v) methanol in water. Gradient conditions were as follows: a linear gradient of 0 to 5% (v/v) A over 12 minutes at a flow rate of 1 mL/minute, followed by 5-21% (v/v) A over 13 minutes, then 21-50% (v/v) A over 25 minutes, 80-100% (v/v) A over 5 minutes, followed by 5 minutes at 100% A. Samples were prepared in water, and a fetuin N-glycan standard was used for calibration [15].

Ultra-Performance Liquid Chromatography-Fluorescence-Mass Spectrometry (UPLC-FLR-MS)

For UPLC-FLR-QTOF MS analysis, lyophilised IgY samples were reconstituted in 3 μl of water and 9 μl of acetonitrile. Online coupled fluorescence (FLR)-mass spectrometry detection was performed using a Waters Xevo G2 QTof with an Acquity UPLC (Waters Corporation, Milford, MA, USA) and a BEH Glycan column (1.0 × 150 mm, 1.7 μm particle size). For MS data acquisition the instrument was operated in negative-ion sensitivity mode with a capillary voltage of 1.80 kV. The ion source block and nitrogen desolvation gas temperatures were set at 120°C and 400°C, respectively. The desolvation gas was set to a flow rate of 600 L/h. The cone voltage was maintained at 50 V. Full-scan data for glycans were acquired over an m/z range of 450 to 2500. Data collection and processing were controlled by MassLynx 4.1 software (Waters Corporation, Milford, MA, USA). The fluorescence detector settings were as follows: λ_excitation = 330 nm, λ_emission = 420 nm; the data rate was 1 point/second and the PMT gain was 10. The sample injection volume was 10 μL. The flow rate was 0.150 mL/minute and the column temperature was maintained at 60°C; solvent A was 50 mM ammonium formate in water (pH 4.4) and solvent B was acetonitrile.
A 40 minute linear gradient was used as follows: 28% (v/v) A for 1 minute, 28-43% (v/v) A over 30 minutes, 43-70% (v/v) A over 1 minute, 70% (v/v) A for 3 minutes, 70-28% (v/v) A over 1 minute and finally 28% (v/v) A for 4 minutes. Samples were diluted in 75% (v/v) acetonitrile prior to analysis. The weak wash solvent was 80% (v/v) acetonitrile and the strong wash solvent was 20% (v/v) acetonitrile. To avoid contaminating the mass spectrometry system, the flow was diverted to waste for the first 1.2 minutes and after 32 minutes.

Molecular Modelling of IgY

Molecular modelling of IgY was performed on a Silicon Graphics Fuel workstation using InsightII and Discover software (Accelrys Inc., San Diego, USA). Figures were produced using the program PyMOL [18]. Protein structures used for modelling were obtained from the Protein Data Bank (PDB) database [19]. The peptide structure of chicken IgY was based on the crystal structures of human IgE domains Cε2-4 [20] and a human IgG Fab domain [21]. The sequence alignment and methods for generation of the homology model are provided in S1 File.

IgY Purification

Immunoglobulin Y differs from most other immunoglobulins in that it does not bind protein A or protein G [22]. Here, IgY was successfully recovered from the serum of chickens using thiophilic adsorption, which is based on the principles of hydrophobic interaction chromatography. Many proteins, particularly immunoglobulins, will bind to an immobilised ligand that contains a sulfone group neighbouring a thioether. Addition of salts such as potassium sulphate promotes binding by encouraging the protein into close proximity with the ligand [23,24]. Total protein concentration was determined to be 11.5 mg/mL by spectrophotometry at 280 nm (NanoDrop 1000). The heavy and light chains of the purified IgY were visualized by Western blot analysis (Fig 2).

IgY N-glycan profiling

The N-glycans released from the purified IgY were analysed by WAX HPLC and HILIC UPLC in combination with exoglycosidase digestions, with structural assignments made using established methods [15] and the software tool GlycoBase (https://glycobase.nibrt.ie). UPLC-FLR-QTOF MS analysis was also carried out for comparative analysis. Annotation of the N-glycans present in each chromatographic peak was based upon the oligosaccharide composition derived from the m/z value. Over 80 different glycan structures were assigned to 40 peaks (each peak containing one or more glycans) (Figs 3 and 4 and S1 Table). The N-glycan structures annotated include high-mannose, hybrid and complex glycans with variable degrees of core fucosylation, galactosylation and sialylation. To assign the complex sialylation properly, the N-glycome was separated by WAX HPLC according to the number of sialic acids, and each WAX fraction was then subjected to an array of sialidases for assignment of the sialic acid linkages (S1 Table). The resulting HILIC UPLC profiles were combined with other exoglycosidase digests and indicated the presence of complex glycans with more sialic acids than branches (Fig 4, S1 Table).

[Figure caption: Shown here are the most abundant glycans identified (glycans assigned to peaks with % area greater than 5%); highlighted in grey is the most abundant glycan(s) within that particular peak. For full glycan assignment see S1 Table [15,25].]

Molecular Modelling of IgY

The precise glycans chosen for site N390 (M9Glc/peak 31 and A2G2S1/peak 20) were based on the site-specific glycan analysis of IgY [8].
The glycans chosen for site N292 (A2G2S2/peak 27 and FA3G3/peak 24) were representatives from the other largest peaks. Glycan structures were generated using the database of glycosidic linkage conformations [26] and in vacuo energy minimisation to relieve unfavourable steric interactions. The Asn-GlcNAc linkage conformations were based on the observed range of crystallographic values [27], with the torsion angles around the Asn Cα-Cβ and Cβ-Cγ bonds then being adjusted to eliminate unfavourable steric interactions between the glycans and the protein surface (Fig 5). The complete IgY sequence is provided in S1 Fig.

Discussion

Chicken antibodies have several distinct biochemical advantages over mammalian antibodies and are widely utilised in the field of biotechnology. They do not activate the mammalian complement system, nor do they interact with rheumatoid factors or bacterial and human Fc receptors. Hence IgY antibodies make ideal reagents for immunological assays, as they can reduce assay interference in mammalian serum samples, resulting in increased sensitivity as well as decreased background [28]. The use of chickens as hosts for the generation of therapeutic antibodies is becoming increasingly prevalent with a greater understanding of the unique attributes of avian antibodies [2,29]. Polyclonal IgY represents an attractive approach to immunotherapy for the treatment of numerous diseases [1]. Notably, orally administered IgY preparations have been demonstrated as an alternative to antibiotics for the prevention of pulmonary Pseudomonas aeruginosa (PA) infections in a group of patients with cystic fibrosis [30,31]. In those studies, the authors showed that the IgY-treated group had significantly fewer incidents of colonization with PA than the control group, and none of the IgY-treated patients became chronically colonized with PA [30,31]. The robust, reliable methods employed in the present study allow shorter run times with increased resolution, enabling the identification of glycans that may not have been observed in previous IgY glycoprofiling studies. Our structural assignments revealed serum IgY to contain mainly complex bi-, tri- and tetra-antennary glycans with or without core fucose and bisects, all with varying levels of galactosylation and sialylation, as well as hybrid and high-mannose glycans. In investigating the site-specific N-glycosylation of IgY, Suzuki and Lee [8] noted that the Fc portion of IgY possesses an N-glycosylation site that is structurally equivalent to the conserved glycosylation sites of other Ig classes in mammals and is composed of predominantly high-mannose-type oligosaccharides [8]. This uniquely avian glycosylation pattern at the conserved N-glycosylation site is thought to reflect the structural differences between IgG and IgY (IgY lacks the defined hinge region observed in IgG) [12]. The additional N-glycosylation site, located in the Cυ2 domain, was previously shown to contain exclusively complex-type oligosaccharides [8,12]. These distinct avian glycosylation patterns and structural differences provide IgY with unique biochemical properties and behaviour. A model of IgY bearing the glycans identified in this study was generated following glycan assignment (Fig 5). Characterisation of the individual glycans decorating a protein is essential for a detailed understanding of structure/function relationships and the design of potential therapeutic agents. The model generated in this study aims to enhance our understanding of the therapeutic potential of IgY.
Computational modelling methods are universally accepted as central tools in the invention process for many biopharmaceuticals, facilitating drug development tasks such as optimising affinity for a target while minimising cross-reactive effects, alongside optimising pharmacokinetic properties [32]. The oligosaccharide content of a therapeutic immunoglobulin plays a significant role in its bioactivity and pharmacokinetic (PK) behaviour. Raju and colleagues (2000) examined variations in the glycan content of IgG across several species. These authors highlighted the importance of choosing the right host when generating therapeutic IgG, as the terminal sialylation of IgG is species specific [13]. In that study, LC-MS analysis of chicken IgY suggested that the N-linked glycosylation of chicken IgY is considerably more heterogeneous than that of human IgG. Our results are consistent with previous N-glycan studies of IgY from both serum and egg yolk [8,12,13] and also detect several previously unidentified structures. In this study a high sialic acid content was observed, with many sialic acid isomers (same composition but different sialic acid linkage arrangements, resulting in a different GU from the original structure). The presence of unusual sialic acids was also noted, which are likely to be polysialic acids or sialic acid linked to N-acetylgalactosamine (GalNAc) as well as to galactose. Sialic acids are most commonly α2-3 or α2-6 linked to galactose (Gal) or α2-6 linked to GalNAc. However, sialic acid can also be found linked to N-acetylglucosamine (GlcNAc) or to another sialic acid in α2-8 or α2-9 linkage [33]. Polysialic acids occupy internal positions within glycans, the most common arrangement being one sialic acid residue attached to another, often at the C-8 position [34]. The high sialic acid content of IgY is very important when considering IgY as a therapeutic agent, as the level of sialic acid can have a significant impact on the PK of therapeutic antibodies: a lower content of total sialic acids can significantly reduce the half-life of a drug [35]. Hence, the high sialic acid content observed here suggests that IgY-based biotherapeutics could have extended circulating half-lives, making them promising candidates against a variety of pathogens. High-mannose glycans were also found on the IgY; these can be removed from circulation by the mannose-binding receptor, lowering half-life [36]. However, the high-mannose glycans on IgY are rather low in quantity in comparison with the highly sialylated complex glycans and should have no effect on the therapeutic application of IgY. Certain glycan structures have a direct impact on the immunogenicity of therapeutic proteins; that is, their presence can affect protein structure in such a way that the protein becomes immunogenic. However, the glycan structure itself can also induce an immune response. The sialic acid N-glycolylneuraminic acid (Neu5Gc) and terminal galactose-α-1,3-galactose are examples of such structures that are not naturally present in humans and are known to be immunogenic in therapeutics [37]. These non-human antigenic structures can also promote clearance of a biopharmaceutical preparation from circulation [38-40]. The chimeric mouse-human IgG1 monoclonal antibody Cetuximab is an anti-human epidermal growth factor receptor (EGFR) antibody used for the treatment of several cancers [38].
High incidences of hypersensitivity reactions to Cetuximab were reported, and a study by Chung and colleagues showed that the majority of patients who had a hypersensitivity reaction to Cetuximab also had circulating IgE antibodies against Cetuximab before therapy was initiated. These antibodies were specific for the glycan structure galactose-α-1,3-galactose, which is present on the Fab portion of the Cetuximab heavy chain [38]. To overcome the severe hypersensitivity reactions observed with many immunoglobulin-based biotherapeutic agents, it is of primary importance to ensure that the oligosaccharide content will not elicit such reactions. Recently, a glyco-engineered anti-EGFR monoclonal antibody with a lower α-Gal content than Cetuximab was developed [41], highlighting the importance of these structures to the biopharma industry in the development of novel biotherapeutics. In conclusion, while IgY is heavily decorated with complex glycans, no non-human immunogenic structures were identified. These results were determined using highly robust methods and are in accordance with previous IgY glycosylation studies from chicken serum [8]. The results from this study, combined with other known advantages of chicken antibodies, such as increased stability over IgG and phylogenetic distance from man [1], make chickens ideal hosts for the generation of novel oral therapeutic interventions for the treatment of numerous infectious diseases.

Supporting Information

S1 Fig. Full IgY Sequence: (A) The complete amino acid sequence of the IgY upsilon heavy chain, including the leader sequence and rearranged VDJ sequences, and (B) the amino acid sequence of the chicken λ light chain. Numbering for the chicken IgY heavy chain is based on the deduced amino acid sequences from cDNA, starting from the first alanine in the VH region [42,43]. Numbering for the chicken light chain immunoglobulin is derived from the nucleotide sequence of recombinant cDNA plasmids constructed from chicken spleen poly(A)-containing RNA [43]. (TIF)

S1 File. Sequence alignment and methods for generation of homology model. (DOCX)

S1
Echinoderm model systems, homology, and phylogenetic inference: Comment and reply to Paul (2021) Understanding the phylogenetic relationships among derived blastozoans has been a goal of researchers since phylogenetic methodologies were first applied to Paleozoic echinoderms. Paul (2021) proposed a new "pan-dichoporites" group to circumscribe early Paleozoic blastozoans. Unfortunately, this work includes many inaccuracies, non-reproducible analyses, and nonstandard method choices that confuse rather than advance the understanding of echinoderm paleobiology. Herein, we focus on key aspects of philosophy, methodology, and data reproducibility that the publication of Paul (2021) raises and that need to be addressed and considered by echinoderm researchers as they assess the concept of pan-dichoporite echinoderms. To better understand body-wall homologies, common ontogenetic patterns, major events in body plan evolution, and the identification of synapomorphies among morphologically disparate echinoderm clades, Mooi et al. (1994) proposed the Extraxial-Axial Theory (EAT), largely based on a novel understanding of larval development in extant echinoids. A discussion of the validity of EAT is outside the scope of this commentary, but its use by Paul (2021) does raise important philosophical questions. Are eleutherozoans, including echinoids, an appropriate model for understanding morphologically diverse extinct echinoderm clades such as the blastozoans and other pelmatozoans? Assumptions that Paleozoic echinoderms would have had similar developmental pathways to extant echinoids are problematic. At present, very little information exists on the larval histories of Paleozoic pelmatozoans (Sumrall and Sprinkle 1998; Sevastopulo 2005). This lack of developmental data should be mentioned as a caveat in all studies invoking EAT as a model for homology among non-eleutherozoan taxa. For example, the ocular plate rule (OPR) and radial water vessels (RWV) are concepts based on extant eleutherozoans, but ocular plates are documented only in echinoids and asteroids and do not occur in other groups. Universal Elemental Homology (UEH) is a homology hypothesis that takes into account comparative anatomy, ontogeny, function, and position to identify homologous plates across taxa (Sumrall 2010; Sumrall and Waters 2012). EAT and UEH are two different schemes for understanding homology that should not be considered mutually exclusive. Sumrall and Waters (2012: 956) attempted to underscore this point with the following: "The EAT theory has been useful for understanding homology at the highest taxonomic levels where deep structure is illuminated by these regional homologies. Universal elemental homology (described here for stemmed echinoderms) takes the understanding of homology to the next level by allowing the identification, in many cases, of individual plates across clades. Thus, evolutionary changes in shape or plate contact relationships can be used to generate characters that are useful for reconstructing phylogeny at the lowest taxonomic levels." The plate nomenclature update from Sumrall and Waters (2012) was useful in showcasing the critical issue with plate naming systems in echinoderm paleobiology. The work by Paul (2021) seemingly reproduces a version of this update but following the EAT schema. Paul (2021) largely ignores the efforts to understand the homologous oral plates in blastozoans in the UEH schema, beyond stating that they are incongruent with EAT (Sumrall and Waters 2012).
We completely understand that authors may not fully agree with UEH; however, when disagreement arises between homology schemes, a case should be presented that demonstrates the efficacy of one approach over the other, rather than completely disregarding the body of published work. Similar to the issues that arise when applying EAT to blastozoans, UEH does not work when applied to echinoids. The homologous skeletal elements outlined in UEH do not exist in echinoids, which would all be rendered as non-applicable data in downstream morphological and phylogenetic analyses. Therefore, the homology scheme employed to assess and generate character state data for an echinoderm group is important in understanding small-group (e.g., Blastoidea) and large-group (e.g., Blastozoa) evolutionary patterns. Our hypotheses of homology must be critically evaluated, as phylogenetic inferences are sensitive to these data and can lead us to erroneous understandings of evolutionary relationships. Data inputs and phylogenetic inference Phylogenetic tools are powerful, but the output is highly dependent on the data input. For reproducibility, all published studies should include as supplemental material all files used to perform the analysis. Paul (2021) stated that he only used early Paleozoic blastozoans, though the eublastoid used in this analysis is a Carboniferous genus when there are Silurian taxa that would have been better suited (e.g., Polydeltoideus or Troosticrinus). There is also an inaccuracy in Table 1 with the placement of Macurdablastus in Stephanocrinidae. This taxonomic placement is not supported by previous work (e.g., Broadhead 1984; Bodenbender and Fisher 2001; Bauer et al. 2019). Phylogenetic characters are assumed to be both heritable and independent of one another. These assumptions require the removal of character sets that break them, such as ecology and stratigraphy (Sumrall 1997). Swofford and Olsen (1990) present the challenges that would arise in computing phylogenetic trees if dependent characters were used. In Paul (2021), no character explanations were provided to clarify the thought behind character constructions, making it difficult to ascertain the morphologies encompassed by each character state. However, many characters used in the analysis appear to be constructed outside of these assumptions or use arbitrary ranges that do not describe alternate expressions of homology (Wiens 2001; Wiley and Lieberman 2011). Many characters are also coded incorrectly. For example, character 11 is "radials being present or absent". Taxa with missing radials (e.g., Thomacystis) should then be coded as non-applicable for other characters involving radial plates, though in Paul (2021) they are coded as "?". Three of the fourteen taxa are missing more than 10 of the 24 characters (42%). Missing data have a large effect on tree topology, especially in such a small dataset. This is critical because character selection (and taxon selection) can have large influences on tree outputs (Wiley and Lieberman 2011). For any phylogenetic analysis, it is commonplace for character descriptions and codings to be publicly available, allowing readers to follow and understand the author's justifications for character descriptions and character state transformation selections. Missing character data can artificially alter tree topologies as they relate to accuracy and support (Scotland et al. 2003). Two taxa in the work of Paul (2021) are missing 11 and 12 characters.
This, in concert with the three parsimony-uninformative characters, problematically leaves fewer than 50% of the characters scored for these taxa, possibly resulting in a decrease in accuracy of reconstructed trees (Huelsenbeck 1991; Hartmann and Vision 2008). With too much information missing, maximum parsimony will recover many alternate topologies, and the consensus summary trees may be misleading (Wilkinson 1995). Best practices suggest that results are more resolved when there are at least two to three times as many characters as taxa (Scotland et al. 2003). Phylogeny.-General methodological and interpretive errors exist in the recent Paul (2021) work. For example, there is a rather lengthy discussion regarding the placement of the unusual glyptocystitoid Rhombifera in blastozoan evolutionary history. The author suggests that inclusive blastoids (i.e., eublastoids, coronoids, Lysocystites, Macurdablastus) are most closely related to Rhombifera (Paul 2021: 48). However, the results of the phylogenetic analysis (Paul 2021: 59) do not align with this assessment, as Rhombifera is not recovered as the sister group to the inclusive Blastoidea. There is no discussion of why Rhombifera is not sister taxon to the inclusive Blastoidea, as one would expect given the earlier sections. The author concludes that the ambulacrals of Lysocystites are Rhombifera radials and eublastoid lancets. Paul (2021) does not discuss that, if this is the case, we would expect to see ambulacrals on Thomacystis and Caryocrinites as well, given the results of his phylogeny. The phylogenetic analysis performed here used a heuristic search method for 14 taxa. However, best practices in phylogenetics suggest that a heuristic search for this particular analysis is inappropriate, as the size of the dataset allows for the use of exact methods (i.e., exhaustive searches or branch-and-bound searches; Swofford and Olsen 1990). Support indices are useful measures that allow researchers to gain information on the dataset and resulting tree topology, but they provide little information about support for monophyletic groups within the tree (Wiley and Lieberman 2011). Other measures of nodal support (i.e., bootstrap) that resample the character matrix would be more valuable here. The author describes 100% support in reference to the 50% majority rule consensus tree, but this information indicates only that, of the six resulting trees, the relationships were recovered 100% of the time. It does not indicate that the nodes are 100% supported. Upon request, the PAUP nexus and log files were sent for re-analysis. Our intention was not to correct the character coding dataset, but rather to illustrate pathways for improvement. We were able to replicate the exact results from PAUP for the parsimony analysis. With this tree topology, we ran a bootstrap analysis, resampling all 24 characters for 100 replicates to determine nodal support. Only six of the 11 nodes had ≥ 50% support. Unsurprisingly, the inclusive Blastoidea is found at 79%; Caryocrinites + Thomacystis is found at 76%; the large grouping of all taxa except Akadocrinus and Cambrocrinus is at 72%. Support for Macurdablastus and Codaster is at 68%, and for Lysocystites and Stephanocrinus at 65%. Finally, the large grouping of Macrocystella upward is supported at 50% (Fig. 1A).
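To make the missing-data and character-informativeness concerns above concrete, the following minimal sketch (in Python, with an invented toy matrix rather than the published codings) implements the two checks discussed: flagging taxa scored for fewer than half of the characters, and identifying parsimony-uninformative characters, ie, those lacking at least two states that each occur in at least two taxa:

```python
# Toy example only: the taxon names follow the text, but the codings are
# invented for illustration. "?" marks missing or inapplicable data.
matrix = {
    "Akadocrinus":    "0011?00110",
    "Cambrocrinus":   "10?1000110",
    "Macurdablastus": "1101??????",
    "Codaster":       "1101011010",
}
n_chars = len(next(iter(matrix.values())))

# Flag taxa scored for fewer than half of the characters.
for taxon, row in matrix.items():
    missing = row.count("?")
    if missing / n_chars > 0.5:
        print(f"{taxon}: {missing}/{n_chars} characters missing")

# A character is parsimony informative only if at least two states
# each occur in at least two taxa (ignoring "?").
def parsimony_informative(column):
    counts = {}
    for state in column:
        if state != "?":
            counts[state] = counts.get(state, 0) + 1
    return sum(1 for n in counts.values() if n >= 2) >= 2

uninformative = [i + 1 for i in range(n_chars)
                 if not parsimony_informative([row[i] for row in matrix.values()])]
print("parsimony-uninformative characters:", uninformative)
```

Screens like these, run before analysis, make it easy to report (and for readers to verify) how much of a matrix actually informs the resulting topology.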
Since parsimony analysis excludes characters that are parsimony uninformative (three of the 24 characters here), we also re-analyzed the dataset with maximum likelihood to see whether utilizing all 24 characters under the Mkv model (Lewis 2001) provided differing results. Three trees were retained with an LnL score of 150.4167, with six nodes having ≥ 50% support (Fig. 1B). The major difference between the recovered trees is the placement of Ridersia, which is not surprising given the number of missing characters (8/24) for that taxon (Fig. 1). In this tree, support for the grouping from Ridersia upward is much higher, at 91%, than in the parsimony results. The relationships among the inclusive Blastoidea match those proposed by Bauer et al. (2019), and the support is slightly higher than that of the parsimony tree, at 81%. In this analysis, Rhombifera is always sister taxon to Lepadocystis, which was an unresolved grouping in Paul's (2021) parsimony tree. Most notably, the taxon that is unstable across the three most likely trees (Ridersia) is not the taxon (Lepadocystis) that was unstable in the parsimony analysis (Fig. 1). The pan-naming convention in Paul (2021) is used incorrectly for the proposed pan-dichoporites. As laid out in the PhyloCode (Cantino and De Queiroz 2020), pan groups are designated as total groups that include the related name-bearing crown groups. For example, Echinodermata is a crown clade including all descendants of the last common ancestor of Echinoidea, Asteroidea, Ophiuroidea, Holothuroidea, and Crinoidea (Sumrall 2020a). This circumscribes a clade that includes all modern taxa and such fossil taxa as are descended from the most inclusive node. Sumrall (2020b) defined Pan-Echinodermata as the total group Echinodermata; that is, the stem lineage that includes all taxa closer to Echinodermata than to any other crown. This effectively aligns with the traditional, non-phylogenetically diagnosed Echinodermata, including stylophorans and other basal taxa. The pan-naming convention simply cannot be used for extinct lineages. There is no crown group within blastozoans, as the last known members were extinct by the end of the Permian. Likewise, there is no total group in the blastozoans because there is no set of taxa closer to a non-extant crown lineage than to another crown lineage. Pan-dichoporites is effectively a synonym of Blastozoa, and while names for several derived echinoderm clades were not formally defined either by PhyloCode rules or in ZooBank, they were previously suggested by Sumrall (1997). Stratigraphy.-Stratigraphy has played an interesting role in understanding echinoderm evolutionary relationships. The concept of stratocladistics (Bodenbender and Fisher 2001) was established using blastoids. However, stratigraphic information should not be incorporated into a phylogenetic character matrix (Sumrall 1997); it is information that is unrelated to the heritable traits of the animals. It can, however, be incorporated into other aspects of phylogenetic analysis as a parameter to better understand and analyze clades of interest. On several occasions the author makes targeted remarks regarding the stratigraphic occurrences of fossils and the evolution of these groups. These statements are problematic in different ways. The first reads, "Both the suggestions that Hemicosmites preceded the glyptocystitoids and that Lysocystites preceded the remaining blastoids sensu lato are counter to known stratigraphy of occurrences" (p. 59).
This frame of thinking excludes consideration of evolutionary processes and is framed not in terms of the most recent common ancestor but only of the terminal taxa. Such statements are exclusionary and can be confusing to readers. In another section the author writes, "It has become fashionable to ignore stratigraphy on the grounds of the incompleteness of the fossil record" (p. 59). This ignores the recent and large body of work utilizing stratigraphy in concert with phylogeny in fossil invertebrates (e.g., Congreve et al. 2019; Lam et al. 2018, 2021; Bauer 2021). Stratigraphy is not being ignored; rather, it is being utilized more fully and appropriately. Just because a taxon is stratigraphically older does not mean it is ancestral.
Number Needed to Quarantine and Proportion of Prevented Infectious Days by Quarantine: Evaluating the Effectiveness of COVID-19 Contact Tracing Objectives: Information on the effectiveness of COVID-19 contact tracing is lacking. We proposed 2 measures for evaluating the effectiveness of contact tracing and applied them in a public health unit in northern Portugal. Methods: This retrospective cohort study included the contacts of people with COVID-19 diagnosed July 1–September 15, 2020. We examined 2 measures: (1) number needed to quarantine (NNQ), as the number of quarantine person-days needed to prevent 1 potential infectious person-day; and (2) proportion of prevented infectious days by quarantine (PPID), as the number of potential infectious days prevented by quarantine divided by all infectious days. We assessed these measures by sociodemographic characteristics, types of contacts, and intervention timings (ie, time between diagnosis or symptom onset and intervention). We considered 3 scenarios for infectiousness periods: 10 days before to 10 days after symptom onset, 3 days before to 3 days after symptom onset, and 2 days before to 10 days after symptom onset. Results: We found an NNQ of 19.8-41.8 person-days and a PPID of 19.7%-38.2%, depending on the infectiousness period scenario. Effectiveness was higher among cohabitants and symptomatic contacts than among social or asymptomatic contacts. NNQ and PPID changed by intervention timings: the effectiveness of contact tracing decreased with time from diagnosis to quarantine of contacts and with time from symptom onset of the index case to contacts' quarantine. Conclusions: These proposed measures of contact tracing effectiveness of communicable diseases can be important for decision making and prioritizing contact tracing when resources are scarce. They are also useful measures for communication with the general population, policy makers, and clinicians because they are easy to understand and use to assess the impact of health interventions. The purpose of quarantine adoption during the COVID-19 pandemic has been to delay the epidemic peak or even delay the entry of the disease into a certain geographic area. It is therefore complementary to isolation, which corresponds to the separation of individuals who are already known to be infected with a contagious disease. Contact tracing is paramount to identify individuals who must be quarantined. The purpose of contact tracing is to block further transmission through rapid identification and management of possible secondary cases. The effectiveness of contact tracing depends on identification of all cases and contacts. 5 Contact tracing is widely recognized as an effective approach to limit the spread of outbreaks, including the COVID-19 pandemic. [6][7][8][9][10][11][12] However, information is lacking on its effectiveness (ie, how many cases can be prevented through contact tracing). To measure and compare interventions, health professionals and policy makers rely on effectiveness measures that are easy to understand and explain. One of the most intuitive measures is the number needed to treat, which is the number of individuals who need to be treated to prevent 1 undesirable outcome. [13][14][15][16] This concept has been adapted to evaluate the effectiveness of other interventions, such as vaccination, by using the number needed to vaccinate. [17][18][19][20][21] The concepts of number needed to treat and number needed to vaccinate can also be adapted to assess the effectiveness of quarantine.
Our study aimed to evaluate the effectiveness of contact tracing during the COVID-19 pandemic in the Espinho/Gaia (E/G) Public Health Unit (PHU) area in Portugal through 2 new measures: (1) number needed to quarantine (NNQ), which is conceptually similar to number needed to treat and number needed to vaccinate; and (2) proportion of prevented infectious days by quarantine (PPID), the number of potential infectious days prevented by quarantine divided by all infectious days. We also assessed various timings of public health interventions-namely, time from diagnosis to quarantine of contacts and time from onset of index case symptoms to contacts' quarantine-and we examined the sociodemographic characteristics of contacts for comparability with future studies. Study Design and Population The sample of our retrospective cohort study included the contacts of all people with confirmed COVID-19 residing in the E/G PHU area who were diagnosed July 1-September 15, 2020. We excluded from the analysis contacts of confirmed (index) cases who resided in a different area (ie, not the E/G PHU area). Contact Tracing Details and Public Health Intervention Portugal has followed the European Centre for Disease Prevention and Control's recommendations for COVID-19 prevention and control-namely, case definition, quarantine, risk criteria, and contact tracing. 22,23 All positive cases and laboratory results of mandatory notifiable diseases, including COVID-19, need to be immediately reported through the national epidemiologic surveillance system (online platform) and become immediately available to public health authorities. In Portugal, local public health authorities were responsible for contacting all people with confirmed COVID-19 residing in their geographic area and asking them to list all individuals with whom they came into contact from 2 days before symptom onset until the day they were contacted. As part of the pandemic response, the E/G PHU staff collected data during COVID-19 contact tracing and follow-up. PHU staff interviewed each person with confirmed COVID-19 (ie, confirmed case), generally within 24 hours after receiving a positive test result, to identify the transmission pathway and the people with whom the confirmed case was in contact (ie, at risk of developing COVID-19). PHU staff also monitored contacts for 14 days after their last contact with the confirmed case. All contacts were classified according to their level of risk (high or low). 11,24 High-risk contacts were those who had contact with a person with confirmed COVID-19 within 2 m for >15 minutes during a 24-hour period. High-risk contacts were quarantined and contacted daily for symptomatic evaluation for no longer than 14 days after their last exposure to a person with confirmed COVID-19. Low-risk contacts were those who had contact with a person with confirmed COVID-19 for <15 minutes, >2 m apart, in an outdoor or ventilated environment. Low-risk contacts were advised to monitor their symptoms and practice social restriction. PHU staff classified all contacts by exposure context as cohabitants, social, work related, school related, travel related, and health professionals. Social contacts include those who do not belong to any other category, such as family members who are not cohabitants.
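As a minimal illustration of the triage rule just described, the sketch below encodes the high-/low-risk classification; the 2 m and 15 minute thresholds come from the text, while the function and parameter names are ours:

```python
# Sketch of the E/G PHU contact-risk rule described above. Thresholds
# (2 m, 15 minutes within a 24-hour period) follow the text; everything
# else (names, signature) is illustrative.
def classify_contact(distance_m: float, cumulative_minutes: float) -> str:
    """Return 'high' or 'low' risk for a contact of a confirmed case."""
    if distance_m <= 2 and cumulative_minutes > 15:
        return "high"  # quarantined; contacted daily for up to 14 days
    return "low"       # advised to self-monitor and practice social restriction

print(classify_contact(1.5, 30))   # high
print(classify_contact(3.0, 10))   # low
```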
Testing Policy According to national guidelines, all people, regardless of their risk, were tested if they developed symptoms during quarantine that were compatible with clinical manifestations of COVID-19, such as signs or symptoms of respiratory infection (fever, new-onset or worsening dyspnea, shortness of breath or changed cough patterns, myalgia, headache), anosmia, dysgeusia, or ageusia. [6][7][8][9]23,25 During the study period, the E/G PHU tested all cohabitants as soon as a case was confirmed, as well as on the 8th through 12th days after their last contact. The remaining high-risk contacts were also tested on the 8th through 12th days after their last exposure. Infectiousness Period To measure the effectiveness of contact tracing, we first calculated the number of potential infectious days, defined as the period between the start of the infectiousness period and the diagnosis date (ie, positive reverse transcription polymerase chain reaction test). Given the uncertainty about the infectiousness period, we adopted 3 scenarios and compared the effectiveness of contact tracing for each: Scenario A: This scenario included the entire possible infectiousness period as previously described (ie, from 10 days before symptom onset to 10 days after symptom onset). [26][27][28][29][30] Scenario B: In this scenario, the infectiousness period spanned 3 days before symptom onset to 3 days after symptom onset. This scenario focused on highly infectious days because only a small proportion of transmission occurs at the ends of the transmission window. 28 Scenario C: In this scenario, the infectiousness period spanned 2 days before symptom onset to 10 days after symptom onset, following the European Centre for Disease Prevention and Control's recommendations for contact tracing. 11 For asymptomatic cases (ie, confirmed cases without symptoms), we calculated a potential date of symptom onset: for cohabitants, we added the median serial interval (ie, the time between illness onset of 2 consecutive cases [5 days]) to the index case symptom onset date; 31,32 for the remaining contacts, we added the median time from infection to onset of symptoms, or incubation time (6 days), to the last high-risk contact date. [33][34][35][36][37][38] We also calculated the generation time for the study sample, which corresponded to the median time between the date when COVID-19 was confirmed in the index case and the date when COVID-19 was confirmed in secondary cases (contacts with a positive test result). Effectiveness Measures We used the number of potential infectious days to calculate the NNQ and the PPID, but with different formulas and goals. NNQ measures the "cost" of quarantine-specifically, the number of person-days from all contacts who needed to be quarantined to prevent 1 potential infectious day: NNQ = (total person-days of quarantine) / (number of potential infectious days prevented by quarantine). PPID measures the benefit of quarantine-the share of all potential infectious days that were spent in (ie, prevented by) quarantine: PPID = (number of potential infectious days prevented by quarantine) / (total number of potential infectious days). Together, both measures provide a clear picture of contact tracing effectiveness by comparing the desired outcome (number of potential infectious days prevented) with its "cost" (ie, comparing PPID and NNQ, respectively). Statistical Analysis We calculated NNQ and PPID using the 3 infectiousness period scenarios. We assessed these measures by sociodemographic characteristics and types of contacts (ie, cohabitants, social contacts), as well as by various intervention timings-namely, time from diagnosis to quarantine of contacts and time from onset of symptoms in the index case to contacts' quarantine.
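A short sketch may help fix the bookkeeping. It encodes the three scenario windows and the onset-date imputation for asymptomatic cases exactly as described above; treating the diagnosis date as the endpoint of the potential infectious period follows the definition quoted earlier, while any capping at the scenario window's end is a refinement the text does not spell out:

```python
from datetime import date, timedelta

# (days before onset, days after onset) defining each infectiousness window
SCENARIOS = {"A": (10, 10), "B": (3, 3), "C": (2, 10)}

def impute_onset(index_onset: date, last_contact: date, cohabitant: bool) -> date:
    """Potential symptom-onset date for an asymptomatic case, per the text:
    cohabitants get index onset + median serial interval (5 days); other
    contacts get last high-risk contact + median incubation time (6 days)."""
    if cohabitant:
        return index_onset + timedelta(days=5)
    return last_contact + timedelta(days=6)

def potential_infectious_days(onset: date, diagnosis: date, scenario: str) -> int:
    """Days between the start of the infectiousness window and diagnosis."""
    before, _after = SCENARIOS[scenario]
    start = onset - timedelta(days=before)
    return max((diagnosis - start).days, 0)

# Example: symptomatic case with onset June 1, 2020, diagnosed June 4.
print(potential_infectious_days(date(2020, 6, 1), date(2020, 6, 4), "C"))  # 5
```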
We did not evaluate other types of contacts, such as school-related, work-related, health professional, and travel contacts, because they represented only 5% of contacts with a positive test result. We performed all statistical analyses using R version 4.0.3 (R Foundation). The study was approved by the Northern Region Health Administration Ethics Committee and abided by the Declaration of Helsinki. The Northern Region Health Administration Ethics Committee authorized the use of patients' data without written consent, because all data were previously collected for contact tracing and reused. Results From July 1 through September 15, 2020, a total of 152 people with confirmed COVID-19 were residing in the E/G PHU area. A total of 1582 contacts were identified; 849 (53.7%) were in the E/G PHU area and were included in the study (Table 1). Of the 849 contacts, 725 (85.4%) were considered high-risk contacts and quarantined. Nearly half of quarantined contacts were social contacts (48.1%, n = 349), followed by cohabitants (26.9%, n = 195), work-related contacts (14.5%, n = 105), and school-related contacts (9.9%, n = 72). However, in terms of quarantined contacts from the E/G PHU area who later received a positive test result for COVID-19 (n = 117), social contacts (50.4%, n = 59) and cohabitants (45.3%, n = 53) made up nearly the whole group. The median (interquartile range [IQR]) time from receiving a validated positive test result from the laboratory to the contacts' quarantine was 1.0 (1.0-2.0) day. The median (IQR) time from symptom onset in a confirmed case to contacts' quarantine was 5.0 (4.0-8.0) days. The median (IQR) time from last contact with a confirmed case to contacts' quarantine was 4.0 (2.0-6.0) days, ranging from a median of 1 day for cohabitants to 5 days for school-related contacts. The median time from diagnosis of a confirmed case to contact tracing was 1 day. We found a median (IQR) of 6.0 (3.0-10.5) high-risk contacts per confirmed COVID-19 case. The E/G PHU determined 6525 person-days of quarantine, with a median (IQR) of 9.0 (7.0-12.0) days per quarantined contact. Of the screened quarantined contacts living in the E/G PHU area, 117 of 725 (16.1%) received a positive test result (ie, newly confirmed cases), 36 (30.8%) of whom had symptoms at the start of quarantine (Table 1). The NNQ was 19.8 in scenario A, 41.8 in scenario B, and 23.0 in scenario C (Table 2). By sex, the NNQ was 20.1 in scenario A, 42.8 in scenario B, and 22.3 in scenario C among females and 19.4, 40.8, and 23.8 among males. Contacts aged ≥70 years and 0-17 years had the lowest NNQ estimates of all age groups, and among contact types, cohabitants had lower estimates than social contacts. Additionally, the presence of symptoms at the start of quarantine decreased the NNQ overall. We found 1677, 609, and 743 potential infectious days in scenarios A, B, and C, respectively, of which 330 (19.7%), 156 (25.6%), and 284 (38.2%) were spent in quarantine (ie, PPID) (Table 3). By sex, the PPID was 16.8% in scenario A, 21.7% in scenario B, and 34.7% in scenario C among females and 23.9%, 31.2%, and 43.1% among males. Contacts who were aged 18-29 years had the highest PPID estimates across all age groups. Contacts without symptoms at the start of quarantine had a higher PPID estimate than contacts with symptoms. Additionally, cohabitants had higher PPID estimates than social contacts overall (ie, contacts with or without symptoms), although the opposite was true among contacts with symptoms.
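Using only the aggregates reported above (6525 person-days of quarantine; 1677, 609, and 743 potential infectious days per scenario, of which 330, 156, and 284 were spent in quarantine), the definitions of the two measures reproduce the reported values exactly:

```python
# Reproducing the headline NNQ and PPID estimates from the aggregates
# reported in the Results: (total potential infectious days, days prevented).
quarantine_person_days = 6525
totals = {"A": (1677, 330), "B": (609, 156), "C": (743, 284)}

for scenario, (infectious_days, prevented) in totals.items():
    nnq = quarantine_person_days / prevented   # person-days per prevented day
    ppid = prevented / infectious_days         # share of infectious days prevented
    print(f"scenario {scenario}: NNQ = {nnq:.1f}, PPID = {ppid:.1%}")
# scenario A: NNQ = 19.8, PPID = 19.7%
# scenario B: NNQ = 41.8, PPID = 25.6%
# scenario C: NNQ = 23.0, PPID = 38.2%
```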
The evolution of NNQ and PPID according to the number of days since diagnosis (Figure, A and C) and since symptom onset of the index case (Figure, B and D) showed an increasing NNQ and a decreasing PPID. NNQ estimates increased with the delay from diagnosis of a confirmed case to quarantine of contacts, while PPID decreased as this interval increased. NNQ estimates increased with time from symptom onset of the confirmed case to quarantine of contacts, with a steep increase after day 4. PPID estimates decreased as this interval increased, presenting a sharp drop on day 5, after which they were stable. Discussion This study provides information to reinforce and improve contact tracing by gathering a sample of contacts who received follow-up from quarantine until discharge, during a period of 2.5 months. In our sample, we found a median of 6 quarantined contacts per case, with a median of 9 days of quarantine. We calculated our own transmission-dynamic periods for contacts who received a positive test result: a generation time value of 3 days (ie, the time between an individual's infection and the moment that the person infects another) and a serial interval value of 4 days. The generation time is 2 days shorter than what is described in the literature, 28 which can represent the impact of contact tracing in anticipating diagnosis by 2 days, as contacts were tested not only if symptomatic but also when quarantine was established (for cohabitants) and on the 8th through 12th days (for all high-risk contacts). These measures allow calculation of the infectiousness period in asymptomatic cases and comparison with future studies. This study proposed 2 measures for assessing the effectiveness of contact tracing that can be continuously monitored. By considering all contacts, NNQ provides information on the effectiveness and efficiency of contact tracing (ie, estimating the number of person-days of all contacts needed to be quarantined to prevent 1 infectious day), meaning that a lower value is desirable. PPID complements NNQ by focusing on evaluating the proportion of infectious days that were prevented by quarantine out of the total number of infectious days, meaning that a more effective intervention will have a higher PPID. The ideal combination should be a low NNQ and a high PPID so that the intervention is effective and "cost" effective. Given the uncertainty in the infectiousness period, we considered 3 scenarios. Scenario A includes the longest infectiousness period but may overestimate the effect of quarantine because of the residual transmission that occurs at both ends of the infectiousness period. Scenario B includes the most likely infectiousness period because it is stricter. As recommended by the European Centre for Disease Prevention and Control, scenario C evaluates the basis for the contact tracing process used in Portugal but may underestimate the presymptomatic infectiousness period. Although we obtained different magnitudes by scenario, we found similar patterns when comparing the groups by demographic characteristics (eg, age, type of contact).
In this study, we determined that to prevent 1 potential infectious person-day, 19.8-41.8 person-days of quarantine (NNQ) are needed, depending on the scenario. Contact tracing prevented 19.7%-38.2% of all preventable infectious person-days (PPID), thus limiting disease spread. We identified subgroups in which contact tracing effectiveness was higher, and these subgroups may be considered a priority for contact tracing. Contacts aged ≥70 years had the lowest NNQ estimates, with contact tracing and subsequent quarantine being more efficient than for other age groups; yet, PPID estimates in this age group were not higher than the overall PPID estimates for each scenario. This apparent contradiction may be the result of the low number of contacts in this age group. For contacts aged 18-29 years, we found the highest estimates of the PPID and NNQ. While NNQ estimates may depend on the large number of contacts in this age group, quarantine helped to prevent a large number of infectious days, as shown by the high PPID estimates. The presence of symptoms at the start of quarantine implied a reduction of the NNQ by about half. However, PPID was lower for already symptomatic contacts, which was expected given that the infectiousness period varies according to the date of symptom onset. Cohabitants had lower NNQ but higher PPID as compared with social contacts, highlighting another priority group. This study also evaluated the effectiveness of contact tracing according to public health intervention timings. NNQ and PPID estimates varied with the time from diagnosis of a confirmed case to quarantine of contacts (ie, NNQ increased with time from diagnosis of a confirmed case to quarantine of contacts, and PPID decreased). Moreover, both measures showed that postponing quarantine by 1 day had a marked impact on contact tracing effectiveness, by tripling the NNQ estimate and halving the PPID estimate, thereby reinforcing the importance of timely intervention. In addition, contact tracing effectiveness decreased with time from symptom onset of the index case to quarantine of contacts. Figure. Estimates of number needed to quarantine (NNQ) (A and B) and proportion of prevented infectious days by quarantine (PPID) (C and D), according to time from index case diagnosis to contacts' quarantine (A and C) and time from index case symptom onset to contacts' quarantine (B and D), in 3 scenarios of COVID-19 exposure in a study of 2 measures used to assess the effectiveness of contact tracing, Portugal, July 1-September 15, 2020. NNQ is the number of quarantine person-days needed to prevent 1 potential infectious person-day. PPID is the number of potential infectious days prevented by quarantine divided by all infectious days. We considered 3 infectiousness periods: 10 days before to 10 days after symptom onset (scenario A), 3 days before to 3 days after symptom onset (scenario B), and 2 days before to 10 days after symptom onset (scenario C).
The minimum NNQ estimate was on the eighth day, explained by an outbreak on a long-course bus trip (n = 24 positive contacts). On another note, contact tracing effectiveness decreased on day 4 or 5 after the onset of index case symptoms, which is compatible with our estimated serial interval, providing a possible opportunity for forward tracing (ie, screening contacts of high-risk contacts). Thus, future studies should assess whether immediate testing and forward tracing of secondary contacts should be considered, if resources are available, to quickly deter disease spread. Also, given the various phases that each country or region goes through during a pandemic, these measures may be useful to inform and guide prioritization of stages and groups for thorough contact tracing. Limitations This study had several limitations. First, our results might be underestimated because our local contact tracing seemed to anticipate diagnosis by 2 days, with a generation time that was 2 days shorter than the median generation time found in other studies (ie, 5 days) in which surveillance was not implemented. 28 In fact, because the diagnosis date was the endpoint used to calculate the number of potential infectious days, the early diagnosis provided by timely contact tracing reduced the number of potential infectious days prevented. Second, these data refer to a period in which public health measures (eg, lockdowns) were not as strict as during other periods of the pandemic. Third, contact tracing itself has limitations: some contacts might not have been reached (eg, information errors and/or memory bias); there was information bias related to reporting of the symptom onset date, despite our extensive symptoms questionnaire; and some contacts were already self-isolated (considered quarantined), while others might not have adhered to quarantine measures. Finally, NNQ may vary because of small numbers or larger outbreaks, which highlights the need to consider this value alongside PPID and the need for additional and larger studies. Conclusions Our study showed that contact tracing effectiveness (and "cost" effectiveness) is higher when contacts are quarantined within 24 hours of a COVID-19 diagnosis or within 4 to 5 days of symptom onset in a person with confirmed COVID-19. Furthermore, our proposed measures can be important tools for decision making and prioritizing contact tracing. Because contact tracing effectiveness was high among cohabitants and symptomatic contacts, public health agencies should prioritize these groups when resources are scarce. Additionally, NNQ and PPID can be useful measures for communication with the general population, policy makers, and clinicians because they are easy to understand and use to assess the impact of health interventions.
Occupational Therapy using Rapid Prompting Method: A Case Report Individuals with autism spectrum disorders who are nonverbal or have significantly limited verbal ability often demonstrate difficulties with learning and communication that impact their ability to participate in everyday, functional activities. Healthcare providers and educators who provide intervention for individuals with autism spectrum disorders utilize a variety of interventions and treatment techniques while tailoring their interventions to consider the unique needs of the individual with autism. This case report reviews how incorporating Rapid Prompting Method, a relatively new teaching technique for individuals with autism spectrum disorders, into occupational therapy treatment for a young adult male with autism and significantly limited verbal ability improved his functional participation, including communication, behavior, and engagement in routine activities of daily living. Introduction The incidence of Autism Spectrum Disorders (ASD) has grown to 1 in 68 children in recent years [1]. Individuals with ASD are affected differently, and there is wide variance in the types of challenges children with ASD exhibit. It has been estimated that 25-40% of children with ASD do not speak or have only a few words [2,3]. A recent report estimated that 46% of children with ASD have average to above-average intellectual functioning [1]. Previously, autistic children with significantly limited verbal skills were thought to have severe cognitive impairments, but recent research has found that traditional IQ tests are not appropriate for these individuals and that children judged as "low functioning" have more cognitive potential than previously thought [4]. Therefore, interventions for autistic individuals need to be selected and customized based upon the unique strengths and deficits of the individual with ASD, regardless of their perceived cognitive abilities or verbal skills. Some interventions for autistic individuals have a substantial amount of research supporting their use, whereas other, newer intervention techniques require research exploring their effectiveness. Rapid Prompting Method (RPM) is a relatively new teaching technique for children with ASD. It is a learning process through which attention, memory, retrieval, motor functions, and communication can be targeted and improved [5]. RPM is individualized to each participant's sensory learning needs. In addition to the therapist, parent, or other adult asking questions or giving verbal directions to the individual with ASD, sensory prompts (including visual, auditory, tactile, and kinesthetic) are incorporated to facilitate learner participation and understanding. Some examples of these prompts might be tearing the paper (auditory), writing key words or choices on small pieces of paper (visual), tapping the pencil on the written key words or choices (auditory), increasing the pace of questions or communication (auditory), touching the choices to the participant's hand prior to the individual making a selection (tactile), or shaking the paper choices to gain the participant's attention (visual) [5]. These prompts provide increased sensory input, which is utilized to help participants initiate a response as well as to decrease self-stimulatory behaviors that may distract them from active learning. As the participant gains accuracy and confidence with use of the technique, the prompts are faded to encourage independence [5].
RPM is tailored to each participant's current learning level, and information is frequently taught to the individual based on age level [5]. In addition to presenting information using sensory cues that support the participant's most effective channel of learning, the instructor typically speaks at a rapid rate when presenting material in order to stimulate the participant and encourage engagement in the learning process. Consistent verbal flow is utilized to build confidence and to give positive feedback to the participant. Participants are taught about a particular topic and then asked to retrieve and express what they have learned through their most effective motor output [5]. For many participants, output begins with choosing between 2 written answers placed on the table and progresses to more complex motor outputs, including making choices in a field of 3-4 written choices, pointing to letters on a stencil followed by a letter board (9-26 letters), and eventually use of a keyboard [5]. The method increases in complexity as the participant gains understanding and the ability to make more complex motor movements. As the participant becomes more independent with use of the method, the instructor is able to fade the amount of prompting he/she provides in order for the individual to be more independent with the method. Little research has been conducted on RPM to date. While some have compared RPM to facilitated communication (which has been shown to be an ineffective communication strategy for individuals with ASD), RPM differs from facilitated communication; when individuals are learning to use RPM, they may be given verbal prompts to increase the accuracy of their responses, but they are not given physical assistance to influence choices, and materials are not manipulated to increase their accuracy. Preliminary research on RPM has produced some positive results. An exploratory study of RPM in autistic children 8-14 years of age found that RPM was associated with a decrease in repetitive behaviors and an increase in the number of choices an individual could choose between while still having a similarly successful response rate to questions [6]. Occupational therapy (OT) is a healthcare service that is frequently utilized by individuals with ASD [7]. OT services are individualized and based upon the client or caregiver's goals for enhancing participation in functional activities [8]. Goals may focus on improving an individual's ability to engage in activities of daily living (such as grooming or dressing), instrumental activities of daily living (such as preparing a meal or doing laundry), education, work, leisure, social participation tasks, etc. [8]. As a result of the wide variety of goals that OT services may address, occupational therapists must utilize a variety of treatment techniques to meet the individual needs of each of their clients. Outcomes of OT intervention may be improved client performance or participation in meaningful activities, adaptation to various environments or tasks, increased personal satisfaction with one's skill level, and increased participation in life roles [8]. Although it was originally developed as a method for use in educational settings, RPM may be used as a teaching tool during OT treatment to maximize an individual's ability to participate and to promote functional outcomes. Use of RPM within OT treatment sessions allows the therapist to match his/her teaching to the participant's style of learning.
It utilizes multiple sensory channels and places greater emphasis on the sensory channel that is best for learning at the moment of teaching [5]. RPM utilized as a teaching technique during OT treatment sessions can help to promote improved therapeutic outcomes. There is currently no published research on the use of RPM as a therapeutic tool to improve participation or outcomes in OT, and there is limited research regarding the effectiveness of RPM as a teaching technique for individuals with ASD. RPM was noted to be one intervention technique utilized within an educational setting in an article published by Shoener et al. [9]; however, use of RPM as a tool during OT was not specifically discussed. As a result, there exists a strong need for research into the use of RPM as a therapeutic tool for persons with ASD as well as how it can be utilized within OT intervention for individuals with ASD. Therefore, the purpose of this case report is to describe how incorporating RPM within OT treatment sessions improved a young adult male's functional participation, specifically increased self-confidence impacting his ability to participate in routine self-care activities and activities around his home (such as helping with laundry and washing his face), improved communication abilities, and increased participation in desired leisure activities. Research design A case report design is utilized in this research report. Case studies are useful for building foundational work for science and research of new and unique methods [10]. Due to the paucity of research available regarding RPM as an intervention technique, the case report methodology allows for incorporating a thorough description of RPM as a treatment technique within OT and how RPM affected this client's outcome. Participant E.D. is a 22-year-old male with a diagnosis of ASD. He has participated in outpatient OT frequently throughout his lifespan, with limited results from traditional OT intervention. E.D. was referred for a re-evaluation by his previous occupational therapist due to concerns with his functional skill acquisition, including consistent access of his communication device, participation in activities of daily living, behavior, and social engagement. At his OT evaluation in October 2013, E.D.'s mother expressed concerns with E.D.'s participation in functional self-care skills, such as washing his face, his handwriting and keyboarding skills, behavior, and functional communication. During the evaluation, E.D. demonstrated the ability to isolate his index finger to type using a "hunt and peck" method; he was able to type his name on a keyboard independently. E.D. maintained his grasp on a pencil for short durations to engage in graphomotor tasks. His writing was large with inconsistent legibility, including ability to maintain letters within boundaries/on the lines. E.D. was able to follow basic, one-step directions but required consistent prompting to remain engaged in tasks that were not highly motivating. Due to difficulties with functional communication, E.D. was working on using a communication program on his iPad called Word Power. At the time of the evaluation, he was able to use his iPad to make simple requests for highly motivating items, such as food or drink. His speech language pathology notes indicated that at the time of his OT evaluation, E.D. was able to use his communication device to label actions and emotions with approximately 50% accuracy when given cueing. E.D.
was able to vocalize one-word answers or short phrases ("yes", "gift shop"), at a whisper volume only, if he was prompted to vocalize these words. During the summer of 2013, E.D.'s speech pathologist attempted to complete standardized testing activities with E.D. to assess his receptive and expressive language abilities. These notes indicate E.D. would not cooperate with the majority of standardized testing activities; he was able to demonstrate receptive vocabulary up to that of a child 6 years of age. Measure: The Canadian occupational performance measure (COPM) The Canadian Occupational Performance Measure (COPM) [11] identifies and prioritizes individual occupational performance problems and evaluates the change in these performance problems [12]. The client or a proxy (caregiver) identifies problems in the areas of daily occupations and then ranks each problem in order of personal importance [11]. Next, the participant rates each problem for performance and satisfaction on a scale from 1 to 10, with "10" being the highest [11,13]. The performance and satisfaction are then reassessed at a later date [14]. The COPM has demonstrated reliability and validity as an outcome measure in a variety of OT practice settings [15,16]. Goals identified for E.D. included increasing his initiation and involvement in daily self-care activities and life skills, and improving his expressive communication skills, including keyboarding skills, use of his communication device, and handwriting accuracy. As discussed above, RPM was developed as an educational method for persons with ASD and can be utilized to promote skill acquisition in academics, functional skills, motor skills, and communication. Lesson plans are developed from a variety of academic areas to give an individual a solid foundation from which he/she can develop reasoning and understanding [5]. When using RPM, the instructor presents material using a variety of sensory supports or prompts to maximize the participant's ability to take in and process the information. This helps overcome the individual's preferred stimming behaviors at the time of learning. Although RPM was initially developed for use in an academic setting, its principles can be directly applied to skills being taught within OT treatment sessions. OT practitioners help people of all ages participate in activities and tasks that promote both independence and engagement in purposeful activities. RPM can be incorporated into this setting with persons with ASD to help teach the skills needed to gain greater independence with life skills. The skills addressed in OT can also help to strengthen a participant's ability to use RPM successfully across environments (for example, an individual may work on tasks to improve motor planning during OT sessions that improve his/her ability to access a letter board or keyboard). During E.D.'s initial sessions of OT, his mother stated that she would like him to improve his ability to engage in functional tasks and his ability to express himself. E.D. initially used written choices during his sessions and then was able to transition to use of a stencil with a field of 26 letters, followed by a laminated letter board with a field of 26 letters. E.D. was noted to be an auditory learner. E.D. was able to engage in seated work tasks with intermittent tactile sensory breaks (bubbles, use of therapy putty) for the duration of a therapy session (approximately 50-60 minutes). His family members were present for all sessions so they were able to help E.D.
generalize what he had learned into his home environment. During his sessions, E.D. responded well to a constant flow of auditory input from the therapist. In E.D.'s first sessions, it was determined that he was better able to access the letter board when it was held to the right of his midline, midway between his shoulder and his waist, centered on the placement of his right hand, due to his visual preference and right-hand dominance. E.D. used his finger or a pencil to touch choices or letters on a laminated letter board containing all 26 letters in alphabetical order. As his sessions progressed, E.D. was able to sequence a series of letters into words and then sentences when the letter board remained stationary at an angle in front of him. He was able to consistently answer questions related to material that was presented during his sessions as well as questions about participation in daily activities, such as how to increase his independence with washing his face, and more open-ended questions, such as what his goals were for the future. Prompts were built into E.D.'s learning process to encourage continual initiation and execution of the movement pattern needed to access his letter board, including visual prompts (placement of the board in front of him when it was his turn to engage), auditory prompts (use of encouraging phrases including "that's it", "keep going", "lift your elbow"), and tactile prompts (placement of the pencil in his hand only when it was time for him to respond). The same type of prompting could then be generalized to teach functional skills in other settings. E.D. is continuing to work toward transitioning his skills to other adults/caregivers and accessing the letter board while it is on a stable surface instead of held in front of him. He is also working toward use of a letter board in standard keyboard format (as opposed to the letters being in alphabetical order) to match the keyboard on his home computer and the keyboard on his communication device. Results OT intervention for individuals with ASD does not generally include formal pre- and post-testing, as many individuals with ASD are not appropriate candidates for the standardized testing typically utilized by occupational therapists. The COPM was utilized to establish goals for OT intervention and to assess parent perception of E.D.'s progress toward goals during the course of this episode of therapy. The COPM was completed with E.D.'s mother two times during this episode of OT using RPM. Using the COPM, E.D.'s mom identified goals for treatment and gave a numeric score (from 1-10) regarding her perception of E.D.'s performance of these goals and her satisfaction with his performance; she then gave re-assessment scores approximately 3-6 months after setting these goals. On both occasions, E.D.'s mother's scores indicated a clinically significant change (greater than two points) for all goals (4-5 goals each time) that she had identified for E.D. to work toward during OT. Additionally, E.D. demonstrated improved functional communication abilities and receptive and expressive language skills. As discussed previously, during standardized testing administered by E.D.'s speech language pathologist in 2013, E.D. was unable to complete most standardized testing activities. He demonstrated receptive speech skills up to age 6, identifying 7/10 words correctly. E.D.'s speech therapist re-administered standardized testing in March 2015, and E.D. was cooperative in completing standardized testing activities using RPM.
Per his speech therapy note in the medical record, E.D. completed portions of the Expressive One-Word Picture Vocabulary Test [17] using RPM. He was able to correctly identify 16/20 words in the 15-year to 18-year, 11-month age range; examples of words that E.D. was able to identify during standardized testing were microscope, hammock, Africa, spices, funnel, scroll, scales, bulldozer, hexagon, column, and stethoscope. This represents a clinically meaningful change, both in his willingness to participate in standardized testing in speech therapy and in his ability to identify and express advanced vocabulary words, in comparison to his previously measured expressive and receptive language abilities. During OT sessions, E.D. is now able to accurately respond to close-ended and open-ended questions related to the material presented during the session as well as to discuss other topics. He demonstrates the ability to sequence letters to form both words and sentences. E.D. uses his index finger or a pencil to touch letters to spell words. E.D. is able to engage in RPM for up to 50-60 minutes when provided with sensory supports and occasional short breaks from therapy tasks. He demonstrates minimal to no self-injurious behaviors during his OT sessions incorporating RPM. Subjectively, E.D., his mother, and his two sisters (who are his caregivers) report many improvements since E.D. began OT using RPM. His mother and sisters report that E.D. is now able to follow verbal directions to complete chores, such as getting sheets from the dryer, spraying clothes with stain solution, and putting food away, when he was not previously able to do these things. They report that E.D. seems to be talking more frequently and using a more powerful voice instead of whispering when he talks, and he now participates in making decisions at home. They also report improvements in E.D. being more focused overall and more deliberate in his actions, resulting in better sequencing of steps to complete tasks. A big area of improvement they noted is that E.D. is more accepting when he makes mistakes, with improved behavior and less self-injurious behavior overall. Finally, E.D.'s caregivers report that he seems to be more empathetic and that his personality is more evident since beginning OT using RPM. E.D. reports improvements since beginning to use RPM as well. Using a laminated 26-letter board in alphabetical order, E.D. was able to communicate about why he likes using RPM. He reported, "I like having to not get mad so much," and "I can say so many things I couldn't say before... So many amazing things... Like I love my mom."

Discussion

The purpose of this case report was to describe how RPM can be utilized in OT for individuals with ASD and how using RPM improved this young man's functional outcomes. E.D. had participated in traditional outpatient OT sessions on and off throughout the course of his life, and while he made some improvements in OT, such as learning to tie his shoes and write his name, E.D. continued to have a variety of difficulties impacting his ability to participate in everyday functional activities. Once E.D. began participating in OT using RPM, he made significant improvements in his functional communication, decreased his aggressive and self-injurious behaviors, and increased his participation in activities of daily living and instrumental activities of daily living, such as meal preparation, face washing, and laundry tasks.
While this case report specifically highlights how an occupational therapist utilized RPM when working with a client with ASD who had significantly limited verbal ability, it has applicability to a variety of professionals who provide intervention for individuals with ASD. E.D. is similar to many clients with ASD who receive OT intervention and other services. Individuals with ASD have varying verbal abilities, and those with significantly limited verbal ability or those who are nonverbal are often perceived to have lower cognitive abilities. In actuality, these individuals may need a less traditional intervention in order to help maximize their functional skills, learning, and communication. RPM demonstrates promise as a tool to help individuals with ASD who are nonverbal or have significantly limited verbal ability to learn information and to communicate. There are several limitations to this research paper. First, a case report is only the account of a single individual and the impact that the intervention had on him, and therefore the ability to generalize the findings is limited. Additionally, although the findings were clinically significant, no formal testing was completed outside of the COPM and the standardized testing completed by an SLP, and data are not analyzed for statistical significance in a case report. Given these limitations, particularly in regard to study design, this case report provides preliminary support for the benefit of using RPM with a young adult male with ASD who had significantly limited verbal abilities. RPM appears to be a promising teaching tool for individuals with ASD who are nonverbal or have significantly limited verbal ability. While this case report demonstrates the effectiveness of using RPM with an adult male with ASD who had significantly limited verbal abilities, additional studies with more rigorous designs will be needed to substantiate the use of RPM with individuals with ASD.
Analysis of acute lymphoblastic leukemia drug sensitivity by changes in impedance via stromal cell adherence

The bone marrow is a frequent location of primary relapse after conventional cytotoxic drug treatment of human B-cell precursor acute lymphoblastic leukemia (BCP-ALL). Because stromal cells have a major role in promoting chemotherapy resistance, they should be included to more realistically model in vitro drug treatment. Here we validated a novel application of the xCELLigence system as a continuous co-culture to assess long-term effects of drug treatment on BCP-ALL cells. We found that bone marrow OP9 stromal cells adhere to the electrodes but are progressively displaced by dividing patient-derived BCP-ALL cells, resulting in reduction of impedance over time. Death of BCP-ALL cells due to drug treatment results in re-adherence of the stromal cells to the electrodes, increasing impedance. Importantly, vincristine inhibited proliferation of sensitive BCP-ALL cells in a dose-dependent manner, correlating with increased impedance. This system was able to discriminate the sensitivity of two relapsed Philadelphia chromosome (Ph) positive ALLs to four different targeted kinase inhibitors. Moreover, differences in sensitivity of two CRLF2-driven BCP-ALL cell lines to ruxolitinib were also seen. These results show that impedance can be used as a novel approach to monitor drug treatment and sensitivity of primary BCP-ALL cells in the presence of protective microenvironmental cells.

Introduction

Normal B-cell lineage development occurs in bone marrow, where association with stromal cells regulates the early stages of progressive B-cell lineage maturation and selection [1][2][3]. Human B-cell precursor acute lymphoblastic leukemia (BCP-ALL) also originates in bone marrow, which is also a frequent location of primary relapse after conventional cytotoxic drug treatment. BCP-ALL cells are protected against chemotherapy by association with stromal cells. Such stromal cells act as a communication and instruction network, regulate local cytokine and chemokine levels, secrete extracellular matrix, and overall create a microenvironment enhancing leukemia cell survival and proliferation [4][5][6][7][8]. We model this ex vivo by co-culture of human patient-derived xenograft (PDX) and primary BCP-ALL cells with OP9, murine bone marrow-derived stromal cells (mesenchymal stromal cells, MSC) that are widely used to support human stem cells and human T-cell development [9][10][11][12]. In this co-culture system, OP9 monolayers secrete the chemokine SDF1α, a known migration factor for human BCP-ALL [13,14]. The leukemia cells migrate towards the monolayer, nestle underneath the stromal cells, and proliferate at that location. When BCP-ALL cell numbers sufficiently increase, they also migrate into the cell culture medium. When such co-cultures are used to follow drug sensitivity of BCP-ALL cells over time, the viability or numbers of these floating cells are typically what is counted. Therefore, although this system is able to represent some aspects of drug treatment, it does not score the effects of drugs on the BCP-ALL cells that are in the most proximal contact with the stroma, at the location where they are expected to be maximally protected [15]. To address this problem, we investigated the possibility of a novel approach to measure the effects of chemotherapy treatment on BCP-ALL cells in a co-culture setup using the xCELLigence system.
That system, which continuously monitors cell proliferation while allowing the leukemia cells to remain undisturbed in their microenvironment, was evaluated and compared to traditional 2D methods.

Cells and culture

The murine OP9 bone marrow stromal cell line (CRL-2749) and the human HS5 bone marrow stromal cell line (PCS-500-041) were obtained from the American Type Culture Collection (ATCC, Manassas, VA, USA). Cells were grown in OP9 medium (see below) and used at an early passage. In co-culture experiments, these cells were mitotically inactivated by irradiation for 16.7 min (74.31 Gy) using a 137Cs irradiator. Alternatively, OP9 cells that had been plated and allowed to adhere overnight were treated with 10 μg/ml mitomycin C (Sigma, Cat# M4287) for 3 hr in complete medium, washed with medium, and used the next day for support of human BCP-ALL cells. For Transwell co-cultures, OP9 cells were plated in the bottom chamber and were separated from the leukemia cells by a 0.4 μm membrane filter (Corning, Cat# 3381).

Ethics statement

All human specimen collection protocols were reviewed and approved by the Children's Hospital Los Angeles Institutional Review Board (IRB) and Committee on Clinical Investigations (CCI). All methods were performed in accordance with the relevant guidelines and regulations. Collections were in compliance with ethical practices and IRB approvals. All specimens were deidentified/anonymized before acquisition for research. BM61, BM47, and BM13 were collected as leftovers of samples that were initially collected for clinical diagnostic purposes and were discarded as medical waste when no longer needed for clinical purposes. US7, ICN13, BLQ1, and BLQ5 have been previously described [14,16-18].

Cell growth and proliferation assay under drug treatment using the xCELLigence system

The xCELLigence system (RTCA DP, Agilent Technologies, Santa Clara, CA, USA) was used to analyze BCP-ALL cells in co-culture with stromal cells. The system consisted of RTCA Resistor E-Plate 16 devices (Agilent Technologies, Santa Clara, CA, USA, Cat# 380601050) placed within the RTCA DP device housed in a humidified atmosphere with 5% CO2 at 37°C, and RTCA software (version 2.0) that was used to verify system conditions, monitor samples, and analyze data. The E-Plate 16 devices are cell culture plates that have gold microelectrodes embedded within each well (Fig 1A), allowing readings to be taken by the RTCA DP device. The wells of the E-Plate 16 devices connected to the xCELLigence system were coated with 10 μg/mL fibronectin (Thermo Fisher Scientific, Waltham, MA, USA, Cat# PHE0023) or 0.2% gelatin in DPBS (Sigma Aldrich, St. Louis, MO, USA, Cat# G1393-100ML) for 30 minutes at 37°C. Once the coating was removed, 50 μL of complete MEM-α medium was added to each well and a background reading was recorded with the analyzer. OP9 cells (10,000 cells/well in 50 μL complete MEM-α medium) were then seeded into each well and the plates were left at room temperature in a laminar flow hood for 30 minutes to allow the OP9 cells to settle. The plates were then placed back into the analyzer to allow the OP9 cells to adhere overnight, establishing the feeder layer, and to monitor them. Human leukemic cells were added to the wells with the established OP9 feeder layer (except where indicated in the figure legend, 1 × 10^5 cells/well in 100 μL complete MEM-α medium) and left undisturbed for another 24 hr to allow migration underneath the feeder cells.
Anticancer drugs (in 50 μL complete MEM-α medium) were added after 24 hr of co-culture and impedance was measured by the RTCA software at 15 min intervals over a period of 7 days. The real-time cell index, representing changes in impedance from the initial background reading, was determined using the RTCA DP software, and a histogram representative of the cell index over 7 days was generated for each BCP-ALL/drug combination.

Viability and apoptosis monitoring

Cell index values were derived from impedance readings of the E-Plate 16 devices. These plates were subsequently used for cell counting and apoptosis monitoring, along with the wells from the 96-well tissue culture plates and HTS 96-well Transwell plates. Cells were harvested from plate wells by pipetting and wells were washed once with 200 μL PBS. 10 μL of sample was mixed with 10 μL of 0.4% Trypan blue (Gibco, Thermo Fisher Scientific, Waltham, MA, USA, Cat# 15250061) by gently pipetting, and then 10 μL of the mix was loaded into a chamber of the hemocytometer. Manual counts were performed under a 10x objective according to standard methodologies. Annexin V assays were performed by resuspending the rest of the cells in 1x Annexin V Binding Buffer (BD Biosciences, Franklin Lakes, NJ, USA, Cat# 556454) and staining with Annexin V-FITC (BD Biosciences, Cat# 556419) and PE Mouse Anti-Human CD45 (BD Biosciences, Cat# 561866) for 15 min at room temperature in the dark. Stained cells were washed and suspended in binding buffer. The final cell suspension was stained with 7-AAD (Thermo Fisher, Cat# 00-6993-50) for 5 min at room temperature in the dark and was processed through a flow cytometer (BD FACS Canto II). Data were processed using the instrument's software.

Derivation of Cell Index (CI)

As described previously, a unit-less parameter termed cell index (CI) was derived to represent cell status, based on the measured relative change in electrical impedance detected by sensor electrodes from the initial background reading [19].

% Displacement calculation, IC50 calculation, and statistical analysis

The % Displacement for a drug-treated BCP-ALL sample was calculated according to the following formula:

% Displacement = [Cell Index (OP9 alone) − Cell Index (treated sample)] / [Cell Index (OP9 alone) − Cell Index (OP9 plus BCP-ALL)] × 100

IC50 was automatically calculated using the RTCA software. The Levenberg-Marquardt method is used to fit the concentration-Cell Index data, (C1, CI1), (C2, CI2), ..., (Cn, CIn), to the 3-parameter or 4-parameter logistic dose-response model to derive the IC50 values. Prism 8 (GraphPad, San Diego, CA, USA) was used to calculate the correlation between Cell Index (CI) and apoptosis analysis parameters as well as live cell counts. Statistical significance was determined using one-way ANOVA, followed by Dunnett's multiple comparison test using OP9 plus BCP-ALL samples as control. To compare values between different time points, plates, and drug conditions, statistical significance was determined using two-way ANOVA, followed by Bonferroni post-tests.

BCP-ALL cell proliferation in co-culture can be measured through feeder cell displacement

The xCELLigence system works by measuring electron flow transmitted between gold microelectrodes in the presence of an electrically conductive solution such as tissue culture medium (Fig 1A). Adhering cells disrupt the interaction between the electrodes and the solution, thus impeding electron flow.
This resistance to alternating current is referred to as impedance, which can be described in arbitrary units called cell index (Fig 1B). Since bone marrow stromal cells are typically adherent cells that have a relatively large footprint, while BCP-ALL cells are non-adherent, we asked if it would be possible to measure the migration of the BCP-ALL cells to the location underneath the MSC through the displacement of these stromal cells off the substratum to which they attach, an effect that should be measurable with the xCELLigence system. This system uses tissue culture wells that have electrodes in the bottom, onto which adherent cells can be plated [21,22]. Adhesion and reattachment of the stromal cells can then be monitored over time, without any label, as an increase over time in the impedance (resistance) of an alternating electric current that flows through the medium to the electrodes (Fig 1). We plated mitotically inactivated OP9 BM MSC in this system. As shown in Fig 2A (blue line), these cells provided a high and constant impedance signal after they had attached to the electrodes. As expected, US7 human PDX-derived BCP-ALL cells, when added alone to a well, did not change the impedance (Fig 2A, grey line), as these cells remained floating in the medium and did not migrate to or adhere to the electrodes, even when these were coated with gelatin or fibronectin. Importantly, when US7 cells were added to wells that had been seeded with OP9 cells, there was a marked drop in impedance within the first 24-36 hours (Fig 2A, red line) as the leukemia cells migrated to and pushed underneath the OP9 cells, displacing their firm contact with the electrode. Since the impedance progressively decreased with time, this suggested that the leukemia cells were increasing in numbers. To test this, we next added different amounts of US7 cells to a constant number of plated OP9 cells. As shown in Fig 2B, there was a clear dose-response, with lower numbers of US7 cells causing a smaller drop in impedance than higher amounts of cells, confirming that differences in impedance measure differences in the number of BCP-ALL cells underneath the stromal cells. The concordance between the reduction in impedance and the proliferation of US7 cells was also supported by bright field images (Fig 2C).

Fig 2. Impedance measurement of OP9 stromal cell adhesion and displacement by BCP-ALL cells. (A) Impedance was measured over 7 days in wells containing OP9 alone (10,000 cells/well), US7 alone (50,000 cells/well), and US7 plated on OP9 feeder cells as indicated (n = 2). When added to the well, OP9 stromal cells adhere to the electrodes and increase impedance (Cell Index) over time. PDX-derived BCP-ALL cells (US7) do not attach to the electrodes and do not affect the measured impedance; however, when co-cultured with OP9, they cause a reduction in impedance. (B) Impedance in co-culture of OP9 cells with different amounts of US7 cells as indicated below the figure (n = 2; triangle = no cells, diamond = 50,000 cells, square = 5,000 cells, circle = 500,000 cells). The reduction in impedance is proportional to the number of added US7 cells. (C) Bright field images of OP9 alone (10,000 cells/well) or OP9 and US7 BCP-ALL cells (50,000 cells/well) 7 days after leukemia cell addition. Dark areas are generated by electrodes blocking the light path in the inverted microscope. Cells are visible in the clear space between electrodes. Scale bar = 100 μm.
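To make the displacement readout concrete, here is a minimal sketch in Python of the % Displacement formula given in the Methods; the function name and the cell index values are illustrative assumptions, not part of the RTCA software or the measured data.

```python
def percent_displacement(ci_op9_alone: float,
                         ci_treated: float,
                         ci_coculture: float) -> float:
    """% Displacement as defined in the Methods:
    100 * (CI_OP9alone - CI_treated) / (CI_OP9alone - CI_OP9+BCP-ALL).

    ~100% -> the treated well tracks the untreated co-culture: leukemia cells
             still displace the stroma (no drug effect).
    ~0%   -> the treated well tracks OP9 alone: leukemia cells were eradicated
             and the stroma re-adhered to the electrodes.
    """
    return 100.0 * (ci_op9_alone - ci_treated) / (ci_op9_alone - ci_coculture)

# Illustrative cell index values at one late time point:
ci_op9 = 6.0          # OP9 stroma alone, firmly attached
ci_untreated = 2.0    # OP9 + untreated BCP-ALL: stroma displaced
ci_vincristine = 5.2  # treated co-culture: impedance recovering

print(percent_displacement(ci_op9, ci_vincristine, ci_untreated))  # 20.0
```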
BCP-ALL cells derived from patients are expected to be unique because they contain different molecular lesions and can be stratified into different risk categories depending on identified driver mutations. One poor-prognosis subtype, described as Philadelphia chromosome positive (Ph+), contains the fusion of two genes, resulting in a deregulated tyrosine kinase called BCR/ABL1 [23]. As these molecular lesions could affect migration and adhesion towards the OP9 stromal cells as well as proliferation rates, we compared the growth of primary BCP-ALL cells from three different relapsed patients. All tested BCP-ALL cells reduced the OP9 impedance, but with kinetics and a profile that were unique for each sample (Fig 3), likely reflecting differences in their growth properties but also in their interaction with the stromal environment. For example, Ph+ BM13 cells generated a stronger effect, suggesting these cells proliferated more robustly. After the initial displacement of OP9, Ph+ BM61 gradually reduced OP9-associated impedance, indicating relatively slow proliferation. The growth of BM47 only became evident after day 6 of culture, after the initial displacement. Thus, further studies with large sample numbers could be done to determine if a correlation exists between impedance, the interaction of these cells with the stromal environment, and the driver oncogenes expressed by them.

Fig 3. Primary (non-cultured) BCP-ALL samples Ph+ BM61, Ph+ BM13 and BM47 were plated (100,000 cells/well) one day after OP9 feeder cells (10,000 cells/well) and impedance was measured over more than 7 days (n = 2). BM13 and BM61 were run on the same plate, whereas BM47 was from a different experiment.

Selective drug cytotoxicity can be measured through co-culture displacement/impedance analysis

We next investigated if the effect of cytotoxic drug treatment on BCP-ALL cells can be monitored using this indirect OP9 displacement approach. US7 cells were isolated from a patient at diagnosis and are sensitive to treatment with vincristine, a component of standard, first-line chemotherapy. Vincristine is an inhibitor of microtubule polymerization and is cytotoxic because it inhibits cell division. Therefore, it should not have an effect on the OP9 cells, which have been mitotically inactivated through irradiation. As shown in S1 Fig, we verified the lack of effect on these BM-MSC cells through impedance monitoring. We next treated the US7 BCP-ALL co-culture with vincristine. As shown in Fig 4A, whereas US7 in co-culture with OP9, as expected, reduced the impedance caused by OP9 cell adherence, treatment with chemotherapy had a marked effect: within about 48 hours after addition of vincristine the impedance began to increase, and upon continued chemotherapy it reached the levels seen with OP9 cells alone. This indicates that the OP9 cells re-adhered firmly to the electrode after the leukemia cells had been eradicated with this drug. To determine if this could be used to determine IC50 values, we treated US7 cells with a range of vincristine concentrations varying from 315 pM to 20 nM. Fig 4B shows that changes in impedance correlated with the dose of vincristine used to treat these BCP-ALL cells. After 48 hours, the Cell Index (CI) value increased in a dose-dependent fashion compared to the US7 cells that did not receive any vincristine treatment. Using the CI value at 150 hours from the beginning of the experiment, we were able to calculate an IC50 for US7 cells of 1.8 nM (Fig 4C).
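As a sketch of how the IC50 derivation described in the Methods can be reproduced, the following snippet fits a 4-parameter logistic dose-response curve with SciPy's curve_fit, which defaults to the Levenberg-Marquardt algorithm when no bounds are given; the concentration and cell index values are invented for illustration, not the measured US7 data.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, bottom, top, ic50, hill):
    """4-parameter logistic dose-response: cell index as a function of dose c."""
    return bottom + (top - bottom) / (1.0 + (ic50 / c) ** hill)

# Illustrative vincristine doses (nM) and cell index readings at a fixed time
# point; CI rises with dose as dying BCP-ALL cells let the OP9 stroma re-adhere.
conc = np.array([0.315, 0.625, 1.25, 2.5, 5.0, 10.0, 20.0])
ci = np.array([2.1, 2.4, 3.2, 4.6, 5.4, 5.8, 5.9])

p0 = [ci.min(), ci.max(), 2.0, 1.0]  # rough starting guesses
params, _ = curve_fit(four_pl, conc, ci, p0=p0, maxfev=10000)
bottom, top, ic50, hill = params
print(f"fitted IC50 = {ic50:.2f} nM")
```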
We also treated ICN13 cells, which are drug resistant in our standard OP9 co-culture (S2 Fig), with vincristine in the xCELLigence system (Fig 4D). In concordance with their lack of reaction to this cytotoxic drug, the impedance of ICN13 co-cultures with and without vincristine treatment was identical (Fig 4D), and no IC50 could be determined due to insensitivity at all tested concentrations (S3 Fig). Imaging of the US7 cells and ICN13 cells (Fig 4E) supported the conclusion that US7 co-cultures treated with vincristine had a marked decrease in BCP-ALL cell numbers, because the OP9 feeder layer became more visible, as compared to the ICN13 co-culture, which did not have reduced BCP-ALL cell numbers under vincristine treatment. We next compared this system with a standard end-point culture after 9 days of treatment with vincristine. As shown in Fig 5A (DMSO), US7 cells plated only on fibronectin-coated xCELLigence plates without stromal support, and US7 cells not directly contacting OP9 stromal cells because of separation by a Transwell membrane, had lower live cell numbers than BCP-ALL cells growing in direct contact with stroma. The percentage of spontaneous apoptosis was also lowest in direct co-culture (Fig 5B, DMSO). OP9 stromal support furthermore protected the leukemia cells against 1 nM vincristine (Fig 5B, grey bars) compared to cultures without OP9. Compared to the DMSO-treated controls, US7 cells exposed to 1, 5 and 10 nM vincristine showed clearly decreased live cell numbers (Fig 5C), but the distinction between 5 and 10 nM was small. Under continuous monitoring (Fig 5D), a difference between the controls and the 1, 5 and 10 nM treatments became visible as early as 21-30 h post drug addition and was maintained until the end of the observation period on day 9. A comparison of the cell index values versus terminal counts confirmed the correlation between cell index and apoptosis, live cells, and cell counts on day 9 (Fig 5E).

Co-culture monitoring can potentially reveal complex mechanisms of drug resistance

Ph-chromosome-positive ALLs can be treated with targeted tyrosine kinase inhibitors (TKI). However, drug resistance frequently emerges, with such cells harboring point mutations in the ATP binding site of ABL1 which prevent binding of the inhibitor. To address if xCELLigence analysis could be used to detect this, we tested two Ph-positive ALL cell lines, BLQ1 and BLQ5, which both contain a T315I mutation [24] that makes them insensitive to the second-generation TKI nilotinib. However, the third-generation TKI ponatinib was designed to be able to inhibit the T315I mutant [25,26], and the Aurora kinase inhibitor VX-680 also has "off-target" inhibition of the BCR/ABL1 T315I mutant [27,28]. All drugs except VX-680 are FDA-approved. None had an effect on the OP9 cells (S4 Fig). BLQ1 had a modest response to 12 nM ponatinib and 100 nM nilotinib (Fig 6A, S5 Fig left panels), whereas impedance in the BLQ5 co-culture was totally unaffected (Fig 6B, S5 Fig right panels). Interestingly, proliferation of both BLQ1 and BLQ5 was affected by VX-680, possibly because of its additional activity as an Aurora kinase inhibitor (S5 Fig). The inhibition of proliferation of BLQ5 was maintained over the course of the measurements, whereas the cytostatic effect on BLQ1 waned over time, suggesting that inhibition of proliferation was gradually overcome (S5 Fig). We also treated both BCP-ALL cells with vincristine as cytotoxic therapy.
BLQ1, although somewhat responsive, appeared to start re-proliferating, similar to its response to VX-680, whereas BLQ5 cells were more sensitive, and impedance after 225 hours approached that of OP9 cells alone, indicating significant BCP-ALL cell killing (S5 Fig). It should be noted that such differences in response kinetics would be difficult to identify with end-point assays, because such differences first became apparent at different time points. The responses of BLQ1 and BLQ5 to 16 nM nilotinib would have looked the same if the analysis had been performed 66 hours (no displacement) or 150 hours (full displacement) after treatment (S5 Fig). Thus, the detailed kinetics of the xCELLigence technology allows clear differentiation of the responses. Although OP9 cells are excellent at supporting BCP-ALL cell proliferation and survival, some applications may need to utilize human MSC stromal support. We therefore tested the human MSC cell line HS-5, which has a pattern of gene expression similar to primary bone marrow MSC [29] and is widely used to support survival of malignant human hematopoietic cells [30,31]. As shown in Fig 7A, HS-5 cells adhered to the electrodes, causing an increase in cell index values over time. BLQ5 cells added to these plates displaced these stromal cells as they proliferated. We then treated the cultures with VX-680 and monitored the effect of drug treatment on the cell index over time. As shown in Fig 7B, effects of the drug were measurable as an increase in the cell index from about 2 days after VX-680 addition, and on day 5 there was a clear difference between the DMSO control and 20 nM VX-680. Examination of parallel tests of BLQ5 on HS-5 in tissue culture plates using Trypan blue live cell counting similarly documented effects of the drug, but did not clearly show an effect at the lowest drug concentration (Fig 7C). We also evaluated the cells in end-point cultures for percentages of live and apoptotic cells using a FACS-based assay (Fig 7D and S1 Table). These experiments show that specific human MSC can substitute for OP9 cells to provide chemoprotection, although OP9 cells consistently supported higher BCP-ALL cell numbers. We next determined if we could measure responses to the Janus kinase 2 (JAK2) inhibitor ruxolitinib, which is in clinical trials for treatment of a subclass of Ph-like ALLs (NCT02723994). We first tested LAX7R, which expresses constitutively active Ras (K-RasG12C mutation, see [24]). As shown in Fig 8A, after around t = 100 hours, the obvious displacement of the OP9 cells by LAX7R (compare red and grey lines) had clearly been reduced by treatment with 10 μM ruxolitinib (light blue line), which had little effect on OP9 cells alone at this time point (S7 Fig). We next tested MUTZ5 and MHH-CALL-4 as two BCP-ALL cell lines that should be sensitive to this drug: both cell lines overexpress CRLF2 via an IGH@-CRLF2 translocation. MUTZ5 and MHH-CALL-4 additionally contain JAK2 R683G and JAK2 I682F mutations, respectively [32]. As shown in Fig 8 (grey lines), we found that OP9-associated impedance was reduced by both cell lines, and this displacement was counteracted by ruxolitinib treatment to differing degrees (Fig 8A and 8B).

Discussion

The current study was undertaken to explore the possibility of applying the xCELLigence impedance-based system to monitor chemotherapy treatment of BCP-ALL cells while they are in co-culture with BM stromal cells. When kept in the presence of stromal cells, but without direct cell-to-cell contact, leukemic cells can undergo spontaneous apoptosis, with viability dropping significantly over time [33][34][35] (see also Fig 5B).
With a direct-contact co-culture system, viability bias due to spontaneous apoptosis is reduced, and long-term monitoring of the cells in their protective physiological environment is obtained without the necessity of adding external factors such as cytokines, which have only partially been identified. The system studied here offers a middle ground between 2D and 3D culture. While 3D cultures can provide environmental "niches" that allow proper cell-cell interactions and mimic natural tissue structures, they often suffer from low reproducibility and require expert handling [36]. Thus the current system can be viewed as a 2D culture system with a "set-and-forget" protocol feature. In contrast, most standard in vitro assessments involve the measurement of viable cells at single time points, generally 72 h after drug addition [37], which may be too short to detect responses that, in patients, may take longer to become evident. With the xCELLigence system, the leukemic cells can be continually monitored for at least 7 days, while in direct contact with BM stromal cells, providing longer kinetic data in a system more comparable to the in vivo microenvironment. In addition, the xCELLigence system allows label-free monitoring, reducing manipulation and alteration of cell physiology, while the instrument software allows scheduled time-interval measurements without the necessity of manual computation. Finally, data from various time points are acquired from a single well, resulting in a reduction of experimental error. There is also an economic advantage in comparison to traditional methods, which require extensive hands-on time and reagents to generate equivalent data. Here, the system is used to indirectly monitor cell proliferation rates in a non-invasive manner, because cells are not counted by physically removing them from the cultures but by observing the rate of change they cause in impedance: a faster drop in impedance implies an increased rate of cell proliferation, with the slope used to measure the speed of cell division. This was evident when we compared Ph+ BM61 to Ph+ BM13, with the latter growing more rapidly. The real-time and continuous monitoring features of the system also make it possible to assay the type of effect of a drug treatment: cytostatic and cytotoxic drug activity can be distinguished from each other by observing the pattern of impedance displacement over a detailed time course. With cytostatic drugs such as the tyrosine kinase inhibitors used here, cell proliferation is inhibited. Although the leukemia cells are no longer dividing, the BCP-ALL cells that had displaced the OP9 at the beginning of the experiment are still attached to the electrodes, maintaining a lower plateau CI value in such cultures compared to the plateau value for OP9 cells alone. In contrast, in the case of a cytotoxic drug such as vincristine, apoptosis of BCP-ALL cells occurred. As they died, the OP9 or human HS5 stromal cells were able to reattach to the electrodes, increasing the impedance and resulting in a plateau similar to that of stromal cells alone. The xCELLigence system could possibly also flag emerging long-term resistance, when curves would display a pattern in which an initial peak of impedance appears, indicating cell death, followed by a drop, indicating recovery of BCP-ALL cells and re-initiation of growth even under continued drug treatment.
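The cytostatic/cytotoxic distinction drawn above can be reduced to a simple comparison of end-of-run plateaus. The toy classifier below is our own illustration of that reasoning, not part of the xCELLigence software; the tolerance and the example values are arbitrary assumptions.

```python
def classify_response(ci_treated_end: float,
                      ci_op9_plateau: float,
                      ci_coculture_plateau: float,
                      tol: float = 0.15) -> str:
    """Classify a drug response from end-of-run cell index plateaus.

    Near the OP9-alone plateau      -> cytotoxic (cells died, stroma re-adhered).
    Near the untreated co-culture   -> resistant (cells keep proliferating).
    Stuck at an intermediate level  -> cytostatic (division halted, cells still
                                       occupying the electrodes).
    """
    span = ci_op9_plateau - ci_coculture_plateau
    frac = (ci_treated_end - ci_coculture_plateau) / span
    if frac >= 1.0 - tol:
        return "cytotoxic"
    if frac <= tol:
        return "resistant"
    return "cytostatic"

print(classify_response(5.8, 6.0, 2.0))  # cytotoxic, like US7 + vincristine
print(classify_response(3.5, 6.0, 2.0))  # cytostatic, like a TKI response
print(classify_response(2.2, 6.0, 2.0))  # resistant, like ICN13 + vincristine
```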
It is also possible to empirically detect subtle and less subtle differences between individual BCP-ALLs in how they react to specific drugs, which allows a quantitative assessment with respect to drug resistance and could be helpful in determining chemotherapy approaches to patient treatment. For example, complete resistance at a certain drug dose was illustrated by the lack of increase in impedance after drug treatment in US7, while with ICN13, cell proliferation continued, resulting in the CI value dropping until a plateau was reached when the BCP-ALL cells had displaced the maximum possible number of OP9 cells from the electrodes. Using BLQ1 and BLQ5, we were also able to detect unexpected differences, as determined by divergent curve patterns in a side-by-side comparison. We found that BLQ5 was resistant to ponatinib whereas BLQ1 exhibited some sensitivity. This could not have been predicted based on sequencing of the ATP binding site of the oncogene that drives these leukemias: both BLQ1 and BLQ5 were isolated from patients who had relapsed and both contain the Abl T315I point mutation but lack other additional mutations in the ATP binding site that could explain differential resistance to ponatinib [38,39]. Interestingly, BLQ1 expresses BCR/ABL1 p210 whereas BLQ5 expresses BCR/ABL1 p190 as driver oncogene [24]. In mouse studies, the p190 form was found to produce a more aggressive B-cell lineage leukemia [40], and the two proteins also differ in their interactome [41]. While a similar response of BLQ1 and BLQ5 to the standard chemotherapeutic vincristine was expected based on their driver mutations, the degree of differential sensitivity to the treatment could not have been predicted. In addition, we tested two cell lines representative of Ph-like ALL, both carrying CRLF2 translocations but with different activating JAK2 mutations, for their sensitivity to the JAK2 kinase inhibitor ruxolitinib (Jakafi®). Here too the system was able to tease out subtle differences in drug sensitivity, with 10 μM ruxolitinib having a relatively potent effect on MUTZ5, whereas MHH-CALL-4 had a much less deep or durable response to the same dose. In our experiments, the treatment of OP9 cells with ruxolitinib caused changes in morphology, measured as changes in impedance, although this was transient and did not preclude its testing in this setting. Indeed, ruxolitinib may also have effects on cells in the bone marrow other than the intended target BCP-ALL cells. For example, AlMuraikhi et al [42] reported that human BM MSC treated with 3 μM ruxolitinib showed differential expression of more than 1500 genes compared to vehicle-treated controls. However, because leukemia cells treated with the inhibitor are presumed to be dependent on activation of the Jak/STAT pathway, whereas normal cells are not, ruxolitinib treatment does not appear to have consequences for non-leukemia cell survival: Jakafi® is clinically approved for treatment of myelofibrosis in adults as well as for acute graft-versus-host disease. It is possible that other drugs, such as F-actin-disrupting compounds, could not be tested in the xCELLigence system because they would cause detachment of the OP9 stromal cells from the plate. Thus, applications of this system are limited to drugs that do not cause permanent cellular detachment. However, if it is the experimental intent to test drugs that could be used for treatment of human BCP-ALL, such a finding could also be early evidence of systemic toxicity.
We conclude that because some primary BCP-ALL samples, and in particular those from relapsed patients, are able to grow in co-culture with OP9, the co-culture system described here could be a valuable assay platform to empirically assess the sensitivity of BCP-ALL cells to different second-line chemotherapy treatments.
Laryngeal edema following remimazolam-induced anaphylaxis: a rare clinical manifestation

Background: Remimazolam is an ultra-short-acting intravenous benzodiazepine, which has been used as a sedative/anesthetic in procedural sedation and anesthesia. Although peri-operative anaphylaxis due to remimazolam has been reported recently, the spectrum of the allergic reactions is still not fully known.

Case presentation: We describe a case of anaphylaxis following remimazolam administration in a male patient undergoing colonoscopy under procedural sedation. The patient presented complex clinical signs including airway changes, skin symptoms, gastrointestinal manifestations and hemodynamic fluctuations. Different from other reported cases, laryngeal edema was the initial and main clinical feature of remimazolam-induced anaphylaxis.

Conclusions: Remimazolam-induced anaphylaxis has a rapid onset and complex clinical features. This case reminds anesthesiologists that they should be particularly alert to the unknown adverse reactions of new anesthetics.

Supplementary Information: The online version contains supplementary material available at 10.1186/s12871-023-02052-w.

Background

Of the wide spectrum of adverse drug reactions, peri-operative anaphylaxis is undoubtedly the most disconcerting event to anesthesiologists. It is a life-threatening reaction characterized by acute onset of symptoms involving different organ systems and requiring immediate medical intervention [1]. Remimazolam besylate, a novel ultra-short-acting benzodiazepine, has recently been approved for clinical use as a general anesthetic. Although peri-operative anaphylaxis due to remimazolam has been gradually ascertained and reported [2][3][4][5], the spectrum of its allergic reactions is still not well known. Here, we describe a case of anaphylaxis caused by remimazolam, which presented with complex clinical signs, including airway changes, skin symptoms, gastrointestinal manifestations and hemodynamic fluctuations. Written informed consent was obtained from the patient for the publication of this case report.

Case presentation

A 41-year-old male (height, 165 cm; body weight, 63 kg) was scheduled for a follow-up colonoscopy. He had no history of any remarkable disease, symptom, or drug allergy. He had undergone gastroscopy and colonoscopy with unknown sedative drugs at a physical examination institution two years earlier, and colorectal polypectomy by endoscopic mucosal resection under sedation with propofol half a year earlier in our hospital; on that occasion, deep sedation with monitored anesthesia care was induced and maintained with propofol and alfentanil without any problems. Before administration of anesthetics, the standard vital signs were monitored and were as follows: non-invasive blood pressure (NIBP), 116/69 mmHg; heart rate, 85 bpm; and SpO2, 100%. After starting oxygenation at 2 L/min by nasal cannula, we administered 10 mg of remimazolam (0.15-0.2 mg/kg) (Yichang Humanwell Pharmaceutical Co., Ltd, China) for induction. Within 1 min, the patient presented audible laryngeal stridor with marked depression of the suprasternal fossa. Immediately, we observed a large area of erythema on his face, neck and chest. We further observed periorbital edema and lip swelling (Fig. 1A, video in Appendices).
With a stethoscope, we listened to the breath sounds of both lungs, but these were very faint at that moment. After SpO2 dropped to 91%, we performed a jaw thrust maneuver and manual ventilation with 100% oxygen. Spontaneous breathing continued, but the SpO2 level fluctuated between 85-95%. Considering the skin features, we suspected the presence of laryngeal edema causing severe inspiratory dyspnea. Epiglottic edema was observed under video laryngoscopy, which confirmed our hypothesis (Fig. 1B). Copious oral secretions were noted, requiring aggressive suctioning. In parallel with the respiratory compromise, 1 min after induction NIBP was stable at 100/64 mmHg, but after approximately 3 min NIBP dropped to 77/47 mmHg and heart rate increased to 95 beats/min. As anaphylaxis was strongly suspected based on the clinical presentation, we intravenously administered 25 μg of adrenaline, but hemodynamics did not change markedly. We then administered another 25 μg of intravenous adrenaline (50 μg in total) and 2000 mL of crystalloid, which improved NIBP to 116/79 mmHg. Despite the improvement in the patient's blood pressure, SpO2 still fluctuated between 81-87%. We initiated arterial blood pressure monitoring with cannulation of the left radial artery. Blood gas test results showed a significantly elevated PCO2 of 104 mmHg. Considering that the patient had previously received propofol without adverse reactions, we intravenously administered 12 mg propofol, combined with 4 mg vecuronium, and inserted a laryngeal mask airway to facilitate mechanical ventilation. Consequently, SpO2 rose gradually to 100% and the patient's vital signs were stable. After discussion, the physician was allowed to proceed with the colonoscopy examination, which revealed a large amount of yellow rectal discharge, consistent with the diarrhea caused by allergy. Furthermore, for a definitive diagnosis, the patient's intestinal mucosa was biopsied and sent for histopathologic examination. The whole process lasted two and a half hours. The patient awoke naturally and was hemodynamically stable, but complained of eye photophobia and tearing. After 30 min of observation, the patient was returned to the general ward. The change in the patient's vital signs during the episode is shown in Fig. 2 and the results of the blood gas tests are shown in Table 1. Histopathologic examination showed eosinophil infiltration in the mucosal stroma of the ascending colon, with about 70/HPF in the dense areas (Fig. 3). Four weeks after the event, the patient underwent skin tests to confirm the causative allergic agent. Intradermal tests for remimazolam, midazolam, and dextran 40 were performed with 0.1 ml of each sample. Remimazolam and midazolam were first diluted with saline to 1 mg/ml and then further diluted with saline at ratios of 1:10 and 1:100. Interestingly, a markedly positive reaction was recorded at the test site with midazolam (erythema of 16 × 11 mm and swelling of 10 × 8 mm), but not with remimazolam or dextran 40 (Fig. 4).

Discussion and conclusions

Remimazolam besylate, a new ultra-short-acting GABAA receptor agonist, was approved in 2020 for general anesthesia and/or procedural sedation worldwide [6]. Owing to its advantageous characteristics of fast onset of action and rapid recovery without significant cardiovascular or respiratory depression, remimazolam has been widely used for procedural sedation for endoscopy outside the operating room [7].
Herein, we demonstrated a case of remimazolam-induced anaphylaxis during procedural sedation. Diagnosis of anaphylaxis is mainly based on history and clinical criteria for organ system involvement [8]. There is various clinical evidence supporting anaphylaxis caused by remimazolam in this case. First, typical clinical manifestations were recorded following remimazolam administration. Airway changes were the main and first symptoms observed, including dyspnea with laryngeal stridor and subsequent hypoxia. In addition, skin changes presented as a sudden flush on the patient's face and chest, edema around the orbits and lips, photophobia and tearing. The patient also presented with diarrhea, suggesting gastrointestinal involvement. Second, NIBP showed no change when the dyspnea first occurred; hypotension occurred approximately 3 min after administration of remimazolam, indicating that respiratory compromise preceded the hemodynamic fluctuations. Last but most importantly, pathological examination of the mucosal stroma in the colon showed extensive infiltration of eosinophils, which strongly indicated anaphylaxis. Following the clear diagnosis of anaphylaxis, identification of the causative agent is critical for the patient. According to the pharmaceutical label, the remimazolam solution contains lactose monohydrate, hydrochloric acid, sodium hydroxide, remimazolam, and dextran 40.

Fig. 2 The patient's vital signs during the anaphylactic episode. BP: blood pressure; NIBP: non-invasive blood pressure; ABP: intra-arterial blood pressure; HR: heart rate; sBP: systolic blood pressure; dBP: diastolic blood pressure; EtCO2: end-tidal carbon dioxide; E: adrenaline. The arrow indicates the time of initial adrenaline administration.

Among these, remimazolam itself and dextran 40 are the most likely causative allergens. Remimazolam is a new agent, but it has a chemical structure similar to midazolam. Thus, we performed skin testing with midazolam, remimazolam and dextran 40. Interestingly, in our case, the intradermal tests presented a positive reaction only to midazolam, not to remimazolam or dextran 40. Nevertheless, the methodology for remimazolam skin tests has not been standardized. We used 1:10 and 1:100 remimazolam dilutions with saline. To maintain consistency with the diluted concentration of remimazolam, we also used 1:10 and 1:100 midazolam dilutions. It is still questionable whether the concentrations of these two drugs diluted in equal proportion are comparable. Therefore, there was a possibility of a false-negative result for remimazolam. Although this patient was unable to provide the record of the sedative agents used when he underwent gastroscopy and colonoscopy two years earlier, we presume that remimazolam itself was the likely allergen in this case. This presumption is also supported by other case reports: for example, the skin prick test of Tsurumi et al. [2] indicated that both remimazolam and midazolam showed positive reactions, and the intradermal tests of Hasushita et al. [5] yielded a positive reaction to remimazolam but not midazolam. On the other hand, the dextran 40 used for testing was taken from "Dextran 40 and Glucose Injection", 500 ml (Kelun Pharmaceutical Co., Ltd, China), containing 30 g dextran 40 and 25 g glucose. Dextran 40 is known to cause anaphylaxis, and the anaphylactoid reaction is non-IgE-mediated [9]. As expected, the skin test result for dextran 40 was negative. In addition, a five-case series reported by Kim et al. [4] maintained that dextran 40, rather than remimazolam, was the cause of anaphylaxis.
Thus, we cannot absolutely rule out the possibility of dextran 40 causing anaphylactoid symptoms. There have been several recent case reports of anaphylaxis caused by remimazolam. Almost all cases occurred during the induction period of general anesthesia with endotracheal intubation, and severe hypotension was the initial and main evidence of anaphylactic shock. For example, one report described hemodynamic collapse within 2-3 min after tracheal intubation in three patients [4]. Hasushita et al. [5] found that blood pressure dropped sharply and skin erythema occurred 6 min after tracheal intubation; subsequently, that patient developed cardiac arrest. In our case, anaphylaxis presented during the induction phase of procedural sedation. Airway compromise and skin signs could be found initially, which led to our suspicion of peri-operative anaphylaxis. Therefore, when hypotension and tachycardia presented, we administered an intravenous bolus of adrenaline for a prompt response. As expected, early use of adrenaline combined with the muscle relaxant vecuronium quickly stabilized the patient's hemodynamics and relieved the airway obstruction. In comparison with our case, anaphylaxis in the other reported patients was discovered rather late (more than 3 min or even 6 min after induction), and hypotension and tachycardia were the prominent presenting features. It has been speculated that anaphylaxis induced by remimazolam presents with isolated cardiovascular features, since raised airway pressure or oxygen desaturation was not reported. We also consider the possibility that mechanical ventilation following neuromuscular blocking agents partly masked the airway problem in those cases. Altogether, a comprehensive understanding of the variations in the presenting clinical signs of remimazolam-induced anaphylaxis between different patients can help anesthesiologists diagnose and deal with anaphylactic situations as early as possible in their practice.
Guidelines for Innovative Leadership Development of Private Vocational College Administrators in the Northeastern Region

Background and Significance of the Problem

The Ministry of Education Office of the Vocational Education Commission has established policies and priorities aimed at fostering excellence and specialization in colleges, thereby enhancing the nation's human capital potential through reskilling, upskilling, and new-skills development (Re-Skills, Up-Skills, New-Skills). Additionally, there is a focus on strengthening collaboration and expanding the role of the private sector in intensive education management. This includes addressing the country's needs by developing diverse learning materials (on-site, on-air, online, on-demand) and equipping learners with 21st-century skills such as professional competencies, English/Chinese fluency, and digital literacy. Moreover, efforts are being made to improve the quality of vocational education management, implementing digital systems as tools to rebrand vocational education. The Ministry is open to recruiting teachers and experts from various sectors to meet the demands of vocational manpower production and development, and it encourages their active involvement in private-sector vocational education management (Suthep Kaeng Santhia, 2020).

In the context of managing an innovative organization, an administrator with innovative leadership plays a crucial role in enhancing education management efficiency and effectiveness across all dimensions. Such an administrator drives the innovation strategy towards the desired goals, possessing unique and superior skills associated with innovative leadership. This leadership style involves a distinct mindset and the ability to generate innovation, serving as a leader in promoting new ideas, methods, techniques, processes, products, services, and problem-solving approaches in a dynamic operational environment. This approach is responsive to present and future needs (Horth and Buchner, 2009) and fosters the creation of an organizational culture that embraces and supports innovation. By cultivating an environment that encourages collaboration and stimulates the imagination of personnel, thought leaders can emerge within the organization. Additionally, the administrator can create new, more efficient factors of production, setting the organization apart from its competitors (Vlok, 2012).
Innovative leadership development is crucial for private vocational college administrators because it can help them adapt to the constantly changing environment of the education sector. Private vocational colleges face unique challenges that require innovative thinking and strategic decision-making to overcome. Developing innovative leadership skills can help administrators find new ways to improve the quality of education, increase student enrollment, and optimize the college's financial resources. Innovative leadership development can also help private vocational college administrators stay ahead of the competition: as the education sector becomes more competitive, institutions must find new ways to differentiate themselves from their competitors, and administrators who develop innovative leadership skills can stay ahead of the curve and position their institutions for long-term success. In summary, innovative leadership development is critical for private vocational college administrators because it can help them adapt to a constantly changing environment, identify and capitalize on new opportunities, build strong relationships with stakeholders, and stay ahead of the competition.

Therefore, the researcher is interested in studying the development of indicators and guidelines for innovative leadership development of administrators of private vocational colleges in the northeastern region. This is a study from theory to measurement modeling, in which the researcher can verify the consistency of an innovative leadership measurement model, developed from theory and research, with empirical data, and can examine the quality of the indicators with experts and stakeholders. The research results are then used to create innovative leadership development guidelines that serve as a basis for planning executive development in line with current conditions, which will help promote the efficiency and effectiveness of executive performance in the future.

Conceptual Framework

After reviewing 11 relevant literature sources, including Woi (2013), John (2006), Ailin and Lindgren (2008), Horth and Buchner (2009), New and Improved (2009), Dyer, Hal and Clayton (2011), Gates (2012), Sena and Erena (2012), Vlok (2012), Zhang (2012), and Phisitwat Klinthaisong (2016), it was observed that there are multiple components of innovative leadership. In total, 17 components were identified; however, for the purpose of this research, the researcher employed a criterion based on the frequency with which components were identified by a majority of researchers as being integral to innovative leadership at a high level (60 percent or more). These components were then utilized as the conceptual framework for the research, resulting in five elements: innovative vision, courage to take risks with innovation, innovation networking, innovative creativity, and leading change in innovation, each comprising the indicators detailed in the Results.

Research Methodology

The first phase involves studying the components and indicators of innovative leadership among administrators of private vocational colleges in the northeastern region. Qualitative data will be gathered through semi-structured interviews conducted with seven experts. The aim is to assess the validity and appropriateness of the variables. The collected data will be analyzed using content analysis.
In the second phase, the consistency of the measurement model for innovative leadership among administrators of private vocational colleges in the northeastern region will be tested. Quantitative data will be collected and analyzed using advanced statistical methods. The sample will consist of teachers and administrators from 93 private vocational colleges in the region, selected randomly from 50% of the total population based on the ratio criterion between sample units and the number of parameters or variables in the factor analysis, as described by Hair et al. (2010). Typically, a sample ratio of 20:1 is used for the parameters being studied. As there are 20 parameters in this research, a total of 400 participants will be selected through stratified random sampling. Online questionnaires will be employed to collect the data. Confirmatory factor analysis will be conducted to test the component structure relationship model and determine the weights of the sub-variables used to generate the indicators. This analysis will be based on the empirical data derived from the questionnaire, and it will assess the consistency of the research model. The theoretical model created by the researcher will be analyzed using second-order confirmatory factor analysis with the empirical data.

The third phase involves developing guidelines for the development of innovative leadership among administrators of private vocational colleges in the northeastern region. Qualitative data will be collected through focus group discussions with experts. These discussions will explore the suitability, feasibility, and usefulness of the leadership development approaches.

The final phase focuses on studying the results of implementing the innovative leadership development guidelines among administrators of private vocational colleges in the northeastern region. Qualitative data will be collected from the outcomes of applying the guidelines, which will be adapted to the interested and willing private vocational colleges. The content of these results will be analyzed based on key elements.

Research Results

The components and indicators of innovative leadership among private vocational college administrators are as follows: innovative vision (vision creation, dissemination of the vision, implementation of the vision); courage to take risks with innovation (being challenging, being responsible, having a new initiative); innovation networking (membership of the network, participation of network members, exchange interactions); innovative creativity (having expertise, having an innovative mind, discretion); and leading change in innovation (systems thinking, change management, technology utilization capability).

The results of the confirmatory factor analysis on innovative leadership factors among administrators of private vocational colleges in the northeastern region revealed the following details:

Innovative vision: The component with the highest factor loading was implementation of the vision (Factor Loading = 0.85, Forecasting coefficient = 0.73). Dissemination of the vision had a factor loading of 0.78 and a forecasting coefficient of 0.62. The component with the lowest factor loading was vision creation (Factor Loading = 0.74, Forecasting coefficient = 0.55).

Courage to take risks with innovation: The component with the highest factor loading was having a new initiative (Factor Loading = 0.77, Forecasting coefficient = 0.60); a second component had a factor loading of 0.75 with a forecasting coefficient of 0.56. The component with the lowest factor loading was courage (Factor Loading = 0.69, Forecasting coefficient = 0.48).
Research Results

Components and indicators of innovative leadership among private vocational college administrators follow the five components of the conceptual framework: Innovative Vision; Courage to take risks with innovation; Innovation networking; Innovative Creativity; and Leading change in innovation, with indicators ranging from vision creation to change management and technology utilization capability.

The results of the confirmatory factor analysis on the innovative leadership factors among administrators of private vocational colleges in the northeastern region revealed the following details:

Innovative Vision: The component with the highest factor loading was the implementation of the vision (Factor Loading = 0.85, Forecasting coefficient = 0.73). Dissemination of the vision had a factor loading of 0.78 and a forecasting coefficient of 0.62. The component with the lowest factor loading was vision creation (Factor Loading = 0.74, Forecasting coefficient = 0.55).

Courage to take risks with innovation: The factor with the highest loading was having a new initiative (Factor Loading = 0.77, Forecasting coefficient = 0.60); the next factor loading was 0.75, with a forecasting coefficient of 0.56. The component with the lowest factor loading was courage (Factor Loading = 0.69, Forecasting coefficient = 0.48).

Innovation networking: The component with the highest factor loading was membership of the network (Factor Loading = 0.80, Forecasting coefficient = 0.64); the next component had a factor loading of 0.785 and a forecasting coefficient of 0.61. The component with the lowest factor loading was exchange interaction (Factor Loading = 0.78, Forecasting coefficient = 0.60).

Innovative Creativity: The component with the lowest factor loading was judgment (Factor Loading = 0.80, Forecasting coefficient = 0.64).

Leading change in innovation: The component with the highest factor loading was the ability to use technology (Factor Loading = 0.83, Forecasting coefficient = 0.70). Systems thinker had a factor loading of 0.82 and a forecasting coefficient of 0.68. The component with the lowest factor loading was change management (Factor Loading = 0.79, Forecasting coefficient = 0.63), as shown in Table 1.

The results of implementing the innovative leadership development guidelines for administrators of private vocational colleges in the northeastern region can be summarized as follows.

 Innovative vision: 1) Creating a vision: administrators promote the vision among teachers and personnel by organizing various activities to develop and provide professional skills and knowledge. 2) Dissemination of the vision: administrators give teachers and personnel opportunities to jointly design concepts for the development of the educational institute. 3) Implementation of the vision: management assigns tasks to personnel, connecting them with college goals, responding to student needs, and establishing community ties.

 Courage to take risks in innovation: 1) Being challenging: administrators set clear goals in their work, with a work plan for solving problems and summarizing work within that plan. 2) Being responsible: administrators work with diligence, are patient with obstacles, and follow up on work performance to improve consistently. 3) Having a new initiative: executives have ideas, always dare to bring in new things, and take action to improve the college.

 Innovation networking: 1) Membership of the network: executives set up committees, network coordinators, and cooperation in various fields. 2) Participation of network members: administrators promote and provide opportunities for teachers and personnel to attend training in new skills and knowledge and promote development in various fields. 3) Exchange interactions: administrators develop knowledge and competence with various organizations, both domestic and international.
 Innovative Creativity: 1) Having expertise: administrators encourage teachers and personnel to develop knowledge and skills in line with their lines of work, so that they are able to solve problems that arise during work. 2) Having an innovative mind: administrators work with a concept of knowledge development, consistently developing professional skills by setting guidelines on the possibilities for developing knowledge. 3) Discretion: administrators apply knowledge and the ability to manage tasks through systematic problem-solving based on statistics.

 Leading change in innovation: 1) Systems Thinker: executives systematically inculcate concepts and systematic thinking, with an operational plan following the PDCA model. 2) Change management: executives manage the problems that arise effectively; however, the principle of flexibility must be taken into account, adjusting toward a positive attitude rather than coercion. 3) Technology utilization capability: executives plan information technology development and create modern and efficient communication channels.

Discussion of Research Results

The results of the development of the innovative leadership indicators of administrators of private vocational colleges in the northeastern region showed that the indicator model created by the researcher was statistically significant for all indicators, indicating that the elements discovered are critical components of innovative leadership. This is because, in developing the indicators in this study, the researcher used the empirical definition method, which is a definition close to the theoretical definition: the researcher determined which sub-variables each indicator consists of and set the format and method for combining variables to obtain the indicators, using research theory as the basis. Therefore, the generated indicator model is more accurate and consistent with the empirical data (Nonglak Wiratchai, Sajeemas Na Wichien, and Pisamai Orathai, 2008).
The results of testing the consistency of the innovative leadership indicator measurement model of administrators of private vocational colleges in the northeastern region with empirical data showed that the model is consistent with the empirical data, as follows:

1) For Innovative vision, it was found that the indicator with the highest component weight was the implementation of the vision. This demonstrates that fulfilling the vision means putting the created vision into action by setting goals, plans, and activities in line with the vision, and by promoting and supporting teachers and educational personnel to participate in the implementation of the set vision. For the created vision to be effective, it must be monitored. This is in line with Samut Phuapsuwan (2013), who said that following a vision is an expression by executives of bringing the created vision into practice by targeting plans and activities in line with the vision, through the participation of members in the organization; for the established vision to be accomplished, follow-up audits are required. Vision compliance indicators include setting goals, plans, and activities; engagement; and monitoring and tracking.

2) For Courage to take risks with innovation, it was found that the indicator with the highest component weight was initiative. This shows that originality is an expression of new ideas that are different and creative: daring to improve on new ways of working to increase work efficiency, developing a new work system, and accepting both the positive and negative impacts arising from the new way of working. This is in line with Promboon Panitchpakdi (2007), who described creative thinking as thinking outside the box that we are used to. Thinkers must be open-minded to choices and new opportunities, even though this may seem risky or require investment to achieve new learning, creativity, or innovation; such thinking is not caused by a momentary thought that accidentally entered the brain, but comes from a process of thinking, observing, analyzing, and synthesizing. It is also said that fostering initiative at work is the mission of modern executives, and that there are several ways to create an atmosphere in the team that is conducive to initiative: 1) promote thinking beyond traditional boundaries and combine a variety of ideas; 2) change perspectives from the original and look for alternatives from new viewpoints; 3) conduct research designed to test new ideas or to understand the essence of how to think about solving problems, leading to new alternatives.
3) For Innovation networking, it was found that the indicator with the highest factor loading was the participation of network members. This demonstrates that participation is an activity that produces groups and teamwork for exchange, support, interdependence, activities, or production between groups or institutions. There is contact and support for the exchange of information and news, and voluntary cooperation that incentivizes a person or group to come and help, or to provide support that brings benefits in various matters. Participation in any activity at any level may be participation in the decision-making process or participation in the administrative process, undertaken voluntarily and with the enthusiasm and earnestness to push for the achievement of the set goals. This is in line with Thawilvadee Burikul (2005), who explained the meaning of participation in many dimensions, which can be summarized as follows: 1) participation is the voluntary contribution of an individual or group to any project, including various public projects that are expected to affect the development of the nation, without changing the project or criticizing its content; 2) participation in a broad sense means making rural people feel alert, knowing how to receive help and respond to development projects, while at the same time supporting the initiatives of local people; 3) in the field of rural development, participation is the involvement of the public in the decision-making process, the project evaluation process, and jointly receiving the benefits from that development project.

4) For Innovative Creativity, it was found that the indicator with the highest component weight was innovative imagination. This shows that having an innovative mind involves ideas and the ability to set clear goals for innovation development: pursuing new ideas to achieve the goals, evaluating those ideas, selecting the best method, and putting the chosen concepts into action plans. This is consistent with Wanich Sutharat (2004), who explained that innovative thinking means the capacity of the human brain that arises from integrating knowledge, information, news, and experiences into a new form. Imagination therefore arises as a combination of ideas from both fields, that is, scientific creativity and the arts, as it appears in works of literature, art, science, ethics, and so on.

5) For Leading change in innovation, it was found that the indicator with the highest component weight was technology utilization capability. This shows that the ability to plan technology is an important competency of executives in administration, used in communication and in accessing information service sources, to be applied systematically in school management, including the ability to digitize school information systems. This is consistent with Suriya Madthing (2014), who explained that competency in using information technology means having the knowledge and ability to access information, manage information, integrate information, assess information, create information, and use information to communicate in one's work.
In conclusion, the implementation of the innovative leadership development guidelines for administrators of private vocational colleges in the northeastern region has resulted in the following outcomes:

- Innovative Vision: Administrators have successfully created, disseminated, and implemented a vision for the educational institute. They engage teachers and personnel in promoting the vision, provide opportunities for professional development, and align tasks with college goals and student/community needs.

- Courage to take risks in innovation: Administrators exhibit a challenging and responsible approach to innovation. They set clear goals, create work plans, evaluate progress, and demonstrate diligence and patience in overcoming obstacles. They also bring new ideas and take action to enhance the college's development.

- Innovation networking: Administrators establish committees and facilitate cooperation within various fields. They promote training opportunities and foster development for teachers and personnel in diverse areas. They also engage with domestic and international organizations to enhance knowledge and competence.

- Innovative Creativity: Administrators encourage the development of knowledge and skills among teachers and personnel, enabling effective problem-solving. They emphasize knowledge expansion and apply their expertise through systematic problem-solving based on statistical analysis.

- Leading change in innovation: Administrators systematically instill conceptual thinking and employ operational plans following the PDCA (Plan-Do-Check-Act) model. They effectively manage arising problems with flexibility and foster a positive attitude instead of coercion. They also plan the development of information technology and establish modern and efficient communication channels.

Overall, the implementation of these innovative leadership development guidelines has led to a culture of innovation, improved collaboration, professional growth, effective problem-solving, and positive change management within private vocational colleges in the northeastern region.

Suggestions

Based on the research results, there are suggestions for applying the research results and recommendations for further research, as follows:

1) Suggestions for applying the research results: executives can apply the results of the development of the innovative leadership indicators of administrators of private vocational colleges in the northeastern region in planning the development of themselves and of educational personnel, in order of importance, as follows: 1) Innovative Vision: administrators develop a leadership team to jointly determine the workload of personnel in connection with the goals and values of the college vision, to encourage personnel to have a sense of empathy and a willingness to work to achieve the vision, with supervision and performance evaluation according to criteria and goals within the specified period. 2) Courage to take risks with innovation: administrators place importance on thinking outside the box and daring to present new ideas to develop or improve ways of working, seeking new ways of working that produce better results and drive change. 3) Innovation networking: administrators should have a plan for conducting exchange activities and systematic, continuous learning, creating channels for information exchange or collaboration through the network, and organizing study visits to external educational institutions to open up the world.
4) Innovative Creativity: administrators should seek new ideas or perspectives from a variety of information sources and use the research process to find new things or new methods to be used in job development. 5) Leading change in innovation: administrators should plan the future of the college's information technology where possible, establish communication channels between teachers, parents, and students through the internet, and know the sources of information services and how to access information resources.

Suggestions for Further Research

- There is a need for research and development on innovative leadership among administrators of private vocational colleges to enhance the management model and educational innovation as a source and model of learning.

- Policy research should be conducted to establish guidelines and provide direction for innovative leadership among administrators of private vocational colleges, serving as a national and international reference point.

Figure 1. Conceptual Framework

Table 1. Presents the Results of the Confirmatory Factor Analysis for the Key Components of Innovative Leadership among Administrators of Private Vocational Colleges in the Northeastern Region
2023-08-15T15:01:50.101Z
2023-08-12T00:00:00.000
{ "year": 2023, "sha1": "1789f7d55ad7d7402b97a0e428baa582dee44ae0", "oa_license": "CCBY", "oa_url": "https://www.sciedupress.com/journal/index.php/wje/article/download/23813/15090", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "d2fe87509f1ae4a47972cd2c0c72b2718a9b365c", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
231603709
pes2o/s2orc
v3-fos-license
Application of a Modified Generative Adversarial Network in the Superresolution Reconstruction of Ancient Murals

Considering the problems of low resolution and rough details in existing mural images, this paper proposes a superresolution reconstruction algorithm for enhancing artistic mural images, thereby optimizing mural images. The algorithm takes a generative adversarial network (GAN) as the framework. First, a convolutional neural network (CNN) is used to extract image feature information, and then the features are mapped to the high-resolution image space of the same size as the original image. Finally, the reconstructed high-resolution image is output to complete the design of the generative network. Then, a CNN with deep and residual modules is used for image feature extraction to determine whether the output of the generative network is an authentic, high-resolution mural image. In detail, the depth of the network increases, the residual module is introduced, the batch standardization of the network convolution layers is deleted, and subpixel convolution is used to realize upsampling. Additionally, a combination of multiple loss functions and staged construction of the network model is adopted to further optimize the mural image. A mural dataset was set up by the current team. Compared with several existing image superresolution algorithms, the peak signal-to-noise ratio (PSNR) of the proposed algorithm increases by an average of 1.2-3.3 dB and the structural similarity (SSIM) increases by 0.04-0.13; it is also superior to other algorithms in terms of subjective scoring. The proposed method in this study is effective in the superresolution reconstruction of mural images, which contributes to the further optimization of ancient mural images.

Introduction

Ancient murals are the bright pearl in the treasure house of cultural heritage. At present, the protection of murals mostly focuses on the field research of ancient murals and the restoration of damaged areas of murals. For instance, Tong et al. [1] studied the paint layer and bottom layer of a Tang Dynasty tomb mural using spectral-domain optical coherence tomography. Wu et al. [2] proved the existence of casein in ancient Chinese mural pigments. Liang and Wan [3] proposed the idea of making color charts for Dunhuang frescoes to improve the color and spectral accuracy of digital imaging of cultural works of art. Cao et al. [4] proposed an ASB-LB algorithm to solve the problem of flake shedding in temple mural images. Sun et al. [5] proposed a line drawing generation method, which gives mural images different artistic styles. With the gradual development of ancient mural protection work and the maturity of superresolution reconstruction technology, mural image conservation will be further extended.

Image superresolution reconstruction refers to a technique that inputs one or more low-resolution images and outputs the corresponding high-resolution images through a specific algorithm. This will give mural images a better effect in terms of high-frequency detail information and overall image performance. From the perspective of algorithm types, superresolution reconstruction technology can be divided into interpolation-, reconstruction-, and learning-based superresolution reconstruction. Li and Orchard [6] proposed a new edge-oriented self-adaptive interpolation scheme; based on the invariability of edge direction resolution, a high resolution was used to guide interpolations to enhance the resolution of still images.
Interpolation-based methods have the advantages of low computational complexity and ease of understanding, but they also have some serious defects: restored superresolution images often appear blurred or jagged. Li et al. [7] proposed a single image superresolution reconstruction method based on a genetic algorithm and a regularization prior model. In this model, a genetic algorithm was used to search the solution space to avoid a local minimum value; then, the regularization prior model was used to perform a single-point search in the solution space, and a higher-quality superresolution reconstruction estimate was obtained. Zhao et al. [8] proposed a novel single image superresolution reconstruction method based on the unified partial differential equation, and the method achieved a good effect in enhancing image edges and suppressing noise robustly. Zhang and He [9] proposed a single image superresolution reconstruction method based on mixed sparse representation, which was particularly effective for optimizing the reconstruction of noisy images. Bahy et al. [10] proposed a method based on local adaptive regularization parameters instead of fixed regularization parameters, which is convenient for addressing the reconstruction of low-resolution multifocus images. Nayak and Patra [11] proposed a new RSRR framework to keep the reconstructed image structure consistent. Dai et al. [12] proposed a method to represent soft edge smoothness based on SoftCuts measurements. This method first obtains the key information in the original image and combines the prior knowledge of the unknown superresolution image to constrain the generation of the corresponding superresolution image. Compared with the interpolation-based method, the above methods have a better image superresolution reconstruction effect, but constraining the generation of superresolution images by prior knowledge of unknown superresolution images may make the edges of the superresolution images too sharp. Furthermore, the details of the image may become increasingly unstable with increasing image size.

In contrast, methods based on deep learning use a large quantity of training data, through multilayer nonlinear transformations, to learn the correspondence in high-level abstract features between low-resolution images and high-resolution images, and then realize the superresolution reconstruction of the image according to the learned mapping relationship between the images. Dong et al. [13] applied a convolutional neural network (CNN) to the field of superresolution reconstruction for the first time and proposed a deep-learning method for single image superresolution. For an input low-resolution image, the input image was first magnified to the target size by bicubic interpolation; then, the nonlinear mapping between the interpolated low-resolution image and the high-resolution image was fitted by the CNN, and, finally, the reconstructed high-resolution image was output. However, this algorithm still retains part of the interpolation algorithm and does not thoroughly apply the idea of deep learning. Mao et al. [14] proposed a complete convolutional encoding-decoding framework. This network consists of multiple convolution layers and deconvolution layers. Convolution layers capture the abstract content of the image and eliminate the damage; deconvolution layers upsample the features and restore image details; additionally, symmetric skips are introduced, which makes the training converge faster.
However, this algorithm is likely to produce overfitting in the superresolution reconstruction of mural image datasets. Huang et al. [15] proposed a multiframe superresolution method that jointly considers image enhancement and image denoising. This method suppressed Gaussian noise and salt-and-pepper noise and made the edges of the reconstructed high-resolution image clear. However, the feature extraction ability of the algorithm is low, and, therefore, the superresolution reconstruction of images with complex information is slightly fuzzy. Anagun et al. [16] used a variety of loss functions combined with the Adam optimizer to select a satisfactorily convergent loss function. They also added residual modules to the network to improve the performance of the model and used the Charbonnier or L1 loss function to reduce the time cost of model construction. Qin et al. [17] proposed a novel multiscale feature fusion residual network, which improved the expression ability of the network to obtain more accurate high-resolution images with satisfactory accuracy and visual effect. Zhang and An [18] introduced a superresolution reconstruction method based on transfer learning and deep learning, which can not only obtain high-quality, high-resolution images but also reduce the time cost of model construction. Ledig et al. [19] proposed the SRGAN algorithm and designed a loss function to enhance the realism of the restored image. In their method, the adversarial loss function of the generative adversarial network (GAN) was incorporated, which enables the output superresolution image to be more authentic. Lim et al. [20] proposed the EDSR algorithm, which removed batch standardization, reduced the space used during training, and removed the unnecessary modules in the traditional residual network. To improve the efficiency of high-resolution image reconstruction, Mei et al. [21] extended traditional nonlocal attention to a new cross-scale nonlocal attention to model cross-scale self-similarity. Jiang et al. [22] proposed a hierarchical dense connection network structure to improve the efficiency of superresolution reconstruction. Yi et al. [23] proposed a multitemporal ultradense memory network for video superresolution, which expanded the width of the network and reduced the layer depth to reduce computational complexity. Jiang et al. [24] proposed a GAN-based edge-enhancement network, based on which clearer images were obtained compared with previous GAN-based methods. All of the above learning-based superresolution reconstruction algorithms optimize the network structure and loss function from different perspectives and solve specific problems. However, due to the characteristics of mural images, such as small image datasets, uneven image quality, and rich image color, there are still many defects in the direct application of the existing superresolution reconstruction methods, such as fuzzy restoration of important texture information, impure color in the reconstructed image, and changes in the overall artistic effect of the original image.

Based on the aforementioned information, this study proposes a new superresolution reconstruction algorithm, which is applied to the superresolution reconstruction of ancient mural images.
The improvements of the proposed algorithm are mainly as follows: (1) The network design takes a GAN as the basic framework, including the generative network and the discriminant network; MSE loss, VGG loss, and adversarial loss functions are introduced to optimize the network in two stages. (2) The generative network is based on a CNN, in which the deconvolution operation is replaced by subpixel convolution, batch standardization is removed, and residual modules are introduced to deepen the network and optimize the network structure. (3) The discriminant network increases the number of network layers and integrates residual modules so that the network can extract more image information; the expression ability of the discriminant network is increased to further optimize the generative network model.

GAN. Since the GAN was first proposed by Ian Goodfellow in 2014 [25], there has been a new upsurge of research. A GAN is composed of generators and discriminators. The generator is responsible for generating samples, and the discriminator is responsible for determining whether the samples generated by the generator are true. The generator should confuse the discriminator as much as possible, and the discriminator should distinguish the samples generated by the generator from the real samples as much as possible. The basic structure of the GAN is illustrated in Figure 1. The target function of the GAN is as follows:

min_G max_D V(D, G) = E_{x~p_data(x)}[log D(x)] + E_{z~p_z(z)}[log(1 - D(G(z)))]   (1)

In the first part, the optimization of the discriminator is realized through max_D V(D, G), where V(D, G) is the objective function of the discriminator. The first term, E_{x~p_data(x)}[log D(x)], represents the mathematical expectation of the (log) probability that samples from the real data distribution are determined to be real samples by the discriminator; for a sample from the real data distribution, the closer the probability of being predicted as a positive sample is to 1, the better. The second term, E_{z~p_z(z)}[log(1 - D(G(z)))], represents the expectation of the negative logarithm of the prediction probability assigned by the discriminator to images generated by the generator from the noise distribution p_z(z). A higher expectation value indicates better performance of the discriminator. In the second part, the optimization of the generator is realized through min_G max_D V(D, G). The generator does not simply minimize the discriminator's objective function, min_G V(D, G); rather, it minimizes the maximum value of the objective function of the discriminator. The maximum value of the objective function of the discriminator corresponds (up to a constant) to the Jensen-Shannon (JS) divergence between the distribution of the real data and that of the generated data. JS divergence can measure the similarity of distributions: the closer the two distributions are, the smaller the JS divergence will be.

Residual Network. By increasing the number of network layers in a CNN, more abstract and semantic image features can be extracted. However, simply increasing the number of layers of the network causes gradient dispersion and degradation, which may eventually lead to saturation or even decline in the accuracy of the model on the training set. To solve this problem, He et al. [26] proposed the residual network (ResNet) and achieved satisfactory results in the classification task of the ImageNet competition. For its simple and practical characteristics, ResNet has been widely used in the fields of target detection, image segmentation, and text recognition. The basic structure of the residual module is shown in Figure 2.
In the figure, x represents the input of the residual block of the current layer, F(x) represents the residual mapping of the module, and the weight layers represent the weights of those layers. The input x passes through the linear transformation and activation of the first weight layer; after the linear transformation of the second weight layer and before its activation, the input x of the current layer is added, so that the block outputs F(x) + x after the final activation.

Superresolution Reconstruction Algorithm of Mural Images to Enhance Artistry. Based on the characteristics of ancient murals and image restoration algorithms, this study designs a new algorithm for the superresolution reconstruction of artistic mural images. The overall structure of the algorithm is shown in Figure 3, which mainly focuses on three aspects: the network structure design, the loss function, and the training and testing process.

Network Structure Design. The mural image optimization network is divided into two parts: the generative network and the discriminant network. The generative network aims to output high-resolution images after superresolution reconstruction. The discriminant network aims to determine the authenticity of the output image of the generative network versus the real mural image. The design architecture of the generative network follows the encoder-decoder structure, which is mainly divided into feature extraction and image reconstruction. The input of the network is a low-resolution mural image, and the output is a high-resolution image corresponding to the input image. In the feature extraction, 16 residual modules are introduced, based on the idea of residual learning. It is worth noting that when dealing with high-level computer vision problems, such as image classification, batch normalization (BN) is usually integrated before each activation function of the convolution layers of the neural network to speed up the training of the network model and solve the problems of gradient explosion and gradient dispersion [27]. When dealing with low-level computer vision problems and deep networks trained under the GAN framework, the addition of the BN layer will produce artifacts, consume more computing performance, and reduce the effect of image superresolution reconstruction. In such a situation, the BN operation is removed from the residual module to further optimize the network structure. The comparison between the traditional residual module and the residual module used in this study is shown in Figure 4.

After the low-resolution features are extracted, superresolution image restoration is performed, followed by the final output of the high-resolution image. During upsampling, this study uses subpixel convolution instead of transposed convolution. Subpixel convolution uses a normal convolution structure, but the number of output channels is related to the target resolution; a shuffle operation is performed over the channels to obtain an output whose resolution is the same as that of the target. Compared with transposed convolution, the best feature of subpixel convolution is that the receptive field of the feature map is larger, which can provide more image information for superresolution reconstruction.
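As a concrete illustration of the generator design described above, the following is a minimal sketch in TensorFlow/Keras (the framework named later in the experimental setup) of a BN-free residual block and subpixel-convolution upsampling. Filter counts and kernel sizes are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the generator building blocks: a residual block with batch
# normalization removed, and subpixel-convolution (pixel-shuffle) upsampling.
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    # conv -> PReLU -> conv, plus the identity skip connection: output = F(x) + x
    y = layers.Conv2D(filters, 3, padding="same")(x)
    y = layers.PReLU(shared_axes=[1, 2])(y)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    return layers.Add()([x, y])

def subpixel_upsample(x, scale=2, filters=64):
    # Expand channels by scale^2, then shuffle channels into spatial resolution.
    x = layers.Conv2D(filters * scale ** 2, 3, padding="same")(x)
    x = layers.Lambda(lambda t: tf.nn.depth_to_space(t, scale))(x)
    return layers.PReLU(shared_axes=[1, 2])(x)

inputs = layers.Input(shape=(None, None, 3))       # low-resolution mural image
x = layers.Conv2D(64, 9, padding="same")(inputs)
for _ in range(16):                                # 16 residual modules, as described
    x = residual_block(x)
x = subpixel_upsample(x)                           # two 2x stages give the 4x factor
x = subpixel_upsample(x)
outputs = layers.Conv2D(3, 9, padding="same")(x)   # reconstructed high-resolution image
generator = tf.keras.Model(inputs, outputs)
```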
The main purpose of the discriminant network is to accurately classify real superresolution images and the superresolution images output by the generative network. With the improvement in classification accuracy, it promotes the optimization of the generative network, thereby producing high-quality, high-resolution images. The discriminant network consists of the input layer, convolution layers, and a fully connected layer. Network inputs include authentic high-resolution images and generated high-resolution images. To extract higher-dimensional image features, 11 convolution layers are used. The 9th-11th layers form a residual module, and the outputs of the 8th layer and the 11th layer are then summed to obtain the final image features. This treatment makes the network avoid gradient dispersion and other problems to some extent. Finally, the discriminant network design is completed after the classification of the fully connected layer. The details of the discriminant network are summarized in Table 1.

Loss Function. The loss function of the generative network consists of content loss (l_X^SR) and adversarial network loss (l_Gen^SR). The loss function of the generative network is calculated as follows:

l^SR = l_X^SR + l_Gen^SR

Content loss includes MSE loss (l_MSE) and VGG loss (l_VGG). Generally, mean square error loss is used for network optimization to obtain high-resolution images with high similarity at the pixel level. MSE is based on the squared distance between the target variable and the predicted value for each sample: all squared losses are summed over the samples and then divided by the number of samples. The MSE loss function is as follows:

l_MSE = (1/N) Σ_{i=1}^{N} (y_i - prediction(x_i))^2

where N refers to the number of samples, (x, y) refers to a sample (x is the feature set in the training sample, and y is the real value in the training sample), and prediction(x) refers to the predicted value for the sample. The mere use of the MSE loss function is likely to produce local area smoothing, which creates difficulty in recovering lost high-frequency details, such as texture information. Therefore, the VGG loss function is integrated. The VGG loss function obtains the difference in the feature maps between the real high-resolution image and the generated high-resolution image and then optimizes the model in a higher feature dimension using a gradient descent algorithm. Specifically, the generated high-resolution image and the real high-resolution image are input into a pretrained 19-layer VGG network. Based on the feature maps obtained after VGG network processing, the Euclidean distance is calculated, which is taken as the VGG loss. The calculation formula of the VGG loss is as follows:

l_VGG = (1/(W_{i,j} H_{i,j})) Σ_{x=1}^{W_{i,j}} Σ_{y=1}^{H_{i,j}} (φ_{i,j}(I^HR)_{x,y} - φ_{i,j}(G_{θ_G}(I^LR))_{x,y})^2

where i and j denote the jth convolution (after activation) before the ith pooling layer, W_{i,j} and H_{i,j} represent the width and height of the feature map, respectively, I^HR represents the real high-resolution image, I^LR represents the low-resolution image, G_{θ_G}(I^LR) is the superresolution image generated from the low-resolution image by the network model, and φ_{i,j}(I^HR)_{x,y} - φ_{i,j}(G_{θ_G}(I^LR))_{x,y} is the difference between the real superresolution image and the generated superresolution image in the feature maps obtained through the VGG19 network.
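To make the content loss concrete, the following is a minimal TensorFlow/Keras sketch combining pixel-wise MSE with a VGG19 feature-space term. The choice of the 'block5_conv4' feature layer and the assumption that images are 8-bit RGB in [0, 255] are illustrative; the paper does not state which VGG layer it uses.

```python
# Minimal sketch of the content loss: l_X = l_MSE + l_VGG.
import tensorflow as tf

# Pretrained VGG19 truncated at an assumed feature layer ('block5_conv4').
vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
feature_extractor = tf.keras.Model(vgg.input, vgg.get_layer("block5_conv4").output)
feature_extractor.trainable = False
preprocess = tf.keras.applications.vgg19.preprocess_input  # expects RGB in [0, 255]

def content_loss(hr, sr):
    # l_MSE: pixel-wise mean squared error between real and generated images.
    mse = tf.reduce_mean(tf.square(hr - sr))
    # l_VGG: mean squared distance between the VGG19 feature maps of the two images.
    vgg_dist = tf.reduce_mean(
        tf.square(feature_extractor(preprocess(hr)) - feature_extractor(preprocess(sr))))
    return mse + vgg_dist
```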
Finally, the idea of adversarial learning is introduced into the network, and the generative adversarial loss is included in the calculation of the loss function to further optimize the generative network model. The calculation formula of the generative adversarial loss is as follows:

l_Gen^SR = Σ_{n=1}^{N} -log D_{θ_D}(G_{θ_G}(I^LR))

where D_{θ_D}(G_{θ_G}(I^LR)) represents the probability that the high-resolution image generated by the generative network is identified as a real high-resolution image by the discriminant network.

Training-Testing Flow Sheet. In this study, the training process is divided into the training of the generative network alone and the training of the generative network combined with the discriminant network. The specific training algorithm for the network model is described as follows (Algorithm 1):

Input: low-resolution images and the corresponding high-resolution images. Output: the generative and discriminant network models.
(1) Step 1: a low-resolution image in the dataset is read.
(2) Step 2: the features of the mural image are extracted, and then upsampling is performed to obtain a high-resolution image of the target size.
(3) Step 3: the MSE is calculated to update the network model; the reconstructed high-resolution mural image is output.
(4) Step 4: steps 1-3 are repeated to optimize the network model until the MSE tends to be relatively stable.
(5) Step 5: the generated high-resolution image, as the false sample, and the corresponding real high-resolution image, as the real sample, are input into the discriminant network.
(6) Step 6: high-resolution image features are extracted, and a feature vector is output after the fully connected layer.
(7) Step 7: the sigmoid function is used to transform the feature vector into a probability value and thereby determine whether the input image is a real superresolution image.
(8) Step 8: the content loss value and the adversarial network loss value are summed, and the generative network model and the discriminant network model are updated and saved.
(9) Step 9: steps 4-8 are repeated to update and optimize the parameters of the generative network and discriminant network models until the loss value of the model tends to be stable and remains so for a period of time.

The flowchart of the model training is shown in Figure 5. After model training, the generative network model and the discriminant network model are finally obtained. Then, the generative network model is tested. In the testing process, the basic steps are consistent with those of the training process, except that the parameters of the network model are no longer updated.
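The second training stage can be sketched as a single adversarial update step; a minimal, hedged TensorFlow example follows. It reuses the content_loss sketch above, and the Adam settings and the 1e-3 adversarial weight are assumptions borrowed from common SRGAN practice, not values stated in the paper.

```python
# Minimal sketch of one adversarial update (stage two of Algorithm 1).
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()       # discriminator outputs probabilities
g_opt = tf.keras.optimizers.Adam(1e-4)           # assumed learning rates
d_opt = tf.keras.optimizers.Adam(1e-4)

def adversarial_step(lr_batch, hr_batch, generator, discriminator):
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        sr = generator(lr_batch, training=True)
        real_prob = discriminator(hr_batch, training=True)
        fake_prob = discriminator(sr, training=True)
        # Discriminator: push real images toward 1 and generated images toward 0.
        d_loss = bce(tf.ones_like(real_prob), real_prob) + \
                 bce(tf.zeros_like(fake_prob), fake_prob)
        # Generator: content loss (sketch above) plus the adversarial term
        # -log D(G(I_LR)); the 1e-3 weight is an assumption.
        g_loss = content_loss(hr_batch, sr) + \
                 1e-3 * tf.reduce_mean(-tf.math.log(fake_prob + 1e-8))
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
    return g_loss, d_loss
```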
Experimental Design. Dataset: in this study, the publicly available DIV2K dataset combined with a small mural image dataset is used to complete the construction of the network model. DIV2K contains 800 pairs of images with various types and rich features, and the mural dataset contains 100 pairs of high-quality ancient mural images, which, to a certain extent, solves the problem of domain adaptation. In the training process, we use DIV2K and 50 pairs of mural images to complete the construction of the model. In the test process, the other 50 pairs of mural images are used to collect the results for data analysis. The verification dataset in this study consists of ancient mural images. The comparative experiment is divided into objective index comparison and subjective evaluation comparison to make the experiment more complete and the experimental results more convincing.

Experimental environment: the effectiveness of the proposed algorithm is verified. The hardware environment primarily consists of an Intel Core i5-9400F@2.90 GHz, with 16 GB of memory and an Nvidia GeForce RTX2070 video card. The software environment is Python 3.7 for language programming on the Windows 10 system, with TensorFlow as the framework to complete the superresolution reconstruction of mural images.

Experimental Results and Analysis. Ten mural images of different styles with different color contrast and rich texture details were locally magnified four times, and the superresolution reconstruction effect of the proposed algorithm was compared with those of the bicubic interpolation (BI) algorithm [6], the SRGAN algorithm [19], and the EDSR algorithm [20]. The results are shown in Figure 6. As shown in Figure 6, the superresolution images restored by the interpolation-based BI algorithm appear blurred, with zigzagged image texture. This is because the algorithm assumes that the gray values of image pixels change continuously and smoothly; however, this assumption is not in line with the actual situation. Additionally, this algorithm does not consider the degradation model of the image, resulting in unsatisfactory superresolution. Currently, the deep-learning-based EDSR and SRGAN algorithms are extensively used in practice. Compared with the BI algorithm, these two greatly improve the repair effect. However, due to the small number of network layers used in these algorithms for image feature extraction, more image details cannot be obtained. This drawback results in a blurred superresolution reconstruction effect at the edge regions of the image. Moreover, large deviations may sometimes exist in the optimization of image color based on these algorithms, and, therefore, the restoration effect on reconstructed image details needs to be improved. Compared with the above-mentioned superresolution reconstruction algorithms, the algorithm proposed in this study achieves a better effect on superresolution reconstruction in terms of texture information and color saturation.

Subjective Assessment. Comparisons in objective indices may not fully reflect the human visual perception of the mural superresolution reconstruction images. To make the superresolution image reconstruction more universal, we also selected five experts in the field of mural work and 20 professionals with normal vision to score the optimization effects of the four different algorithms. The highest score was 10, and the lowest was 1. The quality of the mural image was judged according to the score. All the selected experts have performed much research work in the field of murals and have a deep understanding of murals. The selection of experts for scoring enables the comparative results to be more referential and authoritative. Ten representative mural images were selected by the five experts, and the optimization effects of the four algorithms were evaluated from the perspectives of overall esthetics and texture details. The scores assigned by the experts in terms of overall esthetics are shown in Figure 7(a), which reflects whether the overall color of the reconstructed images is rich and in line with the artistic conception of the mural. The scores assigned by the experts in terms of texture structure are shown in Figure 7(b), which reflects whether the changes in the lines and texture color are consistent with the painting habits of the murals. The average scores are shown in Figure 7(c), which can avoid scoring contingency and more scientifically exhibit the advantages and disadvantages of the different algorithms. The purpose of mural image optimization is not only to protect ancient cultural relics but also to encourage ordinary people to learn about and appreciate the beauty of ancient murals. For this reason, we selected 20 people from different work positions to score 5 mural images optimized by the different algorithms, and the average scores for each algorithm were obtained. These scores represent the recognition by the vast majority of people of mural images of different quality, and thereby the excellence of the corresponding algorithms; therefore, the scoring results are more universal. The scoring results are shown in Table 2. Compared with other superresolution reconstruction algorithms, the experts in the field of murals gave a higher evaluation to the algorithm proposed in this study in terms of overall esthetics and texture detail structure, which reflects the effectiveness and superiority of the algorithm in the professional field. Similarly, the volunteers from different industries also gave higher scores to the mural images optimized under the proposed algorithm, which shows that the mural images optimized by the algorithm in this study achieved satisfactory results. Therefore, the algorithm proposed in this study also outperformed the other algorithms in terms of subjective evaluation.

Objective Assessment.
In addition to the three algorithms mentioned above, this study also selects four recently proposed algorithms that are representative in the field of image superresolution reconstruction, as well as the algorithm obtained after the discriminant network is removed from the algorithm proposed in this study, for comparisons in terms of PSNR, SSIM, the natural image quality evaluator (NIQE), and inference time.

PSNR evaluates the quality of an image by comparing the differences between the corresponding pixels of two images. A higher PSNR indicates smaller distortion and a better superresolution reconstruction effect. The PSNR is calculated as follows:

PSNR = 10 log_10 (255^2 / ((1/(W H)) Σ_{i=1}^{W} Σ_{j=1}^{H} (X(i, j) - Y(i, j))^2))

where W is the width of the image, H is the height of the image, and X(i, j) and Y(i, j) represent the pixel values of the two superresolution images.

SSIM is an index for evaluating image similarities in terms of brightness, contrast, and structure. The value range of SSIM is [0, 1], and a higher value indicates higher similarity. SSIM is calculated as follows:

SSIM(x, y) = ((2 μ_x μ_y + c_1)(2 σ_xy + c_2)) / ((μ_x^2 + μ_y^2 + c_1)(σ_x^2 + σ_y^2 + c_2))

where x and y represent the reconstructed superresolution image and the original high-resolution image, respectively; μ_x and μ_y are the average values of x and y, respectively; σ_x^2 and σ_y^2 are the variances of x and y, respectively; σ_xy is the covariance of x and y; and c_1 and c_2 are constants.

NIQE serves as an objective evaluation index, which extracts features from natural landscapes for image testing [28]. The extracted features are fitted into a multivariate Gaussian model, which is responsible for measuring the difference between the multivariate distribution of the image to be tested and a distribution constructed from the features extracted from a series of normal natural images.
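For reference, the two full-reference metrics defined above can be computed as in the following minimal sketch, using NumPy for PSNR and scikit-image for SSIM. The paper does not say how its metrics were computed; this assumes 8-bit images (peak value 255) and scikit-image >= 0.19 for the channel_axis argument.

```python
# Minimal sketch of the PSNR and SSIM metrics for 8-bit RGB images.
import numpy as np
from skimage.metrics import structural_similarity

def psnr(x, y):
    # Peak signal-to-noise ratio with a peak value of 255.
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def ssim(x, y):
    # Structural similarity; channel_axis=-1 treats the last axis as color channels.
    return structural_similarity(x, y, channel_axis=-1, data_range=255)
```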
The experimental results of the various algorithms in terms of PSNR, SSIM, NIQE, and inference time are summarized in Table 3. As shown in Table 3, the performance of the algorithm proposed in this study ranks first in SSIM and second in PSNR and NIQE. However, the proposed algorithm shows poor performance in terms of inference time, ranking sixth among all considered algorithms. This is because the algorithm proposed in this study uses a more complex network structure for image feature learning, which obtains better image quality at the cost of inference speed. Compared with the classic algorithms BI, EDSR, and SRGAN, the proposed algorithm increases PSNR by 1.2-3.3 dB and SSIM by 0.04-0.13. Compared with the algorithms used in the latest research in the field of image superresolution reconstruction, that is, literature [21], literature [22], literature [23], and literature [24], the algorithm proposed in this study also exhibits overall more satisfactory experimental results: it reduces the inference time while obtaining higher image quality. Compared with the algorithm used in the ablation study, the proposed algorithm greatly improves image quality, which illustrates the effectiveness of the algorithm improvements in this study, although the improvements increase the inference time. Based on the above analyses, the algorithm proposed in this paper fully exhibits its effectiveness and excellence in improving the quality of mural images.

Conclusions

Aiming at mural image optimization, this study proposed a superresolution reconstruction algorithm to enhance the artistry of mural images. A CNN is used as the infrastructure to realize feature extraction for the mural images. The generative network is optimized by residual learning, and then the superresolution reconstruction of the mural image is realized, based on the extracted features, through the upsampling of the subpixel convolution. In the discriminant network, a deep convolutional neural network and residual modules are used to distinguish between the generated high-resolution images and the real high-resolution images. Different from common single loss functions, the algorithm proposed in this study adopts a combination of multiple loss functions. Additionally, it uses the method of staged network model optimization to realize the scientific transformation process from low-resolution images to high-resolution images. Compared with existing algorithms, mural images optimized by the proposed algorithm show noticeable improvement according to the subjective visual effect and the objective experimental data. The results show that this algorithm has a better effect on the superresolution reconstruction of mural images with rich color and strong texture structure. However, this algorithm also suffers from some drawbacks. In the superresolution reconstruction of mural images, noise of other colors often appears in regions with a single color and strong contrast, which makes the image color impure and reduces the artistic value. In addition, the training time of the adversarial neural network is uncertain. In the future, we will fuse superresolution reconstruction with an image noise reduction algorithm to solve the phenomenon of noise in high-resolution images. We will also conduct research on adopting more scientific GAN training termination conditions to realize the superresolution reconstruction of mural images.

Data Availability

All data for the analysis in this study are included within the article.

Conflicts of Interest

The authors declare that there are no conflicts of interest.
2020-12-31T09:04:05.831Z
2020-12-29T00:00:00.000
{ "year": 2020, "sha1": "0918d2adce37d34af363969e4ee5c31f79d27a49", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/cin/2020/6670976.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "205f17c2d596599e36f98a153041d4bd2e4971f9", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
238255185
pes2o/s2orc
v3-fos-license
Cost of Illness in Patients with Duchenne Muscular Dystrophy in Portugal: The COIDUCH Study

Objective
The aim of this study was to estimate the cost of illness (COI) of Duchenne muscular dystrophy (DMD) and its relation to disease progression, using age as a proxy, and according to the ambulatory status of patients.

Methods
We conducted a cross-sectional study of patients diagnosed with DMD identified through the Portuguese Neuromuscular Patients Association (APN). Data regarding patient and caregiver demographics, patient health status, resource utilization and cost, and informal care were collected using a custom semistructured questionnaire. Labor productivity and absenteeism losses were captured using the Work Productivity and Activity Impairment questionnaire. Costs were valued using a societal perspective.

Results
A total of 46 patient–caregiver pairs were included, of which eight of the patients were ambulant and 38 were nonambulant. Age had a decreasing effect on COI, independent of the patient's disease stage. Annualized lifetime costs were at their highest in nonambulant patients around the mean age of loss of ambulation (10 years of age). The mean per patient stage-specific costs (year 2019 values) of DMD were estimated at €48,991 in the nonambulant stage and €19,993 in the ambulant stage. Direct nonmedical costs were the main cost drivers, followed by indirect costs.

Conclusions
Our results indicate a close relation between overall disease costs and disease progression. DMD is associated with a substantial economic burden, which appears to be larger around the time ambulation is lost (10 years of age). The availability of new therapeutic options that delay disease progression, especially loss of ambulation, may prove to be highly beneficial for not only patients with DMD but also their families and society.

Supplementary Information
The online version contains supplementary material available at 10.1007/s41669-021-00303-5.

Background

Duchenne muscular dystrophy (DMD) is a rare neuromuscular X-linked disorder that primarily affects males and is caused by mutations in the dystrophin gene. It has an estimated prevalence of 7.1 (95% confidence interval [CI] 5.0-10.1) cases per 100,000 males and a birth prevalence of 19.8 (95% CI 16.6-23.6) cases per 100,000 live male births [1,2]. The lack of functional dystrophin causes the progressive degeneration of muscle fibers, resulting in increased muscle atrophy and weakness. Symptoms such as delayed walking and difficulty running and climbing stairs start manifesting before the age of 5 years. Disease severity increases as patients age [3], eventually leading to the loss of ambulation around the early teenage years, followed by the need for ventilation support and eventually death before 40 years of age [4]. The natural history of the disease has evolved in recent years, as the use of physiotherapy and glucocorticoids and the advent of drugs targeting the underlying cause of the disease in some specific patient subpopulations have shown efficacy in delaying disease progression [2,5]. DMD is associated with a considerable socioeconomic burden because of the high demand for healthcare and nonhealthcare resources as well as the substantial caregiver burden, both in terms of informal care hours and reduced work productivity [6][7][8]. Evidence further suggests that the economic burden is influenced by disease progression, with patients in more advanced, nonambulant stages incurring greater annual costs than ambulant patients [6,8].
Nevertheless, given the rare nature of the disease, comprehensive cost data and its relation to disease progression across different geographical settings remain sparse. COIDUCH (Cost-of-Illness study in patients with Duchenne muscular dystrophy) aimed to assess the patient and societal burden of DMD in Portugal. In this paper, we report upon the cost of illness (COI) and its relation to disease progression, based on age as a proxy, and according to the ambulatory status of patients.

Study Design and Data Collection

This was a cross-sectional study of patients diagnosed with DMD identified through the Portuguese Neuromuscular Patients Association (APN). All patients and caregivers were informed of the study objectives and data confidentiality. Patients receiving experimental drugs or placebo in randomized controlled trials were excluded. After participants provided written informed consent, a face-to-face interview was conducted by trained interviewers. The fieldwork was carried out in June and August 2019. Primary caregivers and patients with DMD were asked to answer a customized questionnaire, developed in collaboration with the APN to ensure that questions were adequate and correctly understood. The questionnaire included questions regarding caregiver and patient demographics, patient health status, DMD-related healthcare and nonhealthcare resource utilization and costs, and DMD-related household expenses. Caregiver absenteeism and presenteeism due to patient DMD were collected using the Work Productivity and Activity Impairment (WPAI) questionnaire [9]. Data were collected retrospectively with prespecified recall periods. To minimize recall bias, most questions regarding resource utilization covered the previous 6 months. The recall period was extended to the patient's lifetime when pertaining to items with one-off acquisition costs (such as devices, DMD-related household expenses, and surgeries) that would be underreported if restricted to the 6-month recall period. Data collected from the WPAI questionnaire referred to the previous 7 days.

Assessment of Cost of Illness

Costs were valued using a societal perspective and reported in € standardized to year 2019 values using the consumer price index for Portugal [10]. Direct medical and nonmedical costs, the majority of which had a recall period of 6 months, were annualized assuming constant use of resources. One-off acquisition costs had a more extensive recall period and were estimated as cumulative per-patient mean costs during an extended time period. Unit costs for both medical and nonmedical goods and services were obtained, preferably from caregivers and/or patients or alternatively from national databases, legislation, and wholesalers [11][12][13][14][15]. Informal care costs were valued according to the proxy good method [16], in which the time spent on informal care is valued according to the price of a close market substitute. As a conservative estimate, the number of care hours was valued at a market price of €5/h, which is the average wage paid to formal caregivers of patients with DMD according to the APN. To avoid double counting of time, primary caregivers were asked about the average number of daily hours actively dedicated to informal care, discounting working hours (if working) and/or leisure time. If the caregiver was not working because of DMD, costs were calculated according to the number of informal care hours reported minus the national mean number of daily working hours, adjusted for sex of the caregiver.
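The proxy good valuation just described reduces to a simple calculation; a minimal sketch follows. Only the €5/h market price comes from the text; the example care and working hours are illustrative assumptions.

```python
# Minimal sketch of the proxy good method for valuing informal care.
WAGE_PER_HOUR = 5.0  # market price of a formal caregiver of a patient with DMD (EUR/h)

def annual_informal_care_cost(daily_care_hours, daily_working_hours=0.0):
    """Value informal care at the formal-care wage; for caregivers who stopped
    working because of DMD, the national mean daily working hours are subtracted."""
    billable_hours = max(daily_care_hours - daily_working_hours, 0.0)
    return billable_hours * WAGE_PER_HOUR * 365

# e.g., a non-working caregiver reporting 14 care hours/day, with an assumed
# national average of 8 working hours/day subtracted:
print(annual_informal_care_cost(14, 8))  # 10950.0 EUR/year
```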
Indirect costs due to absenteeism and changes in the working situation or presenteeism of the primary caregiver were estimated using the human capital approach [17], in which the average gross wage of the individual is considered a good proxy for the loss of labor productivity. Each hour of lost productivity was valued according to the Portuguese average gross wage [18,19], assuming the national average working hours in 2017, both adjusted for sex [20]. The formula used in the calculation is described in the electronic supplementary material. Two analyses were conducted. The first estimated the cost of illness of DMD according to ambulatory status. For the ambulant stage, all costs of patients still ambulant at the date of the interview were considered. Conversely, costs in the nonambulant stage consisted of all annualized costs reported during the past 6 months by nonambulant patients and all annualized costs of one-off resources incurred exclusively during the nonambulant period. One-off acquisition costs were annualized as the cumulative per-patient mean costs divided by the time period between ambulation loss and the interview. The second analysis aimed to determine the annualized lifetime cost for each patient and included all costs incurred during their disease trajectory from diagnosis to the moment of the interview. Statistical Analysis Collected data were summarized descriptively according to ambulatory stage, using summary statistics such as mean and standard deviation for continuous variables and absolute and relative frequencies for categorical variables. The relationship between the annualized lifetime cost and disease progression was analyzed using patient age as a proxy for disease progression, given its close relation with motor and cardiovascular function deterioration [21][22][23][24][25][26][27]. In this analysis, the effect of the patient's age at the date of the interview was estimated with a flexible approach using a generalized additive model. To address the skewed distribution of costs, a gamma distribution with a logarithmic link function was used, whereas a possible nonlinear effect of age was addressed by adopting a thin plate regression spline [28]. To control for confounding effects, the model was adjusted for ambulatory status at the date of the interview. This flexible modelling approach allowed hidden patterns in the behavior of the annualized lifetime cost as a function of disease progression to be estimated, without being restricted to a linear or multiplicative effect. All statistical analyses were performed using the statistical software R [29].
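The flexible model described above can be made concrete with the mgcv package in R. The sketch below is an assumption-laden illustration (the data frame and variable names are invented), but the family, link function and smooth basis follow the description in this subsection.

library(mgcv)

# Gamma family with log link for the skewed cost outcome; s(age, bs = "tp")
# is a thin plate regression spline allowing a nonlinear age effect; the
# model is adjusted for ambulatory status at the date of the interview.
fit <- gam(annualized_lifetime_cost ~ s(age, bs = "tp") + ambulatory_status,
           family = Gamma(link = "log"),
           data = coiduch)  # hypothetical patient-level data frame

summary(fit)             # smooth and parametric terms
plot(fit, shade = TRUE)  # fitted cost-age curve with confidence band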
Results A total of 46 patient–caregiver pairs provided informed consent and completed the questionnaires. All patients were male, with mean ages of 8 and 21 years in the ambulant (n = 8) and nonambulant (n = 38) groups, respectively. Loss of ambulation occurred at a mean age of 10 years, followed by use of assisted ventilation at a mean age of 17 years (Table 1). The majority of caregivers were mothers of the patients with DMD. A high proportion of caregivers of nonambulant patients were not working and also reported a high mean number of hours dedicated to informal care (Table 1). Estimated Cost of Illness All DMD-related stage-specific costs are presented in Table 2, divided according to the major cost categories. The mean per patient total annual cost of DMD was estimated at €48,991 in the nonambulant stage compared with €19,993 in the ambulant stage. Direct nonmedical costs were the main driver of DMD costs (nonambulant 61%; ambulant 60% of total costs), followed by indirect costs (nonambulant 21%; ambulant 22% of total costs) and by direct medical costs (nonambulant 19%; ambulant 19% of total costs). Mean annual direct medical costs were more than twice as high in the nonambulant stage as in the ambulant stage (€9063 vs. 3708). Ambulant-stage direct medical costs were primarily driven by physician and/or other health professional visits (€3126). In the nonambulant stage, the two main cost drivers were physician and/or other health professional visits (€3157) and medical devices (€3582) (Table 2). Mean annual direct nonmedical costs during the nonambulant stage were over twice those of the ambulant stage (€29,717 vs. 11,890), with informal care costs being the main cost driver (nonambulant €20,772; ambulant €9996) (Table 2). Indirect costs of primary caregivers from loss of productivity due to DMD were more than twice as high for the nonambulant group as for the ambulant group (€10,211 vs. 4395) (Table 2). Figure 1 shows a scatterplot of the patients' annualized lifetime costs against their respective ages, stratified according to their ambulatory stage at the time of the interview. Age is used here as a proxy of disease progression. The mean age of 10 years for loss of ambulation is corroborated by the plot. The generalized additive model estimated a decreasing effect of age independent of the patient's disease stage (Fig. 1). Annualized lifetime costs started at €25,000 and slightly declined as age increased for the ambulant patients, with age ranging from 2 to 10 years. Although nonambulant patients also presented decreasing annualized lifetime costs, the fitted model suggested a greater disease burden around the mean age of loss of ambulation, followed by a roughly linear decrease until around 20 years of age, when the cost stabilized at around €50,000 per year. Discussion To our knowledge, this is the first study to analyze the COI of DMD in Portugal according to patients' ambulatory status. Our results point to a close relation between overall disease costs and disease progression. Mean annual costs in the nonambulant stage were more than double those in the ambulant stage (€48,991 vs. 19,993, respectively). Direct nonmedical costs were the main component of annual costs in both stages and were primarily driven by informal care costs (nonambulant 42%; ambulant 50% of total costs). As DMD progresses and mobility decreases, caregivers try to compensate for the greater need for supervision and support by allocating a larger proportion of their day to informal care. This is consistent with what has been reported in other publications [6,7]. Surprisingly, costs related to nonmedical services such as formal caregivers, social institutions, and transport services remained similar in both stages (nonambulant €1615; ambulant €1894). This suggests that, even when confronted with a greater need for care, families continue to rely primarily on informal care rather than using third-party services that could potentially help reduce caregiver burden. The substantial caregiver burden also had important impacts on indirect costs because of loss of productivity, which were more than twice as high in the nonambulant stage as in the ambulant stage (€10,211 vs. 4395, respectively).
Around two-thirds (68%) of caregivers of nonambulant patients reported having altered their employment status because of DMD, either stopping work completely (58%) or reducing their work hours (11%). Direct medical costs were the least influential cost driver and resulted primarily from visits to physicians and/or other health professionals (nonambulant 6%; ambulant 16% of total costs) and medical device costs (nonambulant 7%; ambulant 2% of total costs). Costs for hospital care and for nutrition support and other health products, while barely present during the ambulant stage, reached an annual mean of €889 during the nonambulant stage. This was also the case for medical device costs, resulting from the high acquisition and maintenance costs of wheelchairs and respiratory assistance devices during the nonambulant stage. Treatment costs remained fairly low in both stages (< 1% of total annual costs), which was expected because, despite some recent therapeutic developments, standard of care continues to rely predominantly on the use of low-cost treatment options such as corticosteroids. Furthermore, as illustrated in Fig. 1, annualized lifetime disease costs appear to be largest around the time ambulation is lost (10 years of age). Loss of ambulation marks a critical milestone in disease progression, not only because of its direct clinical implications but also because of the higher economic burden it entails. Around this time, families may be confronted with a greater need to pursue one-off acquisitions, such as wheelchairs, or even to make house and/or car adaptations, which, given the short time period between diagnosis and the moment of the interview, led annualized lifetime disease costs to reach levels above €64,500 per year. Although lifetime annualized costs continued to be substantial across the late teens and adulthood, maintaining a level far above those observed in ambulant patients, they trended towards a lower level of €50,000 per year among older patients. This can be explained by the stronger dilution of one-off acquisition costs for older patients, resulting from the wider time gap between diagnosis and the moment of the interview, as well as the lower frequency of new acquisitions later in life. Interestingly, lifetime annualized costs appeared to increase again after the age of 30 years. This may be a reflection of the greater end-of-life medical costs expected to occur around this time. However, it is difficult to draw any definite conclusion because of the small number of patients in the group aged 30-40 years. Younger patients who were still ambulant at the time of the interview had a mean annualized lifetime cost of less than €25,000. This cost later appeared to decline as age/disease progression increased, a trend similar to that identified by Schreiber-Katz et al. [6], where patients in clinical severity stage II (late ambulatory with high impairment) appeared to have higher annual costs than those in stage III (early nonambulatory). Younger patients with DMD, even in the absence of any significant symptoms, naturally require more care. This may make it difficult for caregivers to estimate the amount of informal care or work productivity impact that can be directly attributed to DMD, resulting in its overestimation. Entry into primary school around the age of 6 years may also help free up caregiver time, reducing caregiver burden.
Differences in setting, costing categorization, and methodology made it challenging to compare DMD costs across different publications. Nevertheless, the results of this study appear to be largely consistent with those of other COI studies [6][7][8]. First, studies show a trend towards higher costs in patients with greater disease progression, often defined according to ambulatory status. However, one important distinction is that our analysis suggests that lifetime annualized costs are actually larger for nonambulant patients in their early teens, soon after ambulation is lost, than for both younger ambulant patients and older nonambulant patients. Second, in Portugal, as in other European countries, direct medical costs appear to be substantially lower than both direct nonmedical and indirect costs. The question remains as to which of the latter two cost categories is the main cost driver. According to our analysis, as well as those of Cavazza et al. [7] and Landfeldt et al. [8], direct nonmedical costs appear to be more influential across European countries, but there are some exceptions, for example in Italy [8] or in ambulant patients in Germany, according to Schreiber-Katz et al. [6]. Although this may be due to country-specific differences, other factors, such as cost methodology, may also have played an important role. Although the DMD costs estimated in this study were substantial, they still underestimate the true economic cost of the disease. The interview-based and retrospective design of this study made it difficult to capture all relevant direct medical costs, for example, end-of-life costs expected to occur at the end of the nonambulant stage. This was a common issue across similar publications [6][7][8] and could be solved by future studies using hospital records, which could potentially capture most of these costs. Informal care and indirect costs may also have been undervalued because we excluded the contribution of other relevant family members, who are also likely to be affected by the patient's condition. We were also unable to include potential patient indirect costs; from a technical standpoint, doing so would require a number of assumptions, such as a mean starting age of employment for these patients had they not been affected by DMD. One limitation of this study relates to possible referral bias because patient identification and recruitment were conditional on membership in the APN. Even though the APN is a nationwide association, membership is voluntary and grants access to a wide range of services (e.g., rehabilitation care), making it more appealing to patients with a more severe clinical presentation. This may explain the small number of ambulant patients identified for this study (n = 8), which, in itself, represents another limitation. Including more ambulant patients is always going to be a challenge given the small number of confirmed cases of DMD in Portugal (142 according to one estimate [30]), the rapid progression of the disease, and the difficulties of establishing an early diagnosis while patients are still ambulant. A third limitation involves the recall period used. To limit the risk of memory bias, most items pertaining to past resource consumption and costs were restricted to a recall period of up to 6 months. However, this short recall period may have led some relevant seasonal data to be omitted from this analysis, for example, costs related to acute infections occurring during the winter months.
To limit the impact of this, data concerning one-off acquisition items (medical devices, DMD-related household expenses, and surgeries) were captured over the patient's entire lifetime. Although this might have increased the risk of memory bias, we believe that the benefits of including past resource consumption that would otherwise be underreported if limited to a 6-month recall period largely outweighed any potential downsides. A major strength of this study relates to the data collection methodology. Face-to-face interviews using a semistructured questionnaire allowed for a more condensed and flexible data collection while also reducing the risk of respondents misinterpreting questions. Overall, this made data collection more robust, at least when compared with alternative methods such as telephone or mail surveys or using data from a national registry. Our results highlight the economic burden of DMD, further reinforcing the potential clinical and economic benefits associated with delaying disease progression. Disease costs during the nonambulant stage were substantial and were especially high around the time ambulation was lost. Given the current unmet treatment needs, it is crucial to weigh the potential benefits of therapeutic innovations against the expected increase in direct medical costs. Conclusion Our analysis supports the thesis that disease progression contributes to an increase in overall DMD costs. Therefore, the availability of new therapeutic options that can delay disease progression, especially loss of ambulation, may prove to be highly beneficial for not only patients with DMD but also their families and society.
2021-10-04T13:32:15.234Z
2021-10-03T00:00:00.000
{ "year": 2021, "sha1": "a0e0e4a3d49b70155da883dd699aee63e979aa14", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s41669-021-00303-5.pdf", "oa_status": "GOLD", "pdf_src": "Springer", "pdf_hash": "a0e0e4a3d49b70155da883dd699aee63e979aa14", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
237458114
pes2o/s2orc
v3-fos-license
The Effects of Age, Biological Maturation and Sex on the Development of Executive Functions in Adolescents The development of executive functions (EF) has been widely investigated and is associated with various domains of expertise, such as academic achievement and sports performance. Multiple factors are assumed to influence the development of EF, among them biological maturation. Currently, the effect of biological maturation on EF performance is largely unexplored, in contrast to other domains like physical development or sports performance. Therefore, this study aimed (a) to explore the effect of chronological age on EF performance and (b) to investigate to what extent age-related changes found in EF are affected by biological maturation in both sexes. To this end, EF performance and degree of maturity, indexed by percentage of predicted adult height (%PAH), of 90 adolescents (11–16 years old, 54% males) were measured on three occasions in a time frame of 12 months. A Generalized Estimating Equation (GEE) approach was used to examine the associations of chronological age and %PAH with the weighted sum scores for each EF component (i.e., inhibition, planning, working memory, shifting). All models were run separately for both sexes. The males' results indicated that EF performance improved with age and degree of maturity on all four components. Interaction effects between age and %PAH on inhibition showed that at a younger age, males with a higher %PAH had a lower chance of performing well on inhibition, whereas at later ages, males with a higher %PAH had a higher chance of performing well on inhibition. For working memory, there seems to be no maturity effect at a younger age, while at later ages, a disadvantage for later maturing peers compared to on-time and earlier maturing male adolescents emerged. Females showed slightly different results. Here, age positively influenced EF performance, whereas maturity only influenced inhibition. Interaction effects emerged for working memory only, with results opposite to those of the males. At younger ages, females with lower %PAH values seem to score higher, whereas at later ages, no maturity effect is observed. This study is one of the first to investigate the effect of biological maturation on EF performance, and shows that distinct components of EF are influenced by maturational status, although the effects differ between the sexes. Further research is warranted to unravel the implications of maturation-driven effects on EF that might significantly affect domains of human functioning like academic achievement and social development.
INTRODUCTION Executive functions (EF) are cognitive processes required for the behavioral control of numerous daily-life tasks and are crucial for cognitive, social, and psychological development (Diamond, 2013). Typically, EF performance and development are investigated from a "chronological age" point of view. However, it is known that there is considerable inter-individual variation in the rate and timing of biological maturation, which makes chronological age an estimate of development at best (Lloyd et al., 2014). This is especially true for adolescence, which is accompanied by many biological within-person changes (Grumbach and Styne, 1998). Biological maturation refers to the timing and tempo of the progress toward the mature biological state related to growth (Malina et al., 2004). In contrast to the EF research field, studies on physical development in the sports context have widely investigated and applied the impact of biological maturation (Cumming et al., 2017). These studies generally indicate an advantage for early maturing adolescents compared to late maturing adolescents on sports performance during the pubertal phases, due to their advanced growth and physical fitness level (Meylan et al., 2010; Rommers et al., 2019). Biological maturation could have a similar influence on EF and EF development (Juraska and Willing, 2017; Chaku and Hoyt, 2019; Stumper et al., 2020), and can affect academic performance, social development or even risk behavior (Magnusson et al., 1985; Baxter-Jones et al., 2005; Koerselman and Pekkarinen, 2017). Four main EF factors are categorized during adolescence, i.e., inhibition, shifting, working memory and planning (Miyake et al., 2000; Laureys et al., 2021). Inhibition is associated with "the deliberate, controlled suppression of prepotent responses" (Miyake et al., 2000). Shifting concerns switching between multiple tasks, operations or mental sets (Miyake et al., 2000). Working memory refers to remembering, monitoring, coding incoming information and updating information (Miyake et al., 2000; Nemati et al., 2017); and planning is related to problem solving (Laureys et al., 2021).
These four EF components (i.e., inhibition, planning, shifting, working memory) will further be used in this paper to determine EF performance during adolescence. In spite of the abundance of publications on EF and its factor structure, the development of EF is not fully understood. EF development is associated with brain maturation in general and specifically with maturation of the prefrontal cortex (Diamond, 2002; Huizinga and Smidts, 2011), resulting in a relatively rapid improvement of all four EF components during childhood (until the age of 12) in comparison to early and late adolescence (12-18 years old) (Anderson et al., 2001). During adolescence (around the age of 15), the rate of improvement decreases and adult levels of EF are attained (Huizinga et al., 2006). Zooming in on the adolescent phase, the suggestion that biological maturation may influence the rate and timing of EF development has repeatedly been made (Best and Miller, 2010). Two main hypotheses have been presented to explain such effects. The hormonal influence hypothesis suggests that during adolescence, an increase in sex hormones in the brain is related to the start and end of sensitive periods in the brain that affect EF and behavior (Chaku and Hoyt, 2019; Laube and Fuhrmann, 2020). At the onset of adolescence, the release of these sex hormones can cause an excess of synapses, a phenomenon that can evoke a decrease in the quality of information processing and possibly even in EF performance [see Blakemore and Choudhury (2018) for a review]. However, once the pruning process of neural pathways has started, the prefrontal cortex is also reorganized, leading to more efficient cognitive processing (Crone, 2009; Koolschijn et al., 2014; Juraska and Willing, 2017; Chaku and Hoyt, 2019; Laube and Fuhrmann, 2020). The second hypothesis takes into account the biological influence, as well as the social changes and challenges that are associated with adolescence (Ge and Natsuaki, 2009). This maturation disparity hypothesis states that, compared to their peers, earlier maturing adolescents might encounter more physical, cognitive and social challenges that they are not always emotionally ready to cope with. The hypothesis has mainly been investigated in females, and only rarely in males. Although this hypothesis is often used to explain the higher number of psychopathological symptoms in early maturers, there could also be beneficial consequences for EF (e.g., an increase in attention for early maturers; Chaku and Hoyt, 2019). The small number of studies on the relationship between EF development and biological maturation contrasts with the potential impact that maturation-driven differences in EF might have on an individual's success in the social, academic, and professional domains (Diamond, 2013). From that perspective, further clarification of the association between biological maturation and EF is mandatory. In addition to differences in the timing and tempo of EF development, biological maturation could also play a role in sex-related differences during EF development. A recent review by Grissom and colleagues (2019) showed that there is still a lot of ambiguity regarding sex differences on the different EF components. Some studies indicate equal EF performance between males and females from childhood to adulthood (Grissom and Reyes, 2019), while in other studies females are found to outperform males on inhibition and working memory (Laureys et al., 2021).
Especially during adolescence, possible sex differences in EF development could be explained by different timing and rates of biological maturation and the underlying hormonal processes (Anderson et al., 2001). Females typically mature earlier than males, with females starting the adolescent period around 10-11 years of age and males at around 11.5 years (Malina and Bouchard, 1992). The difference in the timing of maturation is also visible in brain maturation, more specifically in the increase in frontal gray matter, which reaches its peak at different ages for the two sexes (11.0 years for females and 12.1 years for males) (Giedd, 2004). Until now, EF development has mainly been investigated from a "chronological age" perspective. Hence, the main goal of this study is to examine the influence of biological maturation on age-related differences in EF in 11- to 16-year-old adolescents. Because of the difference in rate and timing of biological maturation between males and females, we expect differences in EF performance. Therefore, we will investigate the influence of maturation on EF separately for males and females. The hypothesis is that older adolescents will perform better than their younger peers, and that by the end of the puberty phase, early maturing adolescents will also have an advantage over late maturing adolescents. Participants A convenience sample of 94 Flemish adolescents between 11 and 16 years of age participated in this study. The participants were all recruited from the first to the fifth year at one secondary school. Only data of students who participated in at least two of the three test occasions were included for further analyses, leading to a total of 90 participants (54% males). Of these 90 participants, 88 were present at the first test occasion, 84 at the second test occasion and 85 at the third test occasion. Reasons for dropout were sickness on the day of testing or changing schools between test occasions. This project was conducted in accordance with the Helsinki Declaration and was approved by the Ethical Committee of the Ghent University Hospital (number 2017/1548). Since all participants were minors, parents or their legal representatives gave their written informed consent. All data were analyzed confidentially. Study Design A mixed-longitudinal follow-up design with three test occasions was set up to measure the influence of biological maturation on EF development. Originally, it was intended to have 4 months between each test moment. The first occasion was in October 2019 and the second in January-February 2020. The third test moment was planned in May 2020 but, due to the COVID-19 pandemic and the closing of schools in Flanders, was postponed until October 2020. During each test occasion, anthropometric characteristics were measured and EF performance was assessed using an online EF test battery. Both assessments were administered in a separate room with enough space for the participants to work without disturbance from others. At least two of the researchers were present to explain all the tests separately and to answer questions. Before the first test occasion, parents were asked to complete a form with demographic data, including the birth date and sex of the participant, and the biological parents' body height. Anthropometry During each test moment, the stature and weight of the participants were measured. Stature was measured to the nearest 0.1 cm using a portable stadiometer.
The participants were asked to stand barefoot with their heels against the stadiometer and with their head in a neutral position. Weight (0.1 kg) was measured using a digital scale. Body mass index (BMI, kg/m2) was calculated with the height and weight measurements from the first test occasion. Further BMI classifications were made based upon the BMI cut-off scores developed by the International Obesity Task Force (IOTF) (Cole and Lobstein, 2012). With these cut-off scores, the adolescents were categorized as being underweight, normal-weight, overweight or obese. Percentage of Predicted Adult Height The measured anthropometrics of the participants were used to determine the percentage of predicted adult height (%PAH) of each individual. In this study, the Khamis-Roche method was used (Khamis and Roche, 1995). With the Khamis-Roche method, an estimation of biological maturation is made based on anthropometrics. Therefore, the biological age of individuals can be ahead of (early maturing), on time with (average maturing) or behind (late maturing) their chronological age. This type of estimation has previously been linked to the pubertal status estimated with the Tanner stages (Cumming et al., 2017). The adolescent's chronological age, body height and weight, as well as the body height of both biological parents, were entered in the following equation: predicted adult stature (cm) = β0 + β1 × stature + β2 × weight + β3 × midparent stature. The intercept (β0) and coefficients (β1, β2, β3) in the equation depend on age and sex. No objective classification of early, on-time, and late maturing adolescents could be made in this study. Instead, we compared the EF performance of earlier (same-age and same-sex peers with a higher %PAH) and later (same-age and same-sex peers with a lower %PAH) maturing adolescents within this sample.
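As an illustration of this computation, the Khamis-Roche prediction and the resulting %PAH might be coded in R as follows. The published age- and sex-specific intercepts and coefficients are not reproduced here, so kr_coef() is an explicit placeholder, and taking the simple mean of the parents' heights as the midparent stature is a simplifying assumption of this sketch.

# Placeholder lookup for the age- and sex-specific intercept and
# coefficients tabulated in Khamis and Roche (1995).
kr_coef <- function(age, sex) stop("insert the published coefficients here")

predict_adult_stature <- function(age, sex, stature, weight, midparent) {
  b <- kr_coef(age, sex)  # c(b0, b1, b2, b3)
  b[1] + b[2] * stature + b[3] * weight + b[4] * midparent
}

# %PAH: current stature as a percentage of predicted adult stature;
# at a given age, a higher %PAH indicates a more mature adolescent.
# These lines run once real coefficients are supplied above.
midparent <- (mother_height_cm + father_height_cm) / 2  # simplification
pct_pah <- 100 * stature_cm /
  predict_adult_stature(age, sex, stature_cm, weight_kg, midparent)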
Cambridge Brain Sciences Test The EF test battery used in this study was the web-based Cambridge Brain Sciences (CBS) test battery. The tests in the CBS are all computerized versions of well-known and widely used neuropsychological tests measuring EF constructs. Test-retest reliability of the CBS test battery has further proved to be decent (r = 0.68) (Hampshire et al., 2012). All tests were administered online, on a 9.7-inch iPad 2017 (iOS 12.1, Apple Inc., Cupertino, CA, United States). The CBS test battery can contain up to thirteen EF tests, covering a wide range of outcome variables. In this study, seven tests were used: Spatial Span, Double Trouble, Token Search, Odd One Out, Spatial Planning, Monkey Ladder and the Sustained Attention to Response task (SART). The Spatial Span is derived from the Corsi Block Tapping task (Corsi, 1972). An adapted version of the Stroop task is used here as the Double Trouble task (Stroop, 1992). The Token Search, otherwise known as the Spatial Search task, is based upon the work of Collins et al. (1998). The Odd One Out is a computerized version of the classic fluid intelligence test, used by (among others) Brenkel et al. (2017). The Spatial Planning task is an online version of the Tower of London task (Shallice, 1982). The Monkey Ladder is a visual-spatial working memory task derived from the non-human primate literature (Inoue and Matsuzawa, 2007). Lastly, the SART is based upon a Go/No-Go task (Robertson et al., 1997). The Spatial Span, Token Search and Monkey Ladder are tests to assess visual-spatial working memory. To assess inhibition, the Double Trouble and Sustained Attention to Response tasks are used. The Odd One Out was included to evaluate shifting performance and the Spatial Planning for planning performance. Detailed information about the seven specific tasks and the outcome measures is included in Supplementary Material 1. These seven tests were always assessed in the same order as described above. All tests started with the same (low) level of difficulty for each participant, and the complexity increased or decreased depending on the accuracy of response. Data Analysis The raw CBS scores were converted into a weighted sum score for each of the four EF components (inhibition, working memory, planning and shifting). These sum scores were calculated based on the four-factor model and factor loadings described by Laureys et al. (2021): the outcome measure per EF task was multiplied by its standardized factor loading for each EF component, and the sum of these weighted scores per EF component was then calculated. Detailed information about the four-factor model and factor loadings can be found in Supplementary Material 2. Based on the weighted sum scores for each EF component, means and standard deviations (SD) were calculated per age group and per %PAH group. To end up with equal ranges for both the age and %PAH groups, the categorization of these groups was based on the lowest and highest values of both variables. Means and SD were also provided for the EF components and anthropometric data per sex, age and/or %PAH group. To facilitate comparison of age- and maturity-related changes between the different EF components, the mean difference between the oldest and youngest age groups, and between the most and least mature groups, across all three test occasions was expressed as a percentage for each EF component. To examine the influence of age and biological maturation, and the interaction between these factors, on EF performance, a generalized linear model that investigates population-average effects is necessary. Therefore, a Generalized Estimating Equation (GEE) approach (Gaussian family) was used. This approach requires repeated measures over time, without providing insight into longitudinal change over time (within individuals). Participants who completed at least two of the three time points were included in this analysis. All individual data points (i.e., the two or three measurements of each participant, considered as single data points) were used to make population-based prediction plots, while still accounting for the non-independence (i.e., intra-personal clustering) of EF scores recorded at different time points for the same participant. Chronological age and %PAH were included in the models as continuous predictor variables and the weighted sum scores for each EF component as the outcome variable. Age and %PAH were added both separately and in interaction with each other in the different GEE models. Because differences in maturational timing between males and females occur during adolescence, the GEE models were run separately for both sexes. In total, three models were fit for each EF component and per sex, with the following independent variables: age (Model 1), %PAH (Model 2), and the interaction between age and %PAH (Model 3). The Variance Inflation Factor (VIF) was checked as a measure of multicollinearity; if the VIF score is above 10, collinearity is present and interaction effects should be excluded from the analyses (Bagheri, 2015). The age and %PAH models were compared using the quasi-likelihood under the independence model criterion (QICu) (Pan, 2001; Xu et al., 2019). Generally, a lower QICu indicates a better model fit.
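In R, the modelling pipeline described above could look roughly like the following, here using the geepack package as one common GEE implementation (the specific packages used beyond R and STATA are not stated in the text, so treat this as a sketch with hypothetical variable names).

library(geepack)

# Weighted sum score for one EF component: task outcomes multiplied by
# their standardized factor loadings (Laureys et al., 2021) and summed.
# `w` is a hypothetical named vector of loadings.
d$inhibition <- d$double_trouble * w["double_trouble"] + d$sart * w["sart"]

# Three GEE models (Gaussian family) per EF component and per sex;
# repeated measurements are clustered within participants via `id`.
m1 <- geeglm(inhibition ~ age,       id = participant, family = gaussian, data = males)
m2 <- geeglm(inhibition ~ pah,       id = participant, family = gaussian, data = males)
m3 <- geeglm(inhibition ~ age * pah, id = participant, family = gaussian, data = males)

# Model comparison: geepack's QIC() reports QICu among its criteria
# (lower values indicate better fit).
QIC(m1); QIC(m2); QIC(m3)

# Multicollinearity check (VIF > 10 would argue against the interaction
# model), approximated on the corresponding ordinary regression.
car::vif(lm(inhibition ~ age + pah, data = males))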
All statistical analyses were conducted in R (version 3.5.2), and STATA (version 16.1) was used to visualize the results. RESULTS An overview of the means and standard deviations for each of the four EF components per sex, age and %PAH group is provided in Table 1. To allow for qualitative comparison of age- and maturity-related differences across the four EF components, the differences between the scores of the oldest and youngest age groups, and between the most and least mature groups, were expressed as percentage scores. The difference between the youngest participants (11.75-12.74 years) and the oldest participants (15.75-16.74 years) was 29% for inhibition, 38% for planning, 8% for shifting and 9% for working memory. With regard to %PAH, the group with the highest %PAH (96-100%) scored 15% higher on inhibition, 51% higher on planning, 14% higher on shifting and 10% higher on working memory than the group with the lowest %PAH (79-85%). In addition to the EF descriptive values, information about height, weight, BMI and %PAH per sex and age group is provided in Table 2. BMI could not be calculated for five of the 90 participants (height and/or weight was missing; 5.6%). When the BMI cut-offs were applied at the first test occasion, 14 participants were classified as underweight (15.6%), 64 as normal-weight (71.1%), seven as overweight (7.8%), and no adolescents were classified as obese. Age The first GEE models used the available data points of 49 males and 41 females, and showed a significant effect of age for both sexes, with older participants having significantly higher scores on all four components compared to their younger peers. The results are presented in Table 3 under Model 1. A visualization of this set of GEE models can be found in Figure 1, which shows a gradual increase in performance on all four EF components for both sexes. %PAH For the second set of GEE models (Model 2, Table 3), the influence of %PAH was investigated in more detail. Since %PAH could not be calculated for eight participants (i.e., the body height of the biological mother and/or father was missing), 45 male clusters and 37 female clusters were used in these analyses. These models showed different results for the two sexes. Male participants with a higher %PAH had significantly higher scores on all four EF components than participants with a lower %PAH value. In females, a higher %PAH led to higher scores on inhibition only; a marginal influence of %PAH was observed for shifting. These models are visualized in Figure 2, which again suggests that EF performance improves with increasing %PAH for both sexes. Interaction of Age With %PAH Before running the interaction models between age and %PAH, the VIF score was checked. In this study, the VIF score was 3.34, which implies no multicollinearity issues. In this final set of models (Model 3, see Table 3), 45 male and 37 female clusters were used again. Main effects of age and %PAH on EF performance, addressed in Models 1 and 2, are not considered in Model 3. Results for the interaction effects for each EF component are interpreted using the prediction plots (see Figure 3). For inhibition, the age by %PAH interaction was significant for males. The males' prediction plot shows that, at a younger age, adolescents with a higher %PAH are likely to score lower than adolescents with a lower %PAH. Around the age of 15, there seems to be a shift in this trend.
Adolescents with a higher %PAH are now more likely to have a better performance on inhibition than adolescents with a lower %PAH. No significant interaction effect was found for females in inhibition. However, the females' plot shows a similar pattern, with the shift at a slightly earlier age. No significant interaction effect of age and %PAH was found for planning in either of the sexes. Nevertheless, the males' plot for planning is similar to the inhibition plot, although higher scores (i.e., indicated by the yellow, orange, and red colors) are only seen at a later age. The female prediction plot indicates that at the younger ages, performance is rather low (indicated by the blue colors). It is only around the age of 14 that the scores gradually increase for planning. From this age onward, adolescents are more likely to perform better with increasing age or %PAH. For shifting, again, EF performance increases with age without an interaction with biological maturation in either sex: shifting scores increase with age, independent of %PAH. The prediction plot for working memory shows a different image, with marginal effects for both males and females. Here it seems that, until the age of 13-14 years, all male adolescents score medium to high on the working memory task, without a clear distinction between those with a lower or a higher %PAH. Around the age of 14, more distinct effects of biological maturation emerge. From that age onward, male adolescents at the higher end of %PAH are more likely to have a better performance. In contrast, male adolescents at the lower end of %PAH are more likely to have lower scores for working memory. The females' working memory plot shows a different pattern. Younger female adolescents with a higher %PAH seem to score lower on working memory. However, from 14 years old onward, a high working memory score is observed, independent of the adolescents' %PAH. DISCUSSION The current study explored the associations of age, biological maturation and sex with EF development during adolescence. For males, age and %PAH separately are positively associated with EF performance on all four EF components (inhibition, working memory, planning and shifting). Furthermore, a significant interaction between age and %PAH was observed for inhibition and working memory, indicating that the effect of maturity varied across age. For shifting and planning, no interaction between age and %PAH was found. Age also positively influenced all four components for females, whereas maturity only influenced inhibition. The age and %PAH interaction effect only emerged for working memory, showing that females at younger ages and with lower %PAH values seem to score higher, whereas at later ages, the score is similar for females with higher and lower %PAH. The results of the current study indicate that EF performance improves with chronological age. Although a small percentage of the increase could potentially be attributed to practice effects, the results are in line with previous research indicating that EF keep developing during adolescence, although at a lower rate than during childhood (Anderson et al., 2001; Best and Miller, 2010). We observed variation in the overall percentage difference scores across the EF components. For shifting and working memory, relatively low differences of only 8 and 9% were observed between the oldest and youngest age groups.
Although other studies found a plateau for shifting performance around late childhood (12 years old) (Huizinga et al., 2006; Best and Miller, 2010), we still observed a small increase in score per year, indicated by the GEE model, in addition to the age-related differences. The relatively low level of complexity of the shifting and working memory tasks might explain this variation. More complex EF tasks are indeed documented to keep improving at higher rates and at later ages (Miyake et al., 2000; Huizinga et al., 2006). In our study, the inhibition and planning components are based on these more complex tasks (i.e., an adapted version of the Stroop task and the Tower of London task), and we also observed a difference in performance of 29% and 38% between the oldest and youngest groups of adolescents. Mixed results are observed for the influence of biological maturation in the two sexes. Biological maturation significantly influences all four EF components for males and seems to explain less variance in females. Nevertheless, when the age and %PAH models are compared, a better model fit (i.e., indicated by lower QICu values) is seen for %PAH across all EF components and both sexes. There are several possible explanations for the differences between the sexes. One could be that the females in this study had already passed the onset of puberty and were systematically further along in their maturation process compared to the males, since higher female %PAH values were seen. Therefore, it could be that the females in this study had already passed the sensitive period for changes in plasticity caused by hormonal and neural reactions. Secondly, when observing the standard deviations around the average %PAH, the variation is smaller for females than for males. Although this could also indicate that females are indeed reaching the end of maturity before the males, it might also indicate that there is more variation in the timing of maturity in males. When the majority of females are "on-time" maturers, it is hard to detect differences in the timing of maturity, possibly explaining why biological maturation has less influence on EF performance here. In general, the different results on EF performance might indicate that sex-related differences in EF performance could be related not only to chronological age but also to biological maturation. We suggest that future studies expand the sample size in both sexes and start measuring both biological maturation and EF performance at a younger age (around 10 years old) up until older ages (around 18 years old), to examine this potential mechanism in more detail. Biological maturation did not influence age-related differences in performance on shifting and planning in either sex, but a significant interaction effect between age and %PAH was found for inhibition (males) and working memory (males and females). More specifically for the males, at a younger age, earlier maturing adolescents scored lower on inhibition compared with on-time and later maturing adolescents. However, this difference seems to diminish during adolescence, and eventually, the earlier maturing male adolescents outperform their later maturing peers on inhibition at later ages. For working memory, average scores were observed for young male adolescents, independent of maturity. By the end of the adolescent period, the later maturing adolescents are more likely to have a lower performance score, compared with the average to very good performance of earlier and on-time maturing adolescents.
Female adolescents show slightly different results for working memory. At a younger age, earlier maturing adolescents have lower scores, whereas when females are older, high working memory scores are observed independent of biological maturation. On both inhibition and working memory, a temporarily lower score is seen for earlier maturing adolescents. This could possibly be due to the reorganization of the prefrontal cortex induced by sex hormones (Chaku and Hoyt, 2019; Laube and Fuhrmann, 2020), although, if this were the mechanism, the temporary decline in performance should eventually emerge in all adolescents at some chronological age once puberty starts; such a general temporary decline in score was not observed in this study. The maturation disparity hypothesis can explain why the short decrease in scores could potentially only happen to earlier maturing adolescents, because they might encounter more challenges at puberty onset than their peers, resulting in different EF scores during adolescence (Ge and Natsuaki, 2009; Laube and Fuhrmann, 2020). FIGURE 3 | Prediction plots based on the interaction between age and %PAH in GEE model 3 per sex (A = male, B = female), for each EF component (1 = Inhibition, 2 = Planning, 3 = Shifting, 4 = Working memory). On the x-axis, age is portrayed and, on the y-axis, %PAH. EF performance is divided into seven equal intervals, going from the lowest (blue) to the highest (red) score on the particular EF component. %PAH, percentage of predicted adult height. We should also take into account that females have a 2-year head start in their biological maturation process compared to males (Malina et al., 2004). This might explain why, for working memory, an influence of biological maturation was found at the older ages for males, but not for females. The end of the adolescent period in females occurs around the age of 16-17 years. Since the adolescent period for males can last until 18-19 years of age, it could be that similar results would be found at these ages. Detailed neuroimaging studies are required to fully comprehend the brain maturation process during adolescence in both sexes and to shed light on the possible differences between early and late maturing adolescents in EF development. The results of the current study using GEE models are promising for measuring the influence of the timing of biological maturation on EF performance. Nevertheless, some limitations and future recommendations should be addressed. First, future studies would benefit from expanding the age range. Both younger age groups (closer to the onset of puberty for females) and older age groups (closer to the end of puberty for males) could be included to clarify the influence of biological maturation on EF performance and perhaps also indicate when the plateau of EF performance begins. Second, in the current study, the GEE models predict the influence of biological maturation based on the raw scores of the individual data points of all participants. However, the majority of the participants are likely to be maturing on time, with only a few earlier and later maturing adolescents (compared to their same-age, same-sex peers within this sample), as is seen in a normal population. It should be noted that with each method of estimating biological maturation (e.g., the Khamis-Roche method), some degree of error should be taken into account.
Furthermore, predictions for participants at the extreme ends of the maturation continuum (i.e., earlier or later maturing adolescents) could have a larger error rate and should be interpreted with caution. Future studies could benefit from a longer follow-up period with a wider age range and a larger sample size to investigate the interaction between EF development and biological maturation in more detail. Third, the current study used seven tasks of the CBS test battery, each with specific performance indicators. Since more research is revealing that the EF factor structure is in part dependent on the selected tasks and outcome measures (Karr et al., 2018), it would be advisable to replicate the current study with the seven EF tasks used here and perhaps add other EF tasks to see if the same results hold up. Nevertheless, and in contrast with much underpowered research concerning the EF factor structure, the four-factor structure with weighted sum scores used for the performance indicators was based on a large sample (>2,000 participants), as established by Laureys et al. (2021). Lastly, other influential factors, such as socioeconomic status (Jacobsen et al., 2017), IQ (Ardila et al., 2000) or physical activity (Alghadir et al., 2019), could be included to further examine their role in EF performance during adolescence. This study examined the influence of biological maturation on EF performance during adolescence. Previous research traditionally analyzed EF development as a function of chronological age, but our results indicate that biological maturation should also be taken into account, and even provides a better fit when examining EF performance. This is especially the case in research where maturation could potentially clarify differences in EF development and sex differences in EF performance. However, it is also important in daily-life settings, since EF can affect academic, social, and emotional development during adolescence. The pattern of interaction between age and biological maturation differs between EF components and between the sexes, probably related to maturational timing. Inhibition and working memory are clearly affected by the timing and tempo of biological maturation, while the effect on planning and shifting was minimal. DATA AVAILABILITY STATEMENT The data will be made available upon reasonable request to the corresponding author. ETHICS STATEMENT The studies involving human participants were reviewed and approved by the Ethical Committee of the Ghent University Hospital. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. AUTHOR CONTRIBUTIONS FL, LM, SD, NR, EC, MM, FD, and ML were involved in the conceptualization of the study and wrote the manuscript. FL, LM, and MM were involved in data collection. FL, LM, NR, FD, and ML were involved in data analysis. All authors contributed to and approved the final version of the manuscript. FUNDING This work was supported by the Research Foundation-Flanders (FWO) under grant number FWO_3F0_2018_0031_01.
2021-09-10T13:30:39.056Z
2021-09-10T00:00:00.000
{ "year": 2021, "sha1": "90d3b3516b03fb25d13cdbec87b687e5599add27", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphys.2021.703312/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "90d3b3516b03fb25d13cdbec87b687e5599add27", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
257800522
pes2o/s2orc
v3-fos-license
Comprehensive Explorations of CCL28 in Lung Adenocarcinoma Immunotherapy and Experimental Validation Background Chemokines have been reported to play an important role in cancer immunotherapy. This study aimed to explore the chemokines involved in lung cancer immunotherapy. Methods All the public data were downloaded from The Cancer Genome Atlas (TCGA) database. Quantitative real-time PCR was used to detect the mRNA levels of specific molecules and Western blot was used for protein levels. Other methods used included luciferase reporter experiments, flow cytometric analysis, chromatin immunoprecipitation assays, ELISA and a co-culture system. Results We found that CCL7, CCL11, CCL14, CCL24, CCL25, CCL26 and CCL28 had higher levels, while CCL17 and CCL23 had lower levels, in immunotherapy non-responders. Also, we found that immunotherapy non-responders had higher levels of CD56dim NK cells, NK cells, Th1 cells, Th2 cells and Tregs, yet lower levels of iDCs and Th17 cells. Biological enrichment analysis indicated that in the patients with high Treg infiltration, the pathways of pancreas beta cells, KRAS signaling, coagulation, WNT beta-catenin signaling, bile acid metabolism, interferon alpha response, hedgehog signaling, PI3K/AKT/mTOR signaling, apical surface and myogenesis were significantly enriched. CCL7, CCL11, CCL26 and CCL28 were selected for further analysis. Compared with the patients with high CCL7, CCL11, CCL26 and CCL28 expression, the patients with low expression of these chemokines showed a better immunotherapy response, and this effect might partly be due to Treg cells. Furthermore, biological exploration and clinical correlation of CCL7, CCL11, CCL26 and CCL28 were conducted. Finally, CCL28 was selected for validation. Experiments showed that under hypoxia, HIF-1α was upregulated and could directly bind to the promoter region of CCL28, leading to its higher level. Also, CCL28 secreted by lung cancer cells could induce Treg infiltration. Conclusion Our study provides a novel insight into the chemokines involved in lung cancer immunotherapy. Also, CCL28 was identified as an underlying biomarker for lung cancer immunotherapy. Introduction Globally, lung cancer is considered to be one of the major health concerns, responsible for almost 2.1 million new cases annually. 1 Lung cancer is generally a multifactorial disease that is influenced by many factors, like race, smoking, genetic susceptibility, and so on. 1 Among all subtypes, lung adenocarcinoma is a major one and accounts for approximately 40% of lung cancer cases. 2 Generally, surgery can provide satisfactory survival benefits to patients with early disease and is still the standard option. 3 Nevertheless, advanced lung cancer is often accompanied by local or distant metastases, making treatment more difficult. 3 Typically, chemotherapy and targeted therapy are used to treat advanced lung cancer, yet toxic side effects are still intractable. Additionally, the sensitivity rate of chemotherapy is only about 30%. 4 Immunotherapy is an emerging therapy that has achieved promising results, especially in lung cancer. All the same, there are still some patients who are insensitive to immunotherapy. Gradually, researchers began to notice those indicators that could signal immunotherapy effectiveness. For instance, Zhang et al noticed that a signature of CD8+ T cell-related genes could indicate the immunotherapy response of lung cancer. 5
Zhou et al found that low-dose carboplatin synergizes with PD-1 inhibitors in lung cancer through the STING signaling pathway. 6 Consequently, it is important and meaningful to explore the underlying factors affecting immunotherapy sensitivity.

Chemokines are small cytokines or signal proteins secreted by cells, which can induce directional chemotaxis of nearby reactive cells. 7 Chemokines also coordinate the migration and localization of immune cells in tissues and therefore play an important role in the tumor microenvironment. 7 Depending on the arrangement of the amino-terminal cysteines (N-terminals), chemokines can be divided into four subfamilies: CXC, CC, XC, and CX3C. 8 Previous studies have noted the role of the CC chemokine family in cancers. Wunderlich et al found that obesity can induce M2 macrophage polarization by regulating IL6, and the polarized macrophages can facilitate colitis-associated cancer progression through CCL-20/CCR-6-mediated lymphocyte recruitment. 9 Tas et al indicated that increased levels of MAP-1 and CCL-2 can indicate a poor prognosis in gastric cancer patients treated with platinum- and taxane-based combination chemotherapy. 10 Facciabene et al found that in ovarian cancer, local hypoxia can increase the expression of CCL28 in tumor cells, which recruits CCR10-expressing Treg cells, leading to poor survival. 11 Ren et al found that the increase in HIF-1α induced by hypoxia leads to the overexpression of CCL28, which can directly interact with CCR10 on the surface of Tregs. 12 However, the role of CC chemokines in lung cancer immunotherapy has not been fully investigated.

In our study, we comprehensively explored the underlying role of the CC chemokine family in lung cancer immunotherapy. We found that many CC chemokine family molecules can significantly affect the immunotherapy response of lung cancer, especially CCL7, CCL11, CCL26 and CCL28. Moreover, we noticed that Treg cells in the tumor microenvironment can also affect the performance of lung cancer immunotherapy and are recruited by CCL7, CCL11, CCL26 and CCL28. Next, CCL28 was selected for further validation. Results indicated that under hypoxia, HIF-1α was upregulated, which can directly bind to the promoter region of CCL28 and increase its expression. Also, CCL28 was associated with Treg cell infiltration.

Data Collection
The open-access sequencing data, mutation data and corresponding clinical information of lung adenocarcinoma were downloaded from The Cancer Genome Atlas database (TCGA-LUAD project). The initial expression profile files were in the "STAR-Counts" form and the clinical data files were in the "bcr-xml" form. All the data were collated using the R software. The reference genomic file GRCh38.gtf was used for probe annotation. The probes with mean expression > 0.05 were selected for further analysis. The limma package was used to perform the differentially expressed genes (DEGs) analysis with the thresholds of |logFC| > 1 and adj. P < 0.05. Before analysis, all the data were preprocessed, including missing value completion, probe annotation, and data standardization. The gene lists of the IFN-γ and Expanded Immune Gene signatures, which can indicate the immunotherapy response, were obtained from the study conducted by Ayers et al. 13 The markers of IFN-α/β, STING and innate immunity were obtained from https://www.gsea-msigdb.org/gsea/index.jsp
(REACTOME_INTERFERON_ALPHA_BETA_SIGNALING.v2022.1.Hs, GOBP_ACTIVATION_OF_INNATE_IMMUNE_RESPONSE.v2022.1.Hs, REACTOME_STING_MEDIATED_INDUCTION_OF_HOST_IMMUNE_RESPONSES.v2022.1.Hs).

Evaluation of the Immunotherapy Response
The assessment of patients' immunotherapy response was conducted using the Tumor Immune Dysfunction and Exclusion (TIDE) analysis. 14 The "Cancer type" was set as "NSCLC" and "Previous immunotherapy" as "No". All the patients were assigned a TIDE score, and patients with a TIDE score > 0 were regarded as immunotherapy non-responders, while those with a TIDE score < 0 were regarded as responders.

Biological Enrichment
Pathway enrichment analysis of two specific groups was conducted using Gene Set Enrichment Analysis (GSEA) and Gene Set Variation Analysis (GSVA). 17 Gene Ontology (GO) analysis was performed using the clusterProfiler package.

Cell Transfection
Lipofectamine 2000 was used for transient transfection according to the standardized process. HIF-1α siRNA (sc-35561) and control siRNA (sc-37007) were bought from Santa Cruz Biotechnology. The constructed wild-type clone of the HIF-1α binding region is nearly 250 bp and was directly synthesized into the pGL3-basic vector. The mutant bases are ACACCTGC.

Western Blot
Western blot was used to detect the protein levels of molecules. The BCA method was used to determine the protein content of cell lysates. According to the protocol, protein extraction kits were used to extract total protein. Western blot was performed following the standard process with 10% SDS-PAGE gels. The primary antibodies against HIF-1α (1:2000), CCL28 (1:3000) and GAPDH (1:5000) were obtained from Proteintech. After incubation with the primary antibody (4°C, overnight), the membrane was reacted with the secondary antibody for 2 hours at room temperature. Visualization was conducted using the enhanced chemiluminescence system.

Luciferase Reporter Experiments
Promega's Renilla luciferase reporter vector phRL-null was co-transfected with the promoter constructs. To assess promoter activity, dual luciferase reporter assays were performed on the transfected cells. The luciferase activity from individual constructs was normalized to the Renilla-driven luciferase activity in each experiment.

Sample Collection and Separation
The morning venous blood of all subjects was collected with vacuum anticoagulation blood collection tubes (5 mL × 3 tubes) and then made into single-cell suspensions. Then, 15 μL of FITC-labeled CD4 monoclonal antibody and PE-labeled CD25 monoclonal antibody were added to 150 μL of single-cell suspension, which was then kept away from light for 30 min at room temperature. After that, 2 mL of precooled PBS was added to the blood samples, which were then centrifuged at 1,000 r/min for 5 min. Then, 1 mL of membrane-permeabilization reagent was added to the blood samples. After standing for 1 hour, membrane permeation buffer was added. After 10 min, the samples were resuspended by centrifugation and 10 μL of PE-Cy5-labeled Foxp3 antibody was added. After the samples were kept away from light for 30 min at room temperature, 2 mL of precooled PBS was added. Then, the samples were centrifuged at 1,000 r/min for 5 min and resuspended in 500 μL of PBS. This study design was reviewed and approved by the Medical Ethics Committee of the Zhongda Hospital, Southeast University (No. 2020ZDSYLL043-P01), following the principles of the Declaration of Helsinki. All patients provided and signed the informed consent.
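Both the TIDE-based response labeling and the DEG screening described above reduce to simple thresholding steps. The following is a minimal sketch (not the authors' code) of these two steps, assuming the per-patient TIDE scores and a limma-style results table are available as pandas data frames; the column names used here ("tide_score", "logFC", "adj_P") are hypothetical placeholders.

# Minimal sketch of the two thresholding steps described in the methods:
# (1) labeling immunotherapy response from TIDE scores (score > 0 means
#     non-responder), and (2) filtering limma results by |logFC| > 1 and
#     adj. P < 0.05. Column names are hypothetical placeholders.
import pandas as pd

def label_tide_response(scores: pd.DataFrame) -> pd.DataFrame:
    out = scores.copy()
    out["response"] = ["non-responder" if s > 0 else "responder"
                       for s in out["tide_score"]]
    return out

def filter_degs(results: pd.DataFrame, lfc=1.0, p=0.05) -> pd.DataFrame:
    # Keep genes passing both the fold-change and adjusted-P thresholds
    mask = (results["logFC"].abs() > lfc) & (results["adj_P"] < p)
    return results[mask]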
Flow Cytometric Analysis
CD8+ T cells were eliminated by a negative selection method combining anti-CD8-FITC antibody and FITC-labeled magnetic beads. CD25+ T cells were selected by a positive selection method combining anti-CD25-FITC antibody and FITC-labeled magnetic beads. Subsequently, the concentration of the analyzed cells was adjusted to 1 × 10^6/mL, and anti-CD4-PE, anti-CD8-FITC and anti-Foxp3-APC antibodies were added for labeling for 30 min. A FACS detection tube was used for detection, and FACS flow cytometry was used to determine the proportion of sorted CD4+CD25+Foxp3+ Treg cells.

Co-Culture System and Transwell Assay
For the co-culture system, Treg cells were resuspended at a density of 3 × 10^4 cells per well and seeded in the upper chamber with serum-free medium. Subsequently, 5 × 10^4 A549 cells and culture supernatant (800 μL, control or hypoxia group) were added to the lower chamber. After incubation for 24 hours, the cells on the lower surface of the Transwell membrane were fixed and stained.

ELISA
For the ELISA detection, 0.5 mL of culture medium supernatant was first taken out and centrifuged at 3500 r/min for 10 min. ELISA detection was then performed using the ELISA kit following the standard process.

Statistical Analysis
The analysis of public data was conducted using the R software. SPSS 22.0 software was used for data analysis; measurement data conforming to a normal distribution were expressed as x ± s, and a t-test was used for comparisons between groups. The Spearman or Pearson test was used for correlation analysis. P < 0.05 indicates that a difference is statistically significant.

Treg Cells Affect the Immunotherapy Response of Lung Cancer
Further, we quantified the immune microenvironment of lung cancer using the ssGSEA algorithm (Figure 2A). Correlation analysis indicated that the TIDE score was positively correlated with CD56dim NK cells, NK cells, Tem, TFH, Th1 cells, Th2 cells and Treg cells, while negatively correlated with iDCs and Th17 cells (Figure 2B). Also, we found that immunotherapy non-responders had higher levels of CD56dim NK cells, NK cells, Th1 cells, Th2 cells and Tregs, yet lower levels of iDCs and Th17 cells (Figure 2C). A positive correlation was found between the TIDE score and Treg cells (Figure 2D, correlation = 0.144, P = 0.001). Moreover, a higher Treg level was observed in the immunotherapy non-responder patients (Figure 2E). Furthermore, in the patients with high Treg infiltration, we noticed a lower immunotherapy responder rate (Figure 2F, 33.2% vs 44.1%).

Biological Exploration of Treg Cells
The overview of Treg infiltration in TCGA-LUAD is shown in Figure 3A. Then, DEG analysis was conducted between the patients with high and low Treg infiltration, in which 107 upregulated and 3 downregulated genes were identified (Figure 3B). GSEA indicated that in the patients with high Treg infiltration, the pathways of pancreas beta cells, KRAS signaling, coagulation, WNT/beta-catenin signaling, bile acid metabolism, interferon alpha response, hedgehog signaling, PI3K/AKT/mTOR signaling, apical surface and myogenesis were significantly enriched (Figure 3C). GO analysis highlighted the terms vitamin metabolic process (GO:0006766), water-soluble vitamin metabolic process (GO:0006767), cobalamin metabolic process (GO:0009235) and tetrapyrrole metabolic process (GO:0033013) (Figure 3D).
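The correlation results reported in this subsection (e.g., TIDE score versus Treg infiltration, correlation = 0.144, P = 0.001) follow the Spearman/Pearson procedure named in the Statistical Analysis section. A minimal sketch of such a test, not the authors' pipeline, is shown below; the two input arrays are random stand-ins for the real per-sample TIDE and ssGSEA Treg scores.

# Minimal sketch: correlating per-sample TIDE scores with ssGSEA-derived
# Treg infiltration scores (Spearman and Pearson, P < 0.05 significant).
# The arrays below are synthetic placeholders, not study data.
import numpy as np
from scipy.stats import spearmanr, pearsonr

rng = np.random.default_rng(0)
tide = rng.normal(size=100)                # stand-in TIDE scores
treg = 0.15 * tide + rng.normal(size=100)  # stand-in ssGSEA Treg scores

rho, p_s = spearmanr(tide, treg)
r, p_p = pearsonr(tide, treg)
print(f"Spearman rho={rho:.3f} (P={p_s:.3g}); Pearson r={r:.3f} (P={p_p:.3g})")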
Interestingly, we found that Treg cells were positively correlated with all the key immune checkpoints, including PD-1 (PDCD1), PD-L1 (CD274), PD-L2 (PDCD1LG2) and CTLA4 (Figure 3E).

CCL7, CCL11, CCL26 and CCL28 May Affect Lung Cancer Immunotherapy by Recruiting Treg Cells
Previous studies have shown that the CC chemokine family can affect Treg cells in the tumor microenvironment. 11 Next, we found that CCL7, CCL11, CCL26 and CCL28 had the same trend as Tregs in immunotherapy response. Therefore, CCL7, CCL11, CCL26 and CCL28 were selected for further analysis.

Biological Exploration of CCL7, CCL11, CCL26 and CCL28
Based on the Hallmark gene set, we found that the hypoxia pathway was the most significant difference between patients with high and low Treg infiltration, indicating that hypoxia can induce the recruitment of Treg cells (Figure 5A). The hypoxia activity of TCGA-LUAD patients is shown in Figure 5B. Moreover, we found that CCL7, CCL11 and CCL28 were significantly upregulated in the patients with high hypoxia activity (Figure 5C). Positive correlations were found between hypoxia activity and these chemokines. For the patients with high CCL7 expression, pathways including mTOR signaling were significantly enriched (Figure 5I). For the patients with high CCL11 expression, the pathways of interferon-alpha response, allograft rejection, G2M checkpoints, IL6/JAK/STAT3 signaling, apical junction and inflammatory response were significantly enriched (Figure 5J). For the patients with high CCL26 expression, the pathways of angiogenesis, reactive oxygen species pathway, MYC targets, IL6/JAK/STAT3 signaling, apical surface and interferon alpha response were significantly enriched (Figure 5K). For the patients with high CCL28 expression, the pathways of interferon-alpha response, spermatogenesis, KRAS signaling and bile acid metabolism were significantly enriched (Figure 5L).

Clinical Correlation of CCL7, CCL11, CCL26 and CCL28
Univariate Cox regression was used to evaluate the prognostic value of CCL7, CCL11, CCL26 and CCL28. However, no significant prognostic correlation was observed for overall survival, disease-free survival or progression-free survival (Figure 6A-C). Clinical correlation analysis showed that T3-4 patients had lower CCL11 expression than T1-2 patients (Figure 6D); N1-3 patients had higher CCL11 expression than N0 patients (Figure 6E); no significant difference was found between M0 and M1 patients (Figure 6F); no significant difference was found among patients with different clinical stages (Figure 6G); male patients had higher CCL11 and CCL28 expression than female patients (Figure 6H); and no significant difference was found among patients of different ages (Figure 6I). Furthermore, we found that CCL7, CCL11 and CCL28 were positively correlated with TMB (Figure 6J); CCL7 and CCL11 were negatively correlated with MSI (Figure 6K); Treg, CCL11 and CCL28 were negatively correlated with mRNAsi, while CCL7 and CCL26 were positively correlated with mRNAsi (Figure 6L); and Treg was negatively correlated with EREG-mRNAsi, while CCL26 was positively correlated with EREG-mRNAsi (Figure 6M).

Hypoxia Induces HIF-1α and CCL28 Elevation
Based on our results, we suggest that hypoxia can induce the elevation of HIF-1α, which is responsible for the upregulation of CCL7, CCL11 and CCL28. The increased CCL7, CCL11 and CCL28 can recruit Treg cells and therefore affect lung cancer immunotherapy (Figure 7A). We took two lung adenocarcinoma cell lines, H1299 and A549, for further experiments (Figure 7B).
By inducing hypoxia in A549 and H1299 cells, we detected the mRNA expression levels of CCL14, CCL24, CCL7, CCL11, CCL25, CCL17, CCL23, CCL26 and CCL28 in the control and hypoxia groups (Figure 7C and D). The results indicated that only CCL28 was significantly upregulated in the hypoxic cells. Moreover, we noticed that in hypoxic cells, the mRNA and protein levels of both HIF-1α and CCL28 were increased (Figure 7E-I). Then, we knocked down HIF-1α in lung cancer cells (Figure S6). The Western blot results also showed that the inhibition of HIF-1α can remarkably decrease the protein level of CCL28 (Figure 8A). Also, ELISA experiments showed that the knockdown of HIF-1α can significantly reduce the secretion of CCL28 (Figure 8B). After that, we cloned and sequenced the CCL28 promoter fragment and its mutants (Figure 8C and Figure S7). Based on the luciferase reporter assay, wild-type CCL28-luciferase, but not mutant CCL28-luciferase, was able to respond to HIF-1α (Figure 8D). The ChIP assay showed that HIF-1α can bind to the promoter of CCL28 (Figure 8E).

CCL28 Was Associated with Treg Cell Infiltration
We isolated Treg cells from blood based on the methods mentioned above. The experimental results showed that the content of Treg cells identified by FACS in peripheral blood was 3.87%, and the purity after purification was 89.14-94.38% (Figure 9A). Moreover, we confirmed the morphology of the Treg cells under light microscopy (Figure 9B and C). We took the lung adenocarcinoma cell line A549 (control and hypoxia groups) and co-cultured the cells with Treg cells in the Transwell system, as shown in Figure 9D. After 24 hours, we found that the Tregs co-cultured with hypoxic A549 cells migrated to the lower chamber in greater numbers (Figure 9E). Considering the higher level of CCL28 in hypoxic A549 cells compared to the control cells (Figure 7F), we think that CCL28 might be associated with the local recruitment of Tregs, which could be induced by the hypoxic condition.

Discussion
Despite substantial technical advances in medicine, lung cancer is still a serious public health concern globally. 18 Recently, treatment patterns for lung cancer have gradually been changed by immunotherapy. Currently, researchers can conduct in-depth analyses based on public data and provide valuable research clues. [19][20][21] Here, we found that CCL7, CCL11, CCL14, CCL24, CCL25, CCL26 and CCL28 had higher levels, while CCL17 and CCL23 had lower levels, in immunotherapy non-responders. We also found that immunotherapy non-responders had higher levels of CD56dim NK cells, NK cells, Th1 cells, Th2 cells and Tregs, yet lower levels of iDCs and Th17 cells. Biological enrichment analysis indicated that in the patients with high Treg infiltration, the pathways of pancreas beta cells, KRAS signaling, coagulation, WNT/beta-catenin signaling, bile acid metabolism, interferon alpha response, hedgehog signaling, PI3K/AKT/mTOR signaling, apical surface and myogenesis were significantly enriched. CCL7, CCL11, CCL26 and CCL28 were selected for further analysis. Compared with the patients with high CCL7, CCL11, CCL26 and CCL28 expression, the patients with low expression of these chemokines showed a better immunotherapy response, and this effect might be partly due to Treg cells. Furthermore, biological exploration and clinical correlation analyses of CCL7, CCL11, CCL26 and CCL28 were conducted. Finally, CCL28 was selected for validation.
Experiments showed that under hypoxia, HIF-1α was upregulated, which can directly bind to the promoter region of CCL28 and increase its expression. Also, CCL28 was associated with Treg cell infiltration.

The results showed that CCL1, CCL2, CCL7, CCL11, CCL24, CCL25, CCL26, CCL28, CCL14, CCL15, CCL17, CCL21 and CCL23 were significantly associated with the immunotherapy response of lung cancer according to the TIDE analysis. Previous studies have focused on their role in cancer immunotherapy. Zhang et al found that CCL7 can affect lung cancer immunotherapy through recruiting cDC1 cells, indicating that CCL7 might be an underlying biomarker and adjuvant for lung cancer immunotherapy. 22 Hoelzinger et al found that CCL1 can neutralize the immunosuppressive effect of Treg cells without affecting the function of T effector cells, making it a potential means for cancer immunotherapy. 23 Liu et al revealed that mice exposed to environmental stress have increased anti-tumor immunity and become more sensitive to immunotherapy against liver cancer, which is dependent on the β-ARs/CCL2 axis. 24 Also, Yang et al found that the CCL2/CCR2 axis can recruit tumor-associated macrophages to induce immune evasion through PD-1 signaling in esophageal cancer. 25 Our results identified these CC chemokine family molecules with the potential to affect the immunotherapy response of lung cancer, which can guide the direction of future studies.

Our results also showed that Treg cells can significantly affect lung cancer immunotherapy. The immune-suppressive microenvironment of tumors is strongly influenced by Treg cells. 26 Generally, in the tumor microenvironment, Treg cells are numerous and highly activated, and are mainly responsible for tumor-induced immunosuppression. 26 Li et al found that TLR8 can specifically inhibit the glucose metabolism of Treg cells and reversibly hamper their immunosuppressive function, further affecting immunotherapy. 27 Bai et al indicated that the ANXA1 blocker Boc1 can decrease granzyme A mRNA expression in Treg cells, thereby antagonizing Treg cell-mediated immunosuppression. 28 In lung cancer, the macrophage-related molecules MARCO and IL37R can block Treg cells and support cytotoxic lymphocyte function. 29 Redin et al found that the SRC family kinase inhibitor dasatinib can enhance anti-PD-1 activity in lung cancer by inhibiting Treg cell conversion and proliferation. 30 Our results are consistent with these findings.

Moreover, we found that the hypoxic microenvironment can induce Treg recruitment and is positively correlated with the expression of CCL7, CCL11, CCL26 and CCL28. Generally, most bioenergetic processes are based on oxygen. However, hypoxia is prevalent in the tumor microenvironment and contributes to wide reprogramming in cancers. 31 In liver cancer, Suthen et al found that Tregs were significantly enriched in local hypoxic areas, and that this recruitment effect was dependent on CCL20 and CXCL5. In the hypoxic tumor microenvironment, the interaction between Tregs and cDC2 results in the loss of antigen-presenting HLA-DR on cDC2, furthering the immunosuppressive effect of Tregs. 32 In addition, Liu et al revealed that hypoxia leads to the overexpression of CCL28, which could recruit Treg cells to enhance angiogenesis in lung cancer. 33 Shi et al indicated that HIF-1α could regulate the metabolic checkpoints of Th17 and Treg cells in a glycolytic manner. 34
These results indicate an underlying crosstalk between hypoxia, the CC chemokine family and Treg cells, which might provide direction for future studies and researchers in this field.

We finally selected CCL28 for experimental validation due to its strong correlation with Treg cells. Our results showed that under hypoxia induction, HIF-1α and CCL28 were significantly upregulated compared to normal conditions. Huang et al found that in lung adenocarcinoma, hypoxia-induced CCL28 targets CCR3 on endothelial cells to promote angiogenesis. 35 Moreover, the luciferase reporter and ChIP assays showed that HIF-1α can directly bind to the promoter region of CCL28. Meanwhile, we found that CCL28 was associated with Treg cell infiltration, which might partly explain its effect on the immunotherapy response. Nowadays, immunotherapy has succeeded in only a small number of patients with lung cancer, and most lung cancer patients suffer from insensitivity. Therefore, it is extremely important to find molecules that may affect the response rate of lung cancer immunotherapy. Our results indicate that CCL28 could influence lung cancer immunotherapy, making it a potential biomarker for clinical applications.

There are some limitations to be aware of. Firstly, the public samples included in our analysis were mainly from Western populations; therefore, potential race bias is inevitable. Secondly, the experimental validation in our study was limited to the cell level; in vivo experiments should subsequently be conducted to improve the reliability of our conclusions. Thirdly, bioinformatics analysis cannot completely reflect the real biological situation, which might cause some underlying bias.

Publication Permission
All authors have agreed to publish this paper.

Data Sharing Statement
The raw data mentioned in this study can be downloaded from online databases. More detailed information can be provided by the corresponding author upon reasonable request.

Ethics and Consent Statement
This study design was reviewed and approved by the Medical Ethics Committee of the Zhongda Hospital, Southeast University (No. 2022ZDSYLL303-P01), in accordance with the principles of the Declaration of Helsinki. All patients provided and signed the informed consent.
Review and Comparison of Antimicrobial Resistance Gene Databases

As the prevalence of antimicrobial resistance genes is increasing in microbes, we are facing the return of the pre-antibiotic era. Consequently, the number of studies concerning antibiotic resistance and its spread in the environment is rapidly growing. Next-generation sequencing technologies are widely used in many areas of biological research, and antibiotic resistance is no exception. For the rapid annotation of whole-genome sequencing and metagenomic results with respect to antibiotic resistance, several tools and data resources have been developed. These databases, however, can differ fundamentally in the number and type of genes and resistance determinants they comprise. Furthermore, the annotation structure and metadata stored in these resources also contribute to their differences. Several previous reviews have been published on the tools and databases for resistance gene annotation; however, to our knowledge, no previous review focused solely and in depth on the differences between the databases. In this review, we compare the most well-known and widely used antibiotic resistance gene databases based on their structure and content. We believe that this knowledge is fundamental for selecting the most appropriate database for a research question and for the development of new tools and resources for resistance gene annotation.

Introduction
Antimicrobial resistance (AMR) poses an emerging threat to humanity. Based on a 2017 report, it is estimated that ~700,000 deaths can be attributed to AMR worldwide [1]. As stated by a CDC study, approximately 35,000 people die in the United States yearly due to antibiotic resistance [2]. A recent study, however, draws a more drastic picture: based on data from 2019, approximately 1.27 million deaths can be directly attributed to AMR worldwide [3]. It is expected that the impact of AMR will further increase and claim approximately 10 million lives yearly by 2050 [1]. The emergence of resistant microbes will not only cause untreatable primary infections, but the safe performance of routine medical procedures (such as surgeries or the chemotherapy treatment of oncological patients) will become impossible due to the inability to provide successful antibiotic prophylaxis [1,2,4]. Even though one usually associates AMR with hospitals and the misuse/overuse of antibiotics by medical professionals, the influence of agriculture and the environment is no less important [1,2,[4][5][6]. Therefore, tackling this global challenge requires the investigation of the spread of AMR between different environments.

The genetic background of antibiotic resistance can be categorized into two main mechanisms. AMR can arise through genetic mutations (e.g., modification of the antibiotic target site, or overexpression of efflux pumps or of the antibiotic target molecule) under the selective pressure of antibiotics, or through the acquisition of specific genes conferring resistance (e.g., genes coding enzymes that degrade the antibiotic compounds or open alternative metabolic pathways for evading the effects of the antibiotic) via horizontal gene transfer (HGT) [7,8]. It is believed that the majority of antimicrobial resistance genes (ARGs) transmitted between bacteria are not the novel product of widespread antibiotic usage by humans, but evolved previously for a variety of functions and have been enriched by the extensive usage of antibiotics since the mid 20th century [9][10][11].
As environmental microbes have a significant role in the spread of resistance genes, the global surveillance of ARGs in various environments is critical for understanding and combating AMR [6,11]. As bacterial AMR is currently the most important form of resistance in microbes, it is what we refer to when we mention AMR throughout this review. As next-generation sequencing (NGS) technologies have become widespread in recent years, they are commonly used in AMR surveillance studies in clinical settings [12,13], in agriculture and the food industry [14][15][16][17], and in the environment [12,18,19]. In line with the importance of the genomic surveillance of AMR, several annotation tools and databases have been developed for the analysis of the ARG content of bacterial genomes or NGS metagenomic samples [20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35]. Table 1 presents some information on the most well-known AMR databases.

ARG databases can be divided into two major types [44]: some of them contain species-specific information (e.g., the MUBI database containing mutations conferring resistance in Mycobacterium tuberculosis [45]), while others focus on ARGs from all sources (e.g., the CARD database [21]). However, ARG databases can differ not only in the covered species, but also in the type of AMR mechanism. Some databases specialize only in acquired resistance genes, while others contain only mutations (e.g., the ResFinder [42] database focuses on acquired resistance genes, while the PointFinder [30] database from the same research group covers only AMR-associated mutations). Unsurprisingly, however, there are databases with information on both AMR mechanisms (e.g., the CARD [21] or NDARO [34] databases).

The number of tools and databases focusing on AMR has grown rapidly in recent years, and many review articles have been published trying to summarize the information on these resources. However, they put more emphasis on the different tools designed for ARG annotation than on the databases supplying the information for these tasks [18,44,[46][47][48]. As the performance of each tool relies heavily on the underlying database [36], it is important to understand the advantages and limitations of all databases available to the research community. With this understanding, researchers can select the best database for their purpose. Furthermore, this knowledge can also be important for choosing the best resource when developing new annotation tools. The many available ARG resources are not only a blessing but also a curse, as researchers need an up-to-date and thorough understanding of them to select the most appropriate one for the task at hand. This can be rather cumbersome, as each database differs in structure and logic, especially in the way it stores the annotation and metadata associated with ARG sequences. Our main goal is to aid such decision making by comparing the resources from several aspects. In this review, we compare the most important ARG resources available today. Firstly, we review the structure of each database, and then we directly compare them by their content. We present this comparison separately for acquired and mutation-based resistance mechanisms, as databases can differ significantly in these regards.
Researchers might prefer one AMR mechanism over the other in their study; for example, in a study investigating environmental ARGs with potential mobilization properties, acquired resistance genes might be the primary focus, whereas mutations can be more important in a clinical context [49].

Databases Reviewed in this Article
From the databases summarized in Table 1, ARDB, ARG-ANNOT and ResFams are not covered by this review as they are not actively updated (they have not been updated since 2008, 2018 and 2015, respectively). Furthermore, Mustard is also not reviewed here, as it was constructed for a study profiling the gut resistome of humans and was not intended as a comprehensive resource of ARGs [31]. The FARME and PATRIC databases are not covered here either. FARME is based on several metagenomic studies, which were characterized based on their predicted ARG content and AMR phenotype; however, those genes were not extensively validated and might contain false positives [29,48]. PATRIC was constructed for collecting genome sequence data and associated metadata of pathogenic microorganisms [35], and necessarily relies on a specialized annotation system for the curation of the data. The ARG annotation pipeline employed by PATRIC is based on the NDARO and CARD databases as well as data from the scientific literature, which was reannotated by experts [41]; however, this is not available on their FTP site. Therefore, only the following six databases are covered in detail in this review: ARGminer, CARD, MEGARes, NDARO, ResFinder and SARG.

ARGminer
ARGminer is an ensemble database assembled from several independent ARG resources. It is based on the CARD [21], ARDB [20], DeepARG [50], MEGARes [27], ResFinder [42], and SARG [26] databases [32]. Only the acquired resistance genes were collected from these resources. After acquiring the sequences from these databases, the developers clustered them to remove duplicates and annotated them by the best match from each of the above data resources. After assigning UniProt and GeneOntology metadata to the sequences, they inferred the best nomenclature for each gene name with a machine learning model. However, as several differences can be found between databases, they also utilize a crowdsourcing model to refine the annotations (with a trust-validation filter to prevent misuse). Furthermore, they collected mobility and pathogen predictions by fitting the sequences to the ACLAME [51] and PATRIC [40] databases, respectively. The database is periodically updated with the method described above and published after verification by ARGminer evaluators. At the time of writing of this review, the latest update of the database dates to April 2019.

CARD
The Comprehensive Antibiotic Resistance Database (CARD) is a hand-curated resource developed to cover the entire spectrum of ARGs [21]. Every ARG is included in the database based on three criteria: the sequence must be available in the GenBank repository, it must increase the Minimal Inhibitory Concentration (MIC) in an experimental validation setting, and this validation must be published in a peer-reviewed journal. Only a handful of historical β-lactam antibiotics are an exception from the above, as they do not have an associated peer-reviewed publication [37].
The CARD database is built around an ontology-driven framework, where the resistance determinants and their associated metadata are recorded in the Antibiotic Resistance Ontology (ARO) network, and even the sequences and the thresholds used for their detection are stored in a specialized ontology (Model Ontology, MO) [36]. CARD contains resistance genes as well as resistance mutations, which are organized in a species-specific manner. Furthermore, as CARD uses a strict curation procedure for incorporating genes, to increase sensitivity its developers have created a special database (the CARD Resistomes & Variants module) that contains in silico validated ARGs based on the genes stored in CARD [37]. The database is regularly updated based on reviews of the scientific literature by expert curators, whose work is augmented by a machine learning algorithm (CARD*Shark) that sorts scientific publications by relevance for the curation process. The current version of CARD was updated in October 2021. It is important to note that CARD is freely accessible for academic researchers only, and commercial use is permitted only with a written license.

MEGARes
MEGARes is also an assembly of multiple resources, designed specifically for annotating metagenomic data [27]. The first version of the database was based on ResFinder [42], ARG-ANNOT [22], CARD [21] and the Lahey Clinic β-lactamase database curated by NCBI. During the update of the database to MEGARes 2.0 [38], further sequences were collected from the newer versions of CARD [36] and ResFinder [42] and from the NCBI Bacterial Antimicrobial Resistance Reference Gene Database [39]. Furthermore, MEGARes 2.0 also incorporates biocide- and metal-resistance genes derived from the BacMet database [52]. After the duplicates were removed from the sequences collected from these resources, the genes were reannotated, which revealed several overlapping genes between the ARG databases and BacMet. As the purpose of the database is to form the basis of the ARG annotation of metagenomic reads for read-abundance-based analysis, the annotations are stored in the form of an acyclic graph, which prevents one read or contig from being assigned to multiple nodes [27] (a minimal sketch of such a single-path hierarchy follows below). The database contains antibiotic resistance genes as well as mutations; however, the mutations are not assigned to microbial species due to the nature of the annotation graph. The current version of the database, at the time of the writing of this review, was last updated in October 2019.

NDARO
The National Database of Antibiotic Resistant Organisms (NDARO) is a comprehensive database dedicated to antibiotic resistance, curated by NCBI [34]. The resistance genes are stored in the Reference Gene Catalog, whose predecessor was the Bacterial Antimicrobial Resistance Reference Gene Database, with the RefSeq PRJNA313047 BioProject (https://www.ncbi.nlm.nih.gov/bioproject/PRJNA313047) (accessed on: 15 February 2022) storing the reference sequences [39]. This database was constructed from the ResFinder [42], CARD [36], RAC [53] and INTEGRALL [54] databases with extensive curation of the associated scientific literature. Since the expansion of the database in 2021, AMR mutations, general stress response genes and virulence genes are also curated within NDARO for the clinically important pathogens [34]. NDARO is updated regularly; the latest database version was released in December 2021.
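To make the single-path annotation hierarchy of MEGARes described above more concrete, the following is a minimal sketch of how such an acyclic structure keeps read counting unambiguous; the entries are illustrative stand-ins, not actual MEGARes records.

# Minimal sketch of a MEGARes-style acyclic annotation hierarchy: each
# reference sequence hangs off exactly one path (type -> class -> mechanism
# -> group), so a read matching it is counted at a single node per level.
from collections import Counter

# header -> (type, class, mechanism, group); one path per sequence
ANNOTATION = {
    "seq_0001": ("Drugs", "betalactams", "Class A betalactamases", "TEM"),
    "seq_0002": ("Drugs", "Aminoglycosides", "Phosphotransferases", "APH3"),
    "seq_0003": ("Multi-compound", "Drug_and_biocide", "MDR efflux", "MEX"),
}

def aggregate(read_hits, level):
    # Count read hits at one hierarchy level (0 = type ... 3 = group)
    return Counter(ANNOTATION[h][level] for h in read_hits if h in ANNOTATION)

hits = ["seq_0001", "seq_0001", "seq_0003"]
print(aggregate(hits, 0))   # Counter({'Drugs': 2, 'Multi-compound': 1})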
ResFinder/PointFinder
ResFinder [42] and PointFinder [30] are dedicated tools for acquired resistance genes and resistance mutations, respectively. These were separate AMR data resources; however, since ResFinder 4.0, they have been developed under the same project [33]. ResFinder was originally developed on the basis of the Lahey Clinic β-lactamase database, ARDB [20], and an extensive literature review. To develop a more comprehensive resource of AMR determinants, the developers of ResFinder constructed a database dedicated solely to mutations conferring resistance, named PointFinder. During the concatenation of the two databases under the ResFinder 4.0 project, not only was extensive expert curation applied to the data, but phenotype prediction tables were also constructed to help researchers connect genotype information with potential phenotypic traits. With regular updates, the latest versions of ResFinder and PointFinder were released in September and February 2021, respectively.

SARG
The Structured ARG reference database (SARG) is a hierarchically constructed database [26] based on the CARD [21] and ARDB [20] data resources. Its developers retained only the acquired resistance genes from these databases and, after duplicate removal, organized the genes into a two-level hierarchical architecture. The higher level of this hierarchy is the type of the resistance, indicating the antibiotic that the genes confer resistance to, while the lower level is the class of the genes. In 2018, the developers of SARG expanded the database with ARG homologs found by aligning the NCBI nt database to SARG [43]. They regularly update the database in a similar manner, with the latest version released in January 2022; however, they have not introduced any new ARGs since the 2019 version. SARG, similarly to CARD, is freely accessible for academic purposes only, and a written permit is necessary for commercial use.

Number of Sequences and ARGs in the Databases
To compare the ARG content of the different databases, we first compared the number of sequences stored in them and the associated count of unique genes (Figure 1). Figure 1 shows only resistance genes and biocide resistance genes (to maintain comparability between databases); virulence and metal resistance genes were omitted. The number of unique resistance genes was counted based on the names associated with the particular sequences (i.e., if only the gene family name was given for multiple variants, then only the gene family name was included in the gene count, but if variants had unique names, they were counted separately). In the case of ARGminer, we found several different nomenclature forms of the same ARGs, which is not surprising, as one of the main goals of the database was to collect and standardize this information with the aid of crowdsourcing. However, as we did not intend to make such a standardization in this review, it is possible that the same gene was counted multiple times in the case of ARGminer in Figure 1. We tried to reduce the risk of this bias by converting gene names to lowercase when comparing them, as the differences in ARG name nomenclature usually concern only the casing of the letters. Furthermore, we found 13, 9 and 3 duplicate sequences in the NDARO, ResFinder and MEGARes databases, respectively (the number of sequences in Figure 1 is corrected for the presence of duplicates).
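The gene counting and duplicate detection described above can be reproduced with a few lines of code. The following is a minimal sketch, assuming a FASTA file of ARG reference sequences with the gene name as the first token of each header; the file name and header layout are hypothetical, and real databases differ in how they encode gene names.

# Minimal sketch: case-insensitive counting of unique ARG names and
# detection of duplicate sequences in a FASTA file, mirroring the two
# checks described in the text. "args.fasta" is a hypothetical input.
import hashlib
from collections import defaultdict

def read_fasta(path):
    name, seq = None, []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line.startswith(">"):
                if name is not None:
                    yield name, "".join(seq)
                name, seq = line[1:].split()[0], []
            elif line:
                seq.append(line)
    if name is not None:
        yield name, "".join(seq)

unique_genes, by_hash = set(), defaultdict(list)
for gene, seq in read_fasta("args.fasta"):
    unique_genes.add(gene.lower())  # casing-insensitive gene name count
    by_hash[hashlib.sha1(seq.upper().encode()).hexdigest()].append(gene)

duplicates = {h: g for h, g in by_hash.items() if len(g) > 1}
print(len(unique_genes), "unique gene names;", len(duplicates), "duplicated sequences")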
The presence of duplicate genes and corresponding sequences in a database might cause the overestimation of those genes if the user does not pay enough attention while reviewing the results. In Figure 1, a clear difference can be observed between CARD and the rest of the databases in the relationship between the number of unique sequences and the corresponding genes. One might expect that, by keeping one reference sequence for each gene, CARD is prone to producing false negatives in homology searches; however, this is overcome in CARD by the use of individual detection thresholds for the genes, stored in the Model Ontology [36].

Figure 2 shows the differences in the number of antibiotic resistance genes (excluding those conferring resistance through mutations) associated with the antibiotic classes stored in the respective database for CARD and ResFinder. We selected these two due to the extensive differences in the depth of their antibiotic classification. For the rest of the databases (MEGARes, NDARO and SARG), the same figures can be found in Supplementary File S1. In the case of ARGminer, we could not construct such a figure, as notable differences were found in some cases between the antibiotic classifications of different records for the same genes. In each of the above figures, the respective classification scheme of each database was used. As one would expect, aminoglycoside and β-lactam antibiotics are the most populous categories in both databases. However, there is a significant difference in the classification depth of β-lactams between CARD and the other resources. In CARD, separate β-lactam groups have their respective categories (such as penems, penams, carbapenems, cephalosporins, etc.), while the others label them only as β-lactams. Furthermore, the presence of several collective categories in the MEGARes database is notable (e.g., multi-drug resistance or drug and biocide resistance, etc.). The reason for the presence of such categories is the acyclic form of the MEGARes annotation graph, which does not allow the same gene to link to multiple groups. These figures clearly show that the most comprehensive antibiotic classification of the genes can be expected from the CARD database; however, the differences also emphasize that expert knowledge is important for understanding the results of ARG annotation, and one cannot expect to rely entirely on the output of a database.

Microbial Genera with Corresponding AMR Mutations in the Databases
Next, we compared the number of genes conferring resistance through mutations for microbial species in each database (Figure 3). Among the databases covered in depth in this review, only CARD, MEGARes, NDARO and PointFinder (an element of ResFinder 4.0) comprise such information. Although MEGARes also has information on mutations causing resistance, connecting them to species is not applicable in this case due to the nature of the annotation architecture. In comparing the microbes for which data are stored in each database, we had to find a taxonomic level that allows a standardized comparison between all databases. We decided to count genes at the genus level. We had to diverge from this principle in only one case, where the arbitrary group propionibacteria had to be used instead of the corresponding genus. For simplicity, despite this exception, we further refer to the groups of microbes used for the classification in Figure 3 as genera.
It is apparent from Figure 3 that CARD contains mutations for the highest number of genera (37) among the databases, and it even has 19 non-species-classified genes as well. In contrast, NDARO stores genes for 11 genera, while PointFinder covers only 10. Not every genus considered by the databases belongs to the bacteria: CARD stores two genes for the Chlamydomonas algae and two for the archaeal genus Halobacterium, while PointFinder has six genes for Plasmodium protozoa. The genera considered by the NDARO and PointFinder databases are primarily human pathogens, especially those among the critically important bacteria for human health determined by the WHO in 2017 [55][56][57] (the ESKAPE pathogens: Enterococcus faecium, Staphylococcus aureus, Klebsiella pneumoniae, Acinetobacter baumannii, Pseudomonas aeruginosa, and Enterobacter species). Furthermore, NDARO has 11 genes assigned to the Salmonella genus, one of the most important foodborne pathogens and one often associated with resistance-conferring mutations [58]. PointFinder has a significant collection of Mycobacterium genes (36 genes), which is even more notable in CARD, with 63 genes associated with this genus. Pathogens from the Mycobacterium genus, especially M. tuberculosis, are among the most important disease-causing bacteria that develop resistance through mutations. Furthermore, as this pathogen needs long incubation times for culturing, whole-genome sequencing based approaches are valuable alternatives [59]. The CARD database differs markedly from the other data resources reviewed here due to the high number of microbial genera it collects data for and the number of genes it stores concerning AMR mutations. These properties make it especially suitable for AMR mutation screening in a wide variety of study settings, even in the case of environmental AMR surveillance, as it stores mutations for typical environmental genera such as Thermus or Halobacterium. A notable number of genes is stored in the database for the Mycolicibacterium genus (4 genes), which is at the forefront as a potential bacterium for degrading plastic pollutants [60].

Conclusions
Previously, several ARG databases were constructed to form the basis of the ARG annotation of whole-genome sequencing and metagenomic samples. With the advent of NGS, their significance is even more profound, and they have become an important augmentation of previous phenotypic-screening-based studies. In this review, we compared the accessible and regularly updated ARG databases that have had new versions released recently. The main focus of this review was on the architecture and content of the different databases, in contrast with previous studies mainly focusing on the tools used for annotation. However, understanding how the databases are constructed and the differences between them is crucial for every researcher in the field of AMR, so that they can use the most powerful tool for their research question. Based on the differences outlined in this review, it seems that CARD and NDARO are prominent among the databases. NDARO contains the most acquired resistance genes; however, CARD comprises a similarly high number of genes, making both of them suitable tools for ARG annotation. In the case of mutations conferring resistance, however, CARD dominates the other resources. We advise that in cases where mutations or both types of resistance are considered, CARD should be the number one data resource.
Otherwise, choosing NDARO can be a similar or somewhat preferable choice over CARD, considering its higher acquired resistance gene content. However, one is usually interested in both resistance genes and mutations, and only in special cases considers acquired resistance alone (e.g., when one is interested in environmental resistance determinants with the potential for transmission to pathogenic bacteria). Furthermore, one should also consider the annotation tool when selecting the most appropriate database. CARD has an advantage in this regard, as its annotation tool (RGI) is accessible through a web interface or can be downloaded as command line software to a computing cluster as well. In contrast, NCBI's AMRFinderPlus is accessible only as a local tool for Linux-based operating systems, thus requiring specialized bioinformatic skills to operate it. However, technical aspects are not the only considerations when selecting the most appropriate tool for a study. For example, deep learning approaches are usually considered to be superior in detecting novel resistance gene variants [50], but they rely on the database they were built on (e.g., the latest version of DeepARG was built on the ARGminer database). There are also annotation tools applicable with any user-defined database [23,28]; the comparison of such tools, however, is beyond the scope of this review. In conclusion, CARD might be the first-choice database in most cases, but the best option can differ based on the research question. Furthermore, the differences in the antibiotic classifications of the databases emphasize the importance of expert knowledge for interpreting the results. Moreover, as some databases are accessible for non-academic parties only with a written permit, it is important to be familiar with the terms of using these resources.

Future Perspective
We believe that during the evaluation of the performance of different ARG annotation tools, differences in the underlying databases should also be considered. Moreover, as major differences can be observed in ARG nomenclature between databases, a standardization procedure would be advantageous for enabling direct comparisons between results generated from different resources. Such standardization would benefit more than just the comparability of ARG data resources. One solution to the issue was proposed by ARGminer in the form of crowdsourcing [32], which could standardize the nomenclature within one framework. However, for a unified solution, the development of ground rules would be necessary, as has been proposed for other issues of ARG nomenclature [61,62].
Research on the Transition from Kindergarten Language Education to Primary School Language Education

This paper analyzes the current situation and existing problems of language education in the transition from kindergarten to primary school and proposes corresponding strategies, using literature review, observation, interviews and other methods. Based on the perspectives of lifelong education and continuous education, drawing on rich literature and a large body of factual evidence, supplemented by the actual data obtained from the survey interviews, this paper analyzes the language education problems in the transition from kindergarten to primary school and thereby proposes targeted suggestions on how to achieve a smooth language education transition from kindergarten to primary school.

Survey purpose
Through interviews on language education in the transition from kindergarten to primary school, this paper analyzes the current situation and existing problems of language education in the transition, and proposes corresponding solutions to better promote children's smooth transition to primary school.

Survey object
In this study, a large-class teacher of a kindergarten in Dadong District of Shenyang City and a first-grade Chinese teacher of a primary school were selected as research objects.

Survey content
The content of this survey can be divided into three parts. The first part is an interview with the primary school Chinese teachers. The main contents include their views on new students' language preparation before entering school, their views on the four abilities of listening, speaking, reading and writing, and their suggestions for the preparation of teaching materials. The second part is an interview with the kindergarten teachers, which mainly covers preparations for the language education of children who are about to enter primary school, day-to-day language education work, communication with primary school teachers, and the setting of related teaching materials. The third part is classroom observation of a kindergarten large-class language activity lesson and a primary school Chinese lesson. Through classroom observation, it analyzes the differences in language teaching in the transition stage.

Survey methods
This paper mainly uses the following research methods. Interview method: through a prepared interview outline, 13 teachers in the primary school and 6 teachers in the kindergarten were interviewed in depth, and their doubts, opinions and suggestions concerning language education in the transition from kindergarten to primary school were obtained through the interviews. Observation method: this study adopts natural observation, raising and analyzing problems through observation and recording.

Differences in language education goals between kindergarten and primary school
Some interview records are as follows: Teacher T: "Generally, I design teaching objectives based on the content of the textbook." Teacher Y: "The child's interest is really important, but parents still want their children to learn relatively difficult knowledge." In the interviews with teachers, it was found that most teachers design teaching objectives according to the content of language teaching; some teachers believe that teaching objectives should take parents' requirements into account; only a few teachers believe that language education goals should be based on children's interests.
The language education goal in the "Guidelines for the Guidance of Kindergarten Education (Trial)" stipulates that "the different moods expressed by different symbols are understood in reading", while the goal of language education in the "Criteria for Compulsory Education Chinese Courses" stipulates that "the students need to know 100 Chinese characters and write 50 Chinese characters." [3] By comparison, the goals of kindergarten and primary school language education are very different. It is recommended that kindergarten teachers and primary school teachers communicate well and adopt effective methods in the teaching process to help children successfully cross the transition stage from kindergarten to primary school.

Differences in language education content between kindergarten and primary school
Some interview records are as follows: Teacher L: "Because children are young and their concentration is more difficult to sustain, most of the time they take a language course in the form of a picture book." Teacher Q: "In picture-book lessons, children can freely develop the plot and exercise their imagination through pictures. The picture book is still a good way of education." The primary school language education curriculum has a textbook uniformly compiled by the Ministry of Education, and the content is more focused on pinyin literacy. Therefore, it is difficult for children who are new to primary school to understand abstract pinyin and words, and they may thus gradually lose interest in learning Chinese. It can be seen that the differences in language education content between kindergarten and primary school are very large.

Differences in language education methods between kindergarten and primary school
Some interview records are as follows: Teacher X: "Although we know that game teaching can stimulate children's interest, we only use game teaching occasionally because of the heavy learning tasks." Classroom teaching in primary schools pays more attention to the teaching of knowledge and repeated practice; in terms of teaching methods, lecturing and practice are mainly used. "Playing through learning, learning through playing" and "teaching through lively activities" are the teaching philosophy and purpose of the kindergarten, which pays more attention to game teaching. Therefore, the differences in language education methods between kindergarten and primary school are also very large.

Differences in language education teaching evaluation between kindergarten and primary school
Some interview records are as follows: Teacher L: "It is very necessary to praise and encourage children's language in a timely manner. For example, we should give timely praise and encouragement to children who have mastered new words." Teacher H: "Tests and examinations are the most important ways to test students' learning situation." It was learned from the interviews that most kindergarten teachers are rather superficial in their evaluation of language learning, while most primary school teachers take test scores, the mastery of pinyin literacy and the ability to understand knowledge as the main aspects of language learning evaluation. It can be seen that there are also large differences in language education teaching evaluation between kindergarten and primary school.

Parents' language education concept is backward: they pay attention to knowledge and ignore ability
As we all know, Chinese children are good at exams and are among the most outstanding children in exams.
However, when it comes to turning knowledge into practical ability, Chinese children clearly lag behind children in the world's developed countries, and this is inseparable from their parents' educational philosophy. Chinese parents value how much book knowledge their children have acquired and rarely consider how much ability their children have developed. Therefore, in language education, parents pay more attention to how many words their children learn and how many ancient poems they can recite, and rarely think about how to develop their children's language skills, such as reading ability and expression ability.

Teachers' language education is utilitarian: they value literacy and ignore interest
At present, most kindergarten large-class teachers are eager to make children's vocabulary reach a certain size in order to meet the goals of primary school language education. In the teaching process they focus on the introduction of new words to prepare children for promotion to primary school. However, such dull word teaching can hardly attract children's interest. Interest is the premise of everything; if a child is not interested in this boring teaching method, the teaching will achieve only half the result with twice the effort. Cultivating interest in learning, and turning children from passive learners into active ones, enables educational efforts to achieve twice the result with half the effort.

The language education transition between kindergarten and primary school is unbalanced, and the one-way transition is prominent
The transition from kindergarten to primary school is a very important task for kindergartens. Kindergartens actively carry out all aspects of preparatory work for children before they enter primary school, whether in terms of teaching objectives, content, methods, evaluation, or scheduling. However, most primary schools do not take the initiative to contact kindergartens and rarely consider the physical and mental characteristics of children when they first enter school, thus creating a one-way connection.

There is a lack of support for the language education transition in the management of educational institutions
At present, the work of language education transition from kindergarten to primary school remains superficial, limited to measures such as extending class time and reducing games and activities; the content, methods, evaluation, and other aspects of language education have not really been addressed. Therefore, the language education transition from kindergarten to primary school requires educational institutions to plan further from the perspective of management. Without the strong support of educational institutions, the work of language education transition from kindergarten to primary school is difficult to implement.

Emphasize work with parents and reach a consensus with them on language education in the transition
The ecological theory of human development holds that the transition from kindergarten to primary school is a three-dimensional educational system: kindergartens, primary schools, families, and society all belong to this system and mutually influence one another. Meanwhile, as the first teachers of their children, parents have a profound impact on all aspects of the children. Therefore, kindergarten and primary school teachers must communicate with parents in a timely manner.
By inviting parents to the school for visits and hands-on experiences, the school helps parents establish a correct view of education and reach an educational consensus with them, ensuring that the work of the transition from kindergarten to primary school is carried out smoothly. [4]

Strengthen teachers' professional training in language education for the transition from kindergarten to primary school
Kindergarten children and primary school students are at different ages, so their physical and mental development characteristics differ. Kindergarten and primary school language teachers must follow the developmental characteristics and laws of children at different stages in order to do a good job in language education during the transition from kindergarten to primary school. Through unified professional training of kindergarten and primary school language teachers, teachers at both levels can understand each other's language education and teaching work and can cooperate and communicate better; they can thus jointly explore the problems arising in the transition stage and propose corresponding countermeasures.

Enhance the interaction between kindergarten and primary school teachers, and improve the quality of language education in the transition stage
The fundamental purpose of the transition from kindergarten to primary school is to accommodate the continuity of children's development by maintaining the continuity of the learning environment between kindergarten and primary school. Effective interaction between kindergarten and primary school teachers is the key to maintaining this continuity, as it ensures the continuity of teacher-student relationships, of educational concepts, and of curriculum teaching across the two stages. Therefore, interaction between teachers at these two different stages should become a normalized practice, and kindergartens and primary schools should develop effective communication rules through consultation to ensure the quality of language education.

Establish relevant management systems to ensure an effective language education transition from kindergarten to primary school
Although the transition from kindergarten to primary school is a hot topic in the education field, China has not yet given it enough attention at the policy level; compared with kindergarten language education, primary schools are relatively indifferent to it. At the policy level, we should first strengthen the role of primary school language education in the transition from kindergarten to primary school. Secondly, pre-school education should be gradually incorporated into the scope of compulsory education so that the two different stages of education can be organically integrated, breaking the vacuum between early childhood education and primary education and thus ensuring an effective language education transition from kindergarten to primary school. [5]
Conclusions
Combining interviews with kindergarten and primary school language teachers and observations of language classes, this study summarizes the language education problems in the transition from kindergarten to primary school and proposes strategies for these issues:
1. It is recommended to pay attention to work with parents and to reach a consensus with them to achieve effective language education in the transition from kindergarten to primary school.
2. It is recommended to strengthen the professional training of kindergarten and primary school teachers in language education for the transition work.
3. It is recommended to enhance the interaction between kindergarten and primary school teachers and to improve the quality of language education in the transition stage.
4. It is recommended to establish relevant management systems to ensure an effective language education transition from kindergarten to primary school.
2019-05-20T13:06:52.741Z
2018-01-01T00:00:00.000
{ "year": 2018, "sha1": "33b33d36514c308e4f0cff1e07b71e37cfcfc9b8", "oa_license": "CCBYNC", "oa_url": "https://download.atlantis-press.com/article/25905123.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "8e4fea3a36f8900a9b8d7d8b7411c655d89db3e8", "s2fieldsofstudy": [ "Education", "Linguistics" ], "extfieldsofstudy": [ "Psychology" ] }
270731681
pes2o/s2orc
v3-fos-license
Molecular Characterization and Antibacterial Resistance Determination of Escherichia coli Isolated from Fresh Raw Mussels and Ready-to-Eat Stuffed Mussels: A Major Public Health Concern

Our study focused exclusively on analyzing Escherichia coli (E. coli) contamination in fresh raw mussels and ready-to-eat (RTE) stuffed mussels obtained from authorized and regulated facilities. However, it is critical to recognize that such contamination represents a significant public health threat in regions where unauthorized harvesting and sales practices are prevalent. This study aimed to comprehensively assess the prevalence, molecular characteristics, and antibacterial resistance profiles of E. coli in fresh raw mussels and RTE stuffed mussels. E. coli counts in fresh raw mussel samples ranged from 1 to 2.89 log CFU/g before cooking, with a significant reduction observed post-cooking. RTE stuffed mussel samples predominantly exhibited negligible E. coli presence (<1 log CFU/g). A phylogenetic analysis revealed a dominance of phylogroup A, with variations in the distribution observed across different sampling months. Antibacterial resistance was prevalent among the E. coli isolates, notably showing resistance to ampicillin, streptomycin, and cefotaxime. Extended-spectrum β-lactamase (ESβL) production was rare, with only one positive isolate detected. A variety of antibacterial resistance genes, including tetB and sul1, were identified among the isolates. Notably, virulence factor genes associated with pathogenicity were absent. In light of these findings, it is imperative to maintain rigorous compliance with quality and safety standards at all stages of the mussel production process, encompassing harvesting, processing, cooking, and consumption. Continuous monitoring, implementation of rigorous hygiene protocols, and responsible antibacterial drug use are crucial measures in mitigating food safety risks and combating antibacterial resistance. Collaboration among stakeholders, including seafood industry players, regulatory agencies, and healthcare professionals, is essential to ensure effective risk mitigation and to safeguard public health in the context of seafood consumption.

Introduction
Mussels are a nutritious shellfish, rich in protein, minerals, vitamins, and omega-3 fatty acids. They can be consumed both cooked and fresh. They are also low in fat (with less saturated fat) and calories, making them a healthy choice for a balanced diet. The consumption of mussels can provide various health benefits, such as improving immune function, reducing inflammation, enhancing brain function, and preventing anemia [1,2]. Mussels are also considered a sustainable and eco-friendly seafood option, as they do not require feed or fertilizers and can filter and improve water quality [1]. Stuffed mussels are a popular traditional food sold by vendors in the countries of the Mediterranean Sea. The mussel variety used for stuffed mussels is Mytilus galloprovincialis, which is commonly known as the "black mussel". Ready-to-eat (RTE) stuffed mussels are typically prepared by first cleaning the cockleshells thoroughly, removing all feather-like structures, and then stuffing them with a mixture of rice, oil, salt, and spices. The cockleshells are then closed firmly and steamed [3].
E. coli is widely recognized as a key indicator of fecal contamination and is used to assess the microbiological quality of seafood, particularly mussels. Its presence in food, especially food that is RTE, can signal potential contamination with pathogenic microorganisms. E. coli can cause severe illnesses in humans, including gastroenteritis, urinary tract infections, and neonatal meningitis. Some strains, like EHEC, produce Shiga toxins that can lead to severe conditions such as hemorrhagic colitis (HC) and hemolytic uremic syndrome (HUS), the latter a leading cause of acute kidney failure in children. Additionally, some other serotypes of E. coli pose higher risks, such as those carrying the stx and eae virulence genes, termed STEC E. coli. These infections can result from consuming contaminated and undercooked foods and may also be transmitted through the fecal-oral route [4]. E. coli can contaminate marine environments and mussels through various sources like sewage, agricultural runoff, and animal manure, and consequently enter the food chain [5,6]. The prevalence of E. coli serotypes in both the marine environment and mussels is influenced by factors such as environmental conditions, seasonal changes, mussel species, and pollution levels in harvesting areas [7-9]. These serotypes may possess different surface antigens affecting their pathogenicity and host range. The impact of these factors on public health is significant, especially for vulnerable populations [10,11]. E. coli species may also harbor antibacterial resistance genes, posing a global concern [12]. Despite the nutritional benefits of mussels, polluted harvesting areas, together with improper preparation, storage, and sale conditions, can lead to E. coli contamination. Contaminated water, exposure to temperature fluctuations, and processing with contaminated equipment can result in cross-contamination and rapid bacterial growth. The consumption of raw or undercooked mussel products may lead to serious illnesses. Consequently, the presence of E. coli in seafood products, particularly mussels, requires a thorough risk assessment to address potential health risks to consumers [13,14]. Numerous risk assessment studies have focused on mussels and mussel products to ensure food safety [15-19]. While mussels and mussel products are widely consumed, there is a significant deficiency in evaluating and characterizing stx- and eae-carrying E. coli isolates found in fresh raw mussels and RTE stuffed mussels, as well as along the RTE stuffed mussel processing line. A molecular characterization of E. coli serotypes and their AMR genes is of vital importance for controlling the spread of these genes and factors and for implementing appropriate prevention and treatment strategies. The objective of this study was to identify the primary source of E. coli contamination in fresh raw mussels and RTE stuffed mussels. Additionally, this study aimed to detect the presence of stx and eae, as well as other virulence factors and AMR genes, in a survey of fresh raw mussels and RTE stuffed mussels collected from four different companies in four different regions of Turkey.
Sample Collection
During the period between June 2022 and May 2023, RTE stuffed mussels (n = 25) and the fresh raw mussels (n = 25) used in the production of these stuffed mussel samples were collected regularly every month (all samples were taken on the same day and were considered a single batch) from three different companies harvesting and processing mussels from the Marmara Sea, an inner sea surrounded by heavily populated cities and industrial areas. All the selected companies applied the Hazard Analysis and Critical Control Points (HACCP) and Good Manufacturing Practices (GMP) systems in their production. The locations of harvesting for these companies were as follows: R1: Balikesir (40°00′46.1″), with R2 and R3 at other sites on the Marmara Sea. In addition, the production line of a fourth company (R4) was included in this study to investigate point by point the possible E. coli contamination points. A total of nine sampling points (SP) were selected as follows: SP1: raw mussel; SP2: swab from knives; SP3: shelling step; SP4: swab from handlers' hands; SP5: stuffing with pre-cooked rice and spices; SP6: cooking with an aromatic blend of rice and spices; SP7: portioning and packaging; SP8: shipment; SP9: ready-to-eat stuffed mussel at the selling point. Sterile swabs were used to sample the food handlers' hands (after routine cleaning procedures) and the knives before they were engaged in food preparation. Swabbing of both the handlers' hands and the knives was performed over a 10 cm² area. All the samples were packed in sterile bags and transferred to the laboratory in a cold chain within three hours. The cooking periods applied at R4 were 65 °C for 17.6 ± 1.8 min for precooking (rice and spices) and 72 °C for 20.6 ± 0.9 min for the main cooking (mussels, rice, and spices together). Detailed information about the regions, sampling points, and processes is provided in Supplementary Table S1.

Isolation and Enumeration of E. coli
A total of 25 samples were collected on a single day and considered a single batch. The inter-shell contents of each batch were mixed, and 10 g of the mixture from each batch was transferred to sterile stomacher bags (Seward Medical, London, UK). Then, 90 mL of sterile Maximum Recovery Diluent (MRD; Oxoid, ThermoFisher, Milano, Italy) was added and the contents were homogenized. Subsequently, 10-fold serial dilutions were prepared from the MRD. E. coli isolation and enumeration were conducted using the pour plate method on Tryptone Bile X-Glucuronide Agar (TBX; Oxoid, Basingstoke, UK). Following inoculation with 1 mL of the appropriate dilutions, the plates were incubated at 37 ± 2 °C for 4 h and then at 44 °C for 20 h. Colonies with a blue-green color were identified and enumerated as log colony-forming units per gram (log CFU/g) [7,20]. Additionally, suspected E. coli colonies were isolated using the Violet Red Bile Agar (VRB; Oxoid, Hampshire, UK) double-layer pour plate method. The inoculated plates were incubated at 37 °C for 24 h. Following incubation, purple colonies (pinkish-red colonies with bile precipitate) were considered to be E. coli [21]. The blue-green colonies isolated from the TBX agar and the suspected colonies from the VRB agar were subcultured in Tryptic Soya Broth (TSB; Oxoid, ThermoFisher, Madrid, Spain) and on Tryptic Soya Agar (TSA; Oxoid, ThermoFisher, Madrid, Spain) at 37 °C for 24 h. Thereafter, all E. coli isolates were confirmed using standard biochemical tests.
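The conversion from plate counts to log CFU/g follows the standard pour-plate arithmetic implied above. The paper does not provide code; the following Python sketch is a minimal illustration of that calculation, assuming a 1 mL plated volume and the 10 g in 90 mL primary dilution described in the text. The colony count in the example and all names are our own.

```python
import math

def log_cfu_per_gram(colonies: int, dilution_exponent: int,
                     plated_volume_ml: float = 1.0) -> float:
    """Convert a pour-plate colony count to log10 CFU/g.

    colonies          : colonies counted on the plate
    dilution_exponent : n for a 10^-n dilution of the sample
                        (the 10 g in 90 mL MRD homogenate is already 10^-1)
    plated_volume_ml  : volume plated, 1 mL in the protocol above
    """
    cfu_per_g = colonies / plated_volume_ml * 10 ** dilution_exponent
    return math.log10(cfu_per_g)

# Hypothetical example: 78 colonies on a plate of the 10^-1 dilution
# (the primary homogenate itself) gives log10(78 * 10) = 2.89 log CFU/g,
# matching the maximum count reported for fresh raw mussels.
print(round(log_cfu_per_gram(78, 1), 2))  # 2.89
```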
Identification of β-Lactamase and Antibacterial Resistance Genes of E. coli Isolates
The investigation of the major beta-lactamase genes, namely blaCTX-M, blaTEM, blaSHV, and blaOXA, within the E. coli isolates was conducted via PCR in accordance with the methodology outlined by Ogutu et al. (2015) [29]. Furthermore, the isolates were analyzed to determine the presence of genes associated with resistance to various antibacterial agents, including tetracycline (tetA and tetB), sulfonamides (sul1, sul2, and sul3), florfenicol/chloramphenicol (floR), and quinolones (qnrA and qnrB) [30,31].

Isolation, Enumeration, and Identification
The E. coli counts prior to the cooking process ranged from 1 to 2.89 log CFU/g. The mean E. coli counts in the fresh raw mussel samples from R1, R2, R3, and R4 were 1.10, 1.96, 1.86, and 1.72 log CFU/g, respectively. The highest E. coli count was detected in the fresh raw mussels from R3 in September, followed by 2.83 log CFU/g in the fresh raw mussels from R4. Seasonally, the period with the least contamination was spring, with growth observed only in the fresh raw mussels from R1 in March; no E. coli was detected in any samples during April and May. The highest counts were obtained from samples collected in the autumn period. In December, a single colony from VRB tested positive for E. coli in RTE stuffed mussels from R3. E. coli was not isolated at any stage after the cooking process (SP6-SP9 in R4). The E. coli counts were 1.85 log CFU/g before the cooking process and <1 log CFU/g in the RTE stuffed mussel samples, corresponding to a 0.90 log reduction achieved by cooking. The RTE stuffed mussel samples contained no enumerable E. coli, and only one isolate from VRB was E. coli positive (60EM, from R3). No E. coli was detected in any sample from the food contact surfaces (SP2 and SP4) of the RTE stuffed mussel production line (Figure 1).

The genetic diversity of the isolates obtained through isolation and PCR identification, which are listed in Supplementary Table S2, was investigated. To this end, a total of 74 isolates were collected: 44 from fresh raw mussels, 15 from SP3 (shelling step), 14 from SP5 (stuffing with pre-cooked rice and spices), and one from an RTE stuffed mussel sample.
From a seasonal perspective, the highest number of E. coli isolates was obtained from samples in October (16 isolates), followed by September (15 isolates). These isolates, used for phenotypic antibacterial resistance and genetic diversity research, are referred to by a "number and EM" code throughout the remainder of this study.
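The 0.90 log reduction reported above is simply the difference of the log-scale counts before and after cooking. As a quick illustration (not from the paper; the function name and the post-cooking estimate are our own), the following Python sketch computes the log reduction and the corresponding fraction of cells destroyed:

```python
def log_reduction(before_log_cfu: float, after_log_cfu: float):
    """Log10 reduction and percent of cells destroyed between two counts
    expressed in log CFU/g."""
    reduction = before_log_cfu - after_log_cfu
    percent_destroyed = (1.0 - 10.0 ** (-reduction)) * 100.0
    return reduction, percent_destroyed

# 1.85 log CFU/g before cooking; post-cooking counts were below the
# 1 log CFU/g detection limit, so taking 0.95 log CFU/g as an estimate
# reproduces the reported 0.90 log reduction (roughly 87% of cells killed).
print(log_reduction(1.85, 0.95))  # (0.90, ~87.4)
```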
Discussion
E. coli, which serves as an indicator of fecal contamination and is consequently an important food safety indicator, can reach prevalences of up to 30% in shellfish [33]. In India, Singh et al. (2020) [34] isolated a total of 150 E. coli isolates from four different types of fresh shellfish. Our results showed that E. coli was enumerated in 35.4% (17/48) of fresh raw samples and none of the stuffed mussels, with a mean count of 2.15 log CFU/g. Raw mussels can harbor many important pathogens such as E. coli through municipal sewage discharge, industrial wastewater loads, rainfall or irrigation water runoff over land, and the release of contaminants into streams, lakes, or coastal waters [17]. In the current study, a comprehensive monthly sampling approach allowed for monitoring and ensuring the mussels' microbiological safety and quality throughout their preparation, handling, and distribution processes. The absence of E. coli in the hand and knife swab samples showed that the food handlers complied with hygiene and sanitation rules. A study conducted in Istanbul Province in Turkey examined the microbiological quality of RTE stuffed mussels according to the Turkish Food Codex; the results revealed that 77% of the samples had coliform bacteria and 22% had E. coli [35]. In another study conducted in Ankara Province in Turkey, 30% of the analyzed stuffed mussel samples were not suitable for consumption due to the presence of E. coli [13]. Unlike other studies, which detected very high levels of E. coli contamination in RTE stuffed mussels, in our study only one (2.1%) E. coli isolate was identified in the RTE stuffed mussel samples. This may be due to the inactivation of the bacteria during heating processes, the prevention of contamination, or other differences in the processing steps. E. coli contamination in RTE stuffed mussels can be significantly reduced under stringent processing conditions. Strict hygiene protocols should be followed by all food handlers involved in the processing, including regular hand washing, wearing gloves, and using sanitized equipment. The processing environment should be controlled, with limited access to prevent external contamination and with regular cleaning and disinfection. All equipment and utensils used in the processing should be sterilized before use. Mussel products should be cooked at temperatures exceeding 70 °C to ensure the inactivation of E. coli and other potential pathogens. Additionally, the final product should be sold under appropriate conditions to prevent post-processing contamination. It is also important to consider that the levels of contamination could vary due to changes in pollution rates in the marine environments where the mussels are harvested. Implementing these comprehensive measures can help ensure the safety of RTE stuffed mussels [17,36,37]. Bazzoni et al. (2019) [38] reported that the presence of E. coli in raw mollusk samples was more pronounced in the fall and winter seasons (270 and 330 MPN/100 g, respectively); they noted that they did not encounter contamination during the summer season. Similarly, Sferlazzo et al. (2018) [39] mentioned that E. coli contamination tended to be lower during the summer season. In a study conducted in Italy, E. coli was detected in all samples of M. galloprovincialis and Ruditapes decussatus [40]. In another study conducted in Italy, which examined 600 raw mussels, E. coli was detected in 3.5% of the samples [41]. Notably, in our study, E. coli was not detected during these months, indicating either successful microbial control measures or seasonal conditions not conducive to E. coli survival.

Phylogenetic groups of E. coli have also been associated with different ecological niches, virulence factors, and antibacterial resistance patterns [42,43]. Therefore, the presence and diversity of E. coli phylogenetic groups in mussels may vary depending on the origin, season, and treatment of the products. In addition, PCR detection of the phylogenetic groups of E. coli from fresh raw mussels and RTE stuffed mussels can provide useful information about the microbial quality and safety of these products. It can also help to identify the possible sources of fecal contamination and to monitor the effectiveness of processing methods. Furthermore, it can contribute to the understanding of the epidemiology and ecology of E. coli in aquatic environments and food chains. E. coli isolates categorized within phylogroup A are primarily commensal [44]. In our study, 6.8% and 1.4% of the identified isolates belonged to groups B2 and D, respectively, which are considered pathogenic. Based on the classification, 26 of the isolates belonged to an unknown phylogroup, which requires different typing methods such as MLST [23]. Phylogroups B1 and A were the most prevalent across multiple months, suggesting that these are the common isolates in the environment studied. Groups B2 and D also appeared but were less frequent. The dominance of group A and the occasional appearance of group D between December and February suggest that some isolates are better adapted to colder conditions.

The emergence of multidrug-resistant (MDR) foodborne pathogens, defined as acquired resistance to at least one antibacterial agent in three or more antibacterial categories, is considered a significant challenge in public health, with MDR E. coli recognized as a prominent issue in ensuring food safety [12,45,46]. In the present study, the antibacterial resistance profiles and resistance genes of the E. coli isolates were evaluated using disk diffusion and PCR methods. Our findings revealed that 66.2% (49/74) of the isolates exhibited resistance to at least one antibacterial, with 32.4% having resistance to two or more classes of antibacterials and thus being classified as MDR.
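The MDR definition used above (acquired resistance to at least one agent in three or more antibacterial categories) is a simple counting rule over category-level resistance calls. A minimal Python sketch, with hypothetical category names of our own choosing, might look as follows:

```python
def is_mdr(resistance_profile: dict, min_categories: int = 3) -> bool:
    """Classify an isolate as MDR if it is resistant to at least one agent
    in `min_categories` or more antibacterial categories.

    resistance_profile maps an antibacterial category to whether the
    isolate resisted at least one agent tested in that category.
    """
    resistant_categories = sum(1 for resistant in resistance_profile.values()
                               if resistant)
    return resistant_categories >= min_categories

# Hypothetical isolate resistant to agents in three categories:
profile = {
    "penicillins": True,       # e.g. ampicillin (AMP)
    "aminoglycosides": True,   # e.g. streptomycin (STR)
    "cephalosporins": True,    # e.g. cefotaxime (CTX)
    "fluoroquinolones": False,
    "tetracyclines": False,
}
print(is_mdr(profile))  # True
```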
The observed resistance included resistance to CTX, which is utilized as a last-resort option for treating severe infections caused by E. coli and other Gram-negative bacteria. The highest resistance rates were observed for AMP (47.3%), STR (28.4%), and CFX (20.3%). This resistance is particularly alarming due to the role of CTX in treating severe bacterial infections and its implications for the selection of ESβL producers. These antibacterials are widely used in human and veterinary medicine, and their overuse may select for resistant bacteria that can be transmitted through the food chain. The emergence and spread of resistance to these antibacterials may compromise the effectiveness of the available therapeutic options and increase the risk of treatment failure and mortality [47]. The resistance patterns exhibited seasonal fluctuations, with a notable increase in AMP and CTX resistance during the cooler months (October and November). This could be attributed to seasonal changes in antibacterial drug usage patterns in agriculture and human medicine, which often influence environmental reservoirs of resistance. Our data also revealed significant resistance to commonly used antibacterials among the E. coli isolates, with a marked seasonality in resistance patterns. The high susceptibility to fluoroquinolones offers some therapeutic reprieve, although the emergence of ESβL producers and resistance to critical beta-lactam antibacterials paints a complex picture of the resistance landscape. These findings highlight the importance of tailored antibacterial stewardship and proactive public health strategies to manage and mitigate antibacterial resistance effectively. E. coli may contain many antibacterial resistance genes that can adversely affect human health. Some of these genes include blaTEM, which encodes a beta-lactamase enzyme conferring resistance to penicillin and some cephalosporins; tetA, which encodes a tetracycline efflux pump conferring resistance to tetracycline and doxycycline; and sul1, which encodes a sulfonamide-resistant dihydropteroate synthase enzyme conferring resistance to sulfonamides [10,11,48]. The results of the present study revealed that 11 different resistance genes were present in the E. coli isolates. The most common resistance genes were blaTEM (10/74; 13.5%), blaOXA (5/74; 6.8%), and blaCTX-M (1/74; 1.4%), which confer resistance to beta-lactams, and tetA (9/74; 12.2%) and tetB (10/74; 13.5%), which confer resistance to tetracyclines. These genes are often located on mobile genetic elements, such as plasmids and transposons, that can facilitate their horizontal transfer among bacteria [49,50]. We also detected ESβL production in only one isolate, carrying the blaCTX-M gene, as well as the quinolone resistance genes qnrA and qnrB in one isolate each. These genes confer resistance to third-generation cephalosporins and fluoroquinolones, respectively, and their presence in E. coli isolates from food sources is of great concern for public health. The ESβL-associated genes blaTEM and blaCTX-M were detected in the isolates, underscoring the presence of significant resistance mechanisms that complicate treatment options. In this study, a comprehensive molecular analysis was conducted to assess the presence of virulence and resistance genes in the E. coli isolates. Our findings indicate a low prevalence of virulence factors among the isolates, with significant implications for food safety and public health. Strains designated as STEC are characterized by their ability to produce Shiga toxins, which are encoded by the stx1 and stx2 genes.
The key distinguishing factor between pathogenic and non-pathogenic E. coli strains lies in the presence of virulence-associated genes [51]. In this study, these virulence-associated genes were not detected in any of the isolates. The non-detection of critical virulence genes such as the Shiga-like toxins (stx1 and stx2), the attaching and effacing factor (eae), and the bundle-forming pilus structural gene (bfpA) in the majority of the isolates suggests a reduced potential for causing severe enteropathogenic or enterohemorrhagic infections in consumers. This absence is particularly notable, as these genes are commonly associated with severe gastrointestinal diseases, including hemorrhagic colitis (HC) and hemolytic uremic syndrome (HUS). Balière et al. (2015) [52] found the stx gene in 35% of the various shellfish samples they collected, and specifically in mussels the presence of the stx gene was determined to be 36.5%. Martin et al. (2019) [53] identified the virulence genes stx1, stx2, and eae at a lower frequency (7%) in Shiga toxin-producing E. coli isolates obtained from Norwegian bivalves in marine environments. This holds paramount significance for consumers of shellfish, as insufficient heat treatment during the preparation of edible shellfish species can result in foodborne infections [51]. The identification of the O111 antigen gene (wbdI) in only one isolate (23EM) and the flagellar antigen H7 gene (fliCH7) in three isolates (7EM, 10EM, and 12EM) highlights the presence of specific pathogenic isolates that warrant closer attention. While the prevalence of these antigens is low, their presence indicates a potential risk of pathogenicity and necessitates continuous surveillance. The O111 and H7 antigens are particularly concerning due to their association with outbreaks and severe illness in humans [4]. The detection of these antigens, even in a small number of isolates, underscores the importance of rigorous food safety protocols in the processing of mussels. RTE mussel products pose a higher risk of contamination, emphasizing the crucial need for rigorous microbial testing and control measures in food production facilities. Our study revealed the presence of E. coli isolates with diverse serotypes, phylogenetic groups, virulence factors, and AMR profiles in fresh raw mussels from the Marmara Sea in Turkey. Some of these isolates may have the potential to cause human infections and pose a challenge for the treatment of such infections. In addition, future studies could focus on comparing various contamination prevention methods to determine the most effective strategies for reducing bacterial counts in RTE mussels. Such research would provide valuable insights into optimizing processing techniques to enhance food safety.

Conclusions
In conclusion, while the prevalence of E. coli in RTE products appears low, the unauthorized harvesting and sale of mussels in polluted coastal areas of Turkey present significant food safety risks. To address these concerns, adherence to quality and safety standards during cultivation, thorough cleaning, proper cooking, and timely consumption or storage of mussels and stuffed mussels are crucial. Additionally, strict hygiene measures in production, continuous pathogen monitoring, and responsible use of antibacterials are essential to protect public health and prevent antibacterial resistance. Collaborative efforts among stakeholders in the seafood industry, regulatory agencies, and healthcare professionals are essential to effectively mitigate these risks and ensure the safety of seafood consumers. This study demonstrated that, in addition to phenotypic and PCR-based classification, genome-based classification and serotype determination should be included in future studies.

Figure 1. Amount of E. coli growth on TBX agar (log CFU/g). FRM, fresh raw mussel; RTE-SM, ready-to-eat stuffed mussel; SP, sampling point; R, region. A value of 0 on the axis indicates that E. coli was not detected (<1 log CFU/g) in this region in either the SP samples or the FRM/RTE-SM samples.

Table 1. Primers used for identification and phylogenetic classification of E. coli isolates.

Determination of Phylogenetic Groups of E. coli Isolates
E. coli isolates were classified into phylogenetic groups (A, B1, B2, and D) in accordance with Clermont's method (2000 and 2013).
2024-06-26T15:26:03.352Z
2024-06-24T00:00:00.000
{ "year": 2024, "sha1": "855a65af5286add991a5c5a7520e764d94f6f31f", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-0817/13/7/532/pdf?version=1719203769", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2183d5ed47f8ab800ffe24d895617f0170f23c93", "s2fieldsofstudy": [ "Environmental Science", "Biology", "Medicine" ], "extfieldsofstudy": [] }
215814305
pes2o/s2orc
v3-fos-license
Hypocycloid motion in the Melvin magnetic universe

The trajectory of a charged test particle in the Melvin magnetic universe is shown to take the form of hypocycloids in two different regimes, the first of which is the class of perturbed circular orbits, and the second of which is the weak-field approximation. In the latter case we find a simple relation between the charge of the particle and the number of cusps. These two regimes lie within a continuously connected family of deformed hypocycloid-like orbits parametrised by the magnetic flux strength of the Melvin spacetime.

Introduction
The Melvin universe describes a bundle of parallel magnetic field lines held together under its own gravity in equilibrium [1,2]. The possibility of such a configuration was initially considered by Wheeler [3], and a related solution was obtained by Bonnor [4], though in today's parlance it is typically referred to as the Melvin spacetime [5]. By the duality of electromagnetic fields, a similar solution consisting of parallel electric fields can be obtained. In this paper, we shall mainly be interested in the magnetic version of this solution. The Melvin spacetime has been a solution of interest in various contexts of theoretical high-energy physics. For instance, the Melvin spacetime provides a background of a strong magnetic field to induce the quantum pair creation of black holes [6,7]. Havrdová and Krtouš showed that the Melvin universe can be constructed by taking two charged, accelerating black holes and pushing them infinitely far apart [8]. More recently, the generalisation of the solution to include a cosmological constant has been considered in [9-11]. Aside from Melvin's initial work [12] and that of Thorne [13], the motion of test particles in a magnetic universe was typically studied in the more general setting of the Ernst spacetime [14], which describes a black hole immersed in the Melvin universe. The motion of particles in this spacetime was studied in [15-25], and also for the magnetised naked singularity in [26]. The study of charged particles in the Ernst spacetime has also informed work in other related areas, such as Refs. [23,27,28]. Of particular relevance to this paper is the interaction between the Lorentz force and the gravitational force acting on an electrically charged test mass. As is well known from textbooks on electromagnetism, a particle moving in mutually perpendicular electric and magnetic fields follows a trajectory in the shape of a cycloid [29]. In this paper, we focus on a similar situation, except that the electric field is replaced by a gravitational field. Cycloid-like, or more generally trochoid-like, motion was obtained by Frolov et al. [30,31] in the study of charged particles in a weakly magnetised Schwarzschild spacetime [32]. A similar motion was considered in the Melvin spacetime by the present author in [21]. In this paper, we will extend this idea further to show that the trajectories are more generally deformed hypocycloids, which are curves formed by the locus of a point attached to the rim of a circle rolling inside another, larger circle. The main result to be presented in this paper is that hypocycloid trajectories occur in two distinct regimes: first, in the class of perturbed circular orbits [21], and second, in the class of solutions in the weak-field regime.
Particularly, in the latter regime, we find that the particle's charge $e$ is related to the number of cusps $n$ of the hypocycloid by the relation $e = \frac{n-2}{\sqrt{2(n-1)}}$. The trajectories in the two regimes are continuously connected by a family of deformed trajectories that still retain the features of the hypocycloid, namely its configuration of cusps. The hypocycloid solutions are obtained by perturbing constant-radius solutions in the first regime and by expanding the equations for weak magnetic fields in the second. The family of deformed, intermediate solutions is obtained non-perturbatively via numerical solutions. The rest of this paper is organised as follows. In Sec. 2, we review the essential features of the Melvin spacetime and derive the equations of motion for an electrically charged test mass. Subsequently, in Sec. 3, we consider the perturbation of circular orbits; this was already briefly studied by the author in a short subsection of [21], and here we review the earlier results and provide some additional details. In Sec. 4, we show that trajectories in the weak-field regime correspond precisely to hypocycloids, and we study numerical solutions beyond the weak-field regime. A brief discussion and closing remarks are given in Sec. 5. In Appendix A, we review the basic properties of hypocycloids.

Equations of motion
The Melvin magnetic universe is described by the metric
$$ds^2 = \Lambda^2\left(-dt^2 + dr^2 + dz^2\right) + \Lambda^{-2} r^2\, d\phi^2, \qquad \Lambda = 1 + \tfrac{1}{4}B^2 r^2,$$
where the magnetic flux strength is parametrised by $B$. The gauge potential giving rise to the magnetic field is $A = \frac{Br^2}{2\Lambda}\, d\phi$. The spacetime is invariant under the transformation $(B, \phi) \to (-B, -\phi)$; therefore we can consider $B \ge 0$ without loss of generality. We shall describe the motion of a test particle carrying an electric charge $e$ by a parametrised curve $x^\mu(\tau)$, where $\tau$ is an appropriately chosen affine parameter. In this paper, we will mainly be considering time-like trajectories, for which $\tau$ can be taken to be the particle's proper time. The trajectory is governed by the Lagrangian $L = \frac{1}{2}g_{\mu\nu}\dot{x}^\mu\dot{x}^\nu + eA_\mu\dot{x}^\mu$, where over-dots denote derivatives with respect to $\tau$. In the Melvin spacetime, the Lagrangian is explicitly
$$L = \tfrac{1}{2}\left[-\Lambda^2\dot{t}^2 + \Lambda^2\dot{r}^2 + \Lambda^2\dot{z}^2 + \Lambda^{-2}r^2\dot{\phi}^2\right] + \frac{eBr^2}{2\Lambda}\dot{\phi}.$$
Since $\partial_t$, $\partial_z$, and $\partial_\phi$ are Killing vectors of the spacetime, we have the first integrals (Eq. (5))
$$\dot{t} = \frac{E}{\Lambda^2}, \qquad \dot{z} = \frac{P}{\Lambda^2}, \qquad \dot{\phi} = \frac{\Lambda^2}{r^2}\left(L - eA_\phi\right),$$
where $E$, $P$, and $L$ are constants of motion which we shall refer to as the particle's energy, linear momentum in the $z$-direction, and angular momentum, respectively. To obtain an equation of motion for $r$, we use the invariance of the inner product of the four-velocity, $g_{\mu\nu}\dot{x}^\mu\dot{x}^\nu = \epsilon$. For time-like trajectories, one can appropriately rescale the affine parameter such that $\epsilon = -1$. Inserting the components of the metric, this gives (Eq. (6))
$$\Lambda^4\dot{r}^2 = E^2 - V_{\rm eff}^2,$$
where $V_{\rm eff}^2$ is the effective potential (Eq. (7))
$$V_{\rm eff}^2 = \Lambda^2 + P^2 + \frac{\Lambda^4}{r^2}\left(L - eA_\phi\right)^2.$$
Another equation of motion for $r$ can be obtained by applying the Euler-Lagrange equation $\frac{d}{d\tau}\frac{\partial L}{\partial\dot{r}} = \frac{\partial L}{\partial r}$, which leads to a second-order differential equation, Eq. (8), in which primes denote derivatives with respect to $r$. Another useful equation can be obtained by taking $\frac{dr}{d\phi} = \dot{r}/\dot{\phi}$, which gives Eq. (9) for $dr/d\phi$. To obtain the trajectory of the particle, one can solve either Eq. (8) or (6) to obtain $r$; along with the integrations of Eq. (5), this completely determines the particle motion. We note that the metric is invariant under Lorentz boosts along the $z$-direction. Therefore we can always choose a coordinate frame in which the particle is located at $z = \text{constant}$. This is equivalent to fixing $P = 0$ without loss of generality. For an appropriately chosen range of $E$ and $L$, the allowed range of $r$ can be specified by the condition $\dot{r}^2 \ge 0$, or, equivalently, $E^2 - V_{\rm eff}^2 \ge 0$.
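Since the constructions below hinge on this allowed radial range, it is useful to note that it can be found numerically. The following Python sketch is our own illustration, not the author's code: it assumes the forms of $\Lambda$, $A_\phi$, and $V_{\rm eff}^2$ reconstructed above, with $P = 0$, and locates the turning points as roots of $E^2 - V_{\rm eff}^2 = 0$.

```python
import numpy as np
from scipy.optimize import brentq

def Lam(r, B):                        # Lambda = 1 + B^2 r^2 / 4
    return 1.0 + 0.25 * B**2 * r**2

def A_phi(r, B):                      # gauge potential A_phi = B r^2 / (2 Lambda)
    return B * r**2 / (2.0 * Lam(r, B))

def Veff2(r, B, e, L):                # effective potential squared, P = 0
    return Lam(r, B)**2 + Lam(r, B)**4 * (L - e * A_phi(r, B))**2 / r**2

def turning_points(B, e, L, E, r_lo=1e-3, r_hi=1e3, samples=20000):
    """Roots of E^2 - Veff^2 = 0 bracketing the allowed radial range."""
    r = np.linspace(r_lo, r_hi, samples)
    f = E**2 - Veff2(r, B, e, L)
    return [brentq(lambda x: E**2 - Veff2(x, B, e, L), r[i], r[i + 1])
            for i in range(samples - 1) if f[i] * f[i + 1] < 0]

# Representative example used below: B = 0.04, e = 2, L = 10.
B, e, L = 0.04, 2.0, 10.0
E = 1.05 * np.sqrt(Veff2(16.667, B, e, L))  # an energy above the cusp value E_*
print(turning_points(B, e, L, E))           # [r_minus, r_plus]
```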
We denote this range by $r_- \le r \le r_+$ (Eq. (10)), where $r_\pm$ are the two positive real roots of the equation $E^2 - V_{\rm eff}^2 = 0$. For given values of $B$, $e$, and $L$, the minimum of $V_{\rm eff}^2$ gives the circular orbit $r = r_0$, which is the root of $\left(V_{\rm eff}^2\right)' = 0$ (Eq. (11)). An important quantity in the context of this paper is the value of $r$ where $\dot{\phi}$ vanishes. Denoting this value as $r_*$, we have, using Eq. (5), $L = \frac{eBr_*^2}{2\Lambda_*}$ (Eq. (12)), where we have denoted $\Lambda_* = 1 + \frac{1}{4}B^2 r_*^2$. Given $r_*$, one can determine $L$ from the above, or vice versa. Substituting Eq. (12) into (11), we find Eq. (13). We note that (12) requires $e$ and $L$ to carry the same sign; hence Eq. (13) shows that $r = r_*$ must lie within the range where $V_{\rm eff}^2$ has a positive slope, which is $r_* \ge r_0$. Another quantity of interest is the value of $V_{\rm eff}^2$ at $r = r_*$, which we shall denote as $E_*^2 = V_{\rm eff}^2\big|_{r=r_*}$. We now briefly explain the significance of the quantity $r_*$, using a representative example with $B = 0.04$, $e = 2$, and $L = 10$, shown in Fig. 1. For $L = 10$, we use Eq. (12) to obtain $r_* \simeq 16.667$. Now, for different choices of $E$, the resulting range (10) may or may not contain $r_*$. There are three possible cases. In the first case, we have $r_* = r_+$. This occurs when the particle carries an energy $E = E_*$. In this case, $\dot{\phi}$ vanishes at the moment the particle reaches the maximum radius, where $\dot{r} = 0$; the orbit forms a sharp cusp at $r = r_+$, like the one shown in Fig. 1b. In the case $r_- < r_* < r_+$, the derivative $\dot{\phi}$ changes sign upon crossing $r = r_*$, and changes sign again on the return crossing. This results in the orbit curling up into a coil-like structure, shown in Fig. 1c. Finally, for $r_* > r_+$, the point $r_*$ is not accessible to the particle; $\dot{\phi}$ therefore does not vanish but oscillates between finite, non-zero values. The resulting orbits have a sinusoidal appearance, such as in Fig. 1d.

Perturbations of circular orbits
The equations of motion can be solved by $r = \text{constant} = r_0$, corresponding to circular orbits. In order to satisfy (8) and (6), the energy and angular momentum are required to take the values of Eq. (14). Equivalently, Eq. (14) can be obtained by solving (11) for $L$, then substituting the result along with $\dot{r} = 0$ into (6) to obtain $E^2$. In the following we shall take the lower sign in (14), as this is the case that will be related to the hypocycloid motion of interest in this paper. Fig. 1a shows the effective potential as a function of $r$, while Figs. 1b, 1c, and 1d show orbits with energies $E = E_*$, $E > E_*$, and $E < E_*$, plotted in Cartesian-like coordinates $X = r\cos\phi$, $Y = r\sin\phi$. The two black dotted circles are $r = r_\pm$, the boundaries of the range of allowed $r$ where $\dot{r}^2 \ge 0$; the blue dashed circles are $r = r_*$. Next, we perturb about the circular orbits by writing $r$ in the form $r(\tau) = r_0 + \varepsilon r_1(\tau)$ (Eq. (15)). Further expressing $E$ and $L$ in terms of $e$, $B$, and $r_0$ via (13) and (14), and expanding Eq. (8) in $\varepsilon$, we find that the first-order terms give a harmonic oscillator equation (Eq. (16)), with the frequency given in Eq. (17). Subsequently, we expand (5) to obtain $\dot{\phi}$ at this order (Eq. (18)). In particular, $\beta_0$ is the angular frequency of revolution of the unperturbed circular motion (the cyclotron frequency). With this, we find the solution to Eqs. (15) and (18), Eq. (20), in which the shape parameter $\zeta$ of Eq. (21) appears. Neglecting the terms of second order in $\varepsilon$ and beyond, we have the equation of a family of trochoids parametrised by $\zeta$. Recalling the standard description of trochoids, the case $\zeta > 1$ describes the prolate cycloid, $\zeta < 1$ describes the curtate cycloid, and $\zeta = 1$ corresponds to the common cycloid.
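The three trochoid types can be visualised with the standard textbook parametrisation of a trochoid, namely a point at distance $\zeta b$ from the centre of a circle of radius $b$ rolling along a line. This is our own illustration of the classification only, not the paper's Eq. (20):

```python
import numpy as np
import matplotlib.pyplot as plt

def trochoid(zeta, b=1.0, n_points=2000):
    """Standard trochoid: a point at distance zeta*b from the centre of a
    circle of radius b rolling along a line.  zeta = 1: common cycloid,
    zeta > 1: prolate (self-intersecting loops), zeta < 1: curtate."""
    t = np.linspace(0.0, 6.0 * np.pi, n_points)
    x = b * (t - zeta * np.sin(t))
    y = b * (1.0 - zeta * np.cos(t))
    return x, y

fig, axes = plt.subplots(3, 1, figsize=(6, 6), sharex=True)
for ax, zeta, label in zip(axes, (1.0, 1.5, 0.5),
                           ("common", "prolate", "curtate")):
    ax.plot(*trochoid(zeta))
    ax.set_title(f"{label} cycloid, zeta = {zeta}")
plt.tight_layout()
plt.show()
```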
Recall that the standard cycloid is formed by the locus of a point on a circle rolling on a flat plane. In the present case, this 'plane' is not flat, but rather a large circle of radius $\sim r_0$, and it only approximates a flat plane over intervals of motion of order $\varepsilon \ll r_0$. Fig. 2 shows zoomed-in sections of perturbed circular orbits about $r_0 = 6$, for a spacetime of magnetic flux parameter $B = 0.1$ and various values of $e$. Here, we take $\lambda = 0.01$; the angular momentum of these orbits is exactly equal to that of the unperturbed case calculated with (14), whereas the energies of the orbits are obtained by solving (6) for $E$. As in Fig. 1, the black dotted arcs denote the boundaries of the allowed range $r_- \le r \le r_+$, and the blue dashed arc indicates the point $r = r_*$ where $\dot{\phi}$ vanishes. We see that, depending on the charge of the particle, the perturbed orbit can be a common cycloid ($e = 18.068$, Fig. 2a), a prolate cycloid ($e > 18.068$, Fig. 2b), or a curtate cycloid ($e < 18.068$, Fig. 2c). As $\phi$ evolves across a period of $2\pi$, the number of $r$-oscillations is approximately given by Eq. (22). We can calculate $n$ for the examples shown in Fig. 2: for the parameters giving the common cycloid in Fig. 2a we have $n \simeq 506.1$; for the prolate cycloid of Fig. 2b, $n \simeq 749.4$; and for the curtate cycloid in Fig. 2c, $n \simeq 349.4$. In the regime of perturbed circular orbits, the quantity $n$ defined in (22) is the number of cusps formed as $\phi$ goes through one period of $2\pi$. Of course, the locus of a point on a circle rolling inside a larger circle is also a well-known curve, called the hypotrochoid. For the rest of the paper we shall focus on the case of the common hypocycloid, which is the analogue of the common cycloid; this curve is likewise characterised by the occurrence of sharp cusps. In the next section, we will show how the hypocycloid can be extracted from the equations of motion beyond the regime of perturbed circular orbits.

Hypocycloid-like trajectories
Like their analogues among the cycloids, the hypocycloids are characterised by trajectories having sharp cusps at maximum radius. In terms of the equations of motion, this corresponds to $\dot{r}$ and $\dot{\phi}$ being zero simultaneously; in other words, $r_+ = r_*$ (Eq. (23)). In this case, the required value of $E$ for the orbit to be a common hypocycloid is obtained by substituting $r = r_+ = r_*$ into Eq. (6). At this position the radial velocity is zero, so we put $\dot{r} = 0$ and solve for $E$ to obtain $E = E_*$ (Eq. (24)). With the energy and angular momentum fixed by $r_*$ in this way, the equation of motion for $dr/d\phi$ becomes Eq. (25). When the magnetic field is weak, we will now show that the trajectory can be approximated by hypocycloids. To this end, we take $B$ to be small while keeping $e$ sufficiently large, so that the gravitational effect of the magnetic field is reduced whilst the Lorentz interaction on the charged particle remains significant. Therefore we introduce the parametrisation $B = g\lambda$, $e = q/\lambda$ (Eq. (26)), for some constants $g$ and $q$, and expand in small $\lambda$. Then Eq. (6) becomes Eq. (27), the equation of motion for $\phi$ gives Eq. (28), and Eq. (25) for $dr/d\phi$ becomes Eq. (29). Therefore, if we ignore the higher-order terms, Eqs. (27), (28), and (29) reduce to the equations of a hypocycloid. In the notation of Appendix A, we recall that the hypocycloid is a curve traced out by a point sitting on a circle of radius $b$ rolling inside a larger circle of radius $a = bn$. If $n$ is an integer with $n \ge 2$, we get a periodic hypocycloid with $n$ cusps. In terms of $r_\pm$, we have $r_- = r_+ - 2b$ and $r_+ = bn$. Furthermore, as $e = q/\lambda$, Eq. (30) leads to $e = \frac{n-2}{\sqrt{2(n-1)}}$ (Eq. (32)).
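As a quick check of this relation (our own illustration, not part of the paper), one can tabulate the required charge for the first few cusp numbers; the values for $n = 2, 3, 4$ reappear in the examples below.

```python
import math

def charge_for_cusps(n: int) -> float:
    """Required particle charge for an n-cusped hypocycloid, Eq. (32)."""
    return (n - 2) / math.sqrt(2 * (n - 1))

for n in range(2, 6):
    print(n, charge_for_cusps(n))
# n = 2 -> 0.0       (straight line segment, neutral particle)
# n = 3 -> 0.5       (deltoid)
# n = 4 -> 0.8164... (= sqrt(6)/3, astroid)
# n = 5 -> 1.0606... (= 3/(2*sqrt(2)))
```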
A straight line segment is technically a 'hypocycloid' with $n = 2$. By Eq. (32), this will be the trajectory of a neutral particle with zero angular momentum undergoing radial oscillations about the axis of symmetry in the Melvin universe, as in Fig. 3a. For $n = 3$, Eq. (32) tells us that a particle with charge $e = \frac{1}{2}$ traces the shape of a hypocycloid with three cusps, called a deltoid (see Fig. 3b). For $n = 4$, we have a particle with charge $e = \frac{\sqrt{6}}{3}$ tracing out an astroid, which is a hypocycloid with four cusps (see Fig. 3c). This pattern continues for higher $n$. To summarise, one can obtain hypocycloid trajectories as follows. Given a choice of $r_* = r_+$ and $n$, the requisite energy and angular momentum are calculated from Eq. (24) and Eq. (12). The charge of the particle is fixed by Eq. (32). One also has to choose the magnetic field strength $B$ so that terms of order $O(e^2 B^2)$ are sufficiently small. In this way, the higher-order terms of Eqs. (27), (28), and (29) can be neglected. This ensures that the equations of motion hold, up to reasonable precision, as hypocycloid equations. We can verify the above arguments by solving the full non-perturbative equations of motion numerically. In other words, for a choice of $r_*$, $n$, and a small $B$, we integrate Eqs. (8) and (5) using a fourth-order Runge-Kutta method. In Fig. 3, we obtain the trajectories for $B = 0.001$ and $r_* = r_+ = 6$. For these values, the deviation of the trajectory from a true exact hypocycloid is of $O(e^2 B^2) \sim O(10^{-5})$. With this relatively small error, the visual appearance of the orbits in Fig. 3 indeed resembles the standard hypocycloids. Next we shall explore the shape of the orbits as we increase $B$ beyond the weak-field regime. As $B$ is increased, the higher-order corrections in Eqs. (27), (28), and (29) become important, and the equations will no longer coincide with the hypocycloid equations. Nevertheless, we are still able to solve the non-perturbative equations numerically and explore their behaviour. As demonstrated in Fig. 4, as $B$ increases, the hypocycloids are continuously deformed. The innermost curve is the one that most closely approximates a hypocycloid, with $B = 0.001$ and $e$ given by Eq. (32). The subsequent curves are obtained by increasing $B$ and tuning $e$ manually until we obtain a periodic orbit with the desired number of cusps. We see that as $B$ increases, the curve segments joining two cusps are deformed from a concave shape into a convex one. Furthermore, the range of allowed radii $r_- \le r \le r_+$ becomes narrower as $B$ increases, until, for the outermost orbit with the largest $B$, the orbit begins to resemble a circular orbit. In the case of Fig. 4, the outermost orbit depicted is for $B = 0.6$.
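The non-perturbative integrations described in this section can be reproduced with any standard ODE solver. The following Python sketch is our own minimal illustration, not the author's code: it integrates the radial equation in first-order form together with $\dot{\phi}$, using the reconstructed effective potential of Sec. 2 with $P = 0$; SciPy's adaptive Runge-Kutta integrator stands in for the fourth-order scheme mentioned above, and all names are our own.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reconstructed ingredients from Sec. 2 (assumed forms, P = 0).
def Lam(r, B):   return 1.0 + 0.25 * B**2 * r**2
def A_phi(r, B): return B * r**2 / (2.0 * Lam(r, B))
def V2(r, B, e, L):
    return Lam(r, B)**2 + Lam(r, B)**4 * (L - e * A_phi(r, B))**2 / r**2

def rhs(tau, y, B, e, L, h=1e-6):
    """First-order form of the radial equation plus phi-dot; y = (r, rdot, phi).
    rddot follows from differentiating Lam^4 rdot^2 = E^2 - Veff^2 in tau."""
    r, rdot, phi = y
    dV2 = (V2(r + h, B, e, L) - V2(r - h, B, e, L)) / (2.0 * h)   # (V_eff^2)'
    dLam = 0.5 * B**2 * r                                          # Lambda'
    rddot = -(dV2 + 4.0 * Lam(r, B)**3 * dLam * rdot**2) / (2.0 * Lam(r, B)**4)
    phidot = Lam(r, B)**2 * (L - e * A_phi(r, B)) / r**2
    return [rdot, rddot, phidot]

# Weak-field deltoid (n = 3): B = 0.001, r_* = r_+ = 6, e from Eq. (32),
# L from Eq. (12); we launch just inside the cusp, where rdot = 0.
n, B, r_star = 3, 0.001, 6.0
e = (n - 2) / np.sqrt(2 * (n - 1))
L = e * B * r_star**2 / (2.0 * Lam(r_star, B))
sol = solve_ivp(rhs, (0.0, 5.0e4), [r_star - 1e-6, 0.0, 0.0],
                args=(B, e, L), max_step=5.0, rtol=1e-10, atol=1e-12)
X = sol.y[0] * np.cos(sol.y[2])   # Cartesian-like coordinates of the orbit
Y = sol.y[0] * np.sin(sol.y[2])
```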
In fact, we can check that these orbits of large $B$ correspond to the perturbed circular orbits of Sec. 3. We do so by checking that the numerical (non-perturbative) solution matches the perturbed solution of Sec. 3. For instance, let us take the $n = 3$ case shown in Fig. 4a. The outermost $n = 3$ orbit at $B = 0.6$ is formed by a particle carrying charge $e \simeq 10.6204$. The maximum and minimum radii of the motion are $r_+ = 6.0 = r_*$ and $r_- \simeq 5.7674$, respectively. We shall treat this as a reasonably narrow range, so that the orbit is regarded as a perturbed circular orbit about $r_0 \approx \frac{r_+ + r_-}{2} \simeq 5.8837$, with perturbation parameter $\varepsilon \approx \frac{r_+ - r_-}{2} \simeq 0.1163$. Inserting these values into Eqs. (22) and (21), we obtain $n \simeq 3.0057$ and $\zeta \simeq 1.0214$ (Eq. (33)), which is consistent with a common cycloid ($\zeta = 1$) performing 3 oscillations in $r$ within one angular period, thus forming $n = 3$ cusps. Performing a further check for the $n = 4$ orbit of Fig. 4b, we have $B = 0.6$ and $e \simeq 12.2849$. The maximum and minimum radii are $r_+ = 6.0 = r_*$ and $r_- = 5.8275$, for which we take $r_0 \simeq 5.9138$ and $\varepsilon \simeq 0.0862$. Inserting these into Eqs. (22) and (21), we find values of $n$ and $\zeta$ again close to 4 and 1, respectively, consistent with a common cycloid performing 4 oscillations in $r$ within one angular period, resulting in $n = 4$ cusps. Similar checks can be performed for higher $n$. Hence we conclude that the periodic orbits with sharp cusps ($r_+ = r_*$) form a family of deformed hypocycloidal curves. At one end of this family are hypocycloids in the weak-field regime; at the other are common cycloids arising as perturbations of circular orbits.

Conclusion
Hypocycloid trajectories are well known as solutions to various brachistochrone problems in mechanics. For instance, the path of least time in the interior of a uniform gravitating sphere [33] is a hypocycloid. In this paper, we have shown that hypocycloidal trajectories occur in two different regimes of motion for charged particles in the Melvin spacetime. By considering numerical solutions, we see that a generic motion in the non-perturbative case consists of a family of deformed hypocycloids. While the study of charged particles in strong gravitational and magnetic fields is typically a candidate for astrophysical interest, the highly ordered motion with finely tuned parameters considered in this paper is perhaps more of a mathematical interest. To this end, it may be interesting to study the mathematical connections between hypocycloids and the equations of motion of the Melvin spacetime. As particle motion is typically studied to reveal the underlying geometry of a spacetime, the fact that the motion here takes the form of hypocycloids may yet hint at something about the geometry of the Melvin universe. To speculate even further, it was recently noted that hypocycloids are related to the positions of the eigenvalues of SU(n) in the complex plane [34]. It is intriguing to wonder whether this carries any implications in the context of the Melvin spacetime.

A Parametric equations of the hypocycloid
Consider a disk of radius $b$ rolling without slipping inside a larger circle of radius $a = bn$, where $n > 1$. Let $P$ be a point on the edge of the disk at distance $b$ from its centre. The curve traced out by $P$ as the disk rolls inside the larger circle is a hypocycloid. In Cartesian coordinates, its parametric equations are
$$x = b\left[(n-1)\cos\tau + \cos(n-1)\tau\right], \qquad y = b\left[(n-1)\sin\tau - \sin(n-1)\tau\right], \qquad \tau \in \mathbb{R}. \quad (35)$$
Let $r_-$ and $r_+$ be its minimum and maximum distances from the origin. In terms of these, $b = \frac{r_+ - r_-}{2}$ and $n = \frac{2r_+}{r_+ - r_-}$ (Eq. (36)). We convert to polar coordinates with $x = r\cos\phi$ and $y = r\sin\phi$. In terms of $r$ and $\phi$, one can show that
$$r^2 = b^2\left[(n-1)^2 + 1 + 2(n-1)\cos n\tau\right], \quad (37)$$
with the angle $\phi$ determined by $\tan\phi = y/x$.
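As a numerical check of the appendix formulas (our own illustration, not part of the paper), the parametric equations (35) and the extremal radii can be evaluated directly:

```python
import numpy as np

def hypocycloid(n, b=1.0, num=5000):
    """Hypocycloid traced by a point on a circle of radius b rolling
    inside a circle of radius a = b*n (Eq. (35))."""
    tau = np.linspace(0.0, 2.0 * np.pi, num)
    x = b * ((n - 1) * np.cos(tau) + np.cos((n - 1) * tau))
    y = b * ((n - 1) * np.sin(tau) - np.sin((n - 1) * tau))
    return x, y

n, b = 3, 2.0                    # deltoid with r_+ = b*n = 6, as in Fig. 3b
x, y = hypocycloid(n, b)
r = np.hypot(x, y)
print(r.max(), b * n)            # maximum radius r_+ = 6.0
print(r.min(), b * (n - 2))      # minimum radius r_- = r_+ - 2b = 2.0

# Charge needed for this trajectory in the weak-field limit, Eq. (32):
print((n - 2) / np.sqrt(2 * (n - 1)))   # 0.5 for the deltoid
```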
Beginning with the End-User in Mind: Application of Kern's Six-Step Approach to Design and Create a Literary Journal for Healthcare Students

Expression through the arts has been shown to improve resilience and enhance patient care amongst healthcare trainees. This is all the more relevant considering that many healthcare students feel their institutions lack an outlet for artistic and creative expression. The creation of a student-run literary review is one possible strategy to allow learners to engage in artistic expression and mitigate rising burnout rates. Utilizing the Kern six-step model for curriculum development, we present a novel, replicable, and stepwise approach to designing a forum for artistic engagement in the form of a literary review.

Introduction

Health professions students suffer from high levels of burnout, disillusionment, deterioration of communication skills, and decreased empathy [Cohen 2002; Dyrbye et al. 2008], which progressively worsen over time and are associated with poorer patient care [Shanafelt et al. 2002; Chen et al. 2016]. To help students cope with these stressors and enhance professional development, medical schools have sought to identify methods to improve resilience and enhance empathy [Cooke et al. 2010]. By building resilience early in their professional education, providers may be able to incorporate wellness strategies during a time of dynamic professional identity formation and continue to build upon these strategies throughout the remainder of their careers.

Engagement with the arts, including reflective writing, poetry, prose, and visual art, may be particularly well suited to meet this need. Arts engagement enhances subjects' observation, communication, and collaboration skills, improves self-awareness, reduces burnout, and enhances empathy [Reilly et al. 2005; Yang et al. 2011; Chen & Forbes 2014; Blease 2016]. In addition to these benefits, engagement with the arts provides an avenue for critical reflection and facilitates the development of leadership skills, professionalism, and ethical behavior, each a critical component of provider competence [Branson 2007; Cohen 2006]. At the Uniformed Services University (USU), an array of opportunities exists for students to engage with the arts, including open-microphone events, expressive mask-making classes, humanities electives, medical improvisation seminars, and narrative mapping workshops. Positive feedback from learners prompted the following questions: (1) what unmet needs for arts engagement exist? (2) how can we increase learner engagement with the arts?

In this context, the Kern six-step model for curriculum development provided a framework for an innovative, replicable strategy to answer these questions. Using this model, we explored students' views on the use of arts as a medium for professional development and enhanced empathy. We further identified barriers to student engagement with the arts, identified modalities ideally suited to overcome such barriers, and created a forum for arts engagement in the form of a student-driven, online and print literary review for federal healthcare students.

Development Process

Our development process was guided by Kern's six steps [Kern et al. 2009].
(1) Problem Identification and General Needs Assessment

In spring 2015, we conducted a focus group of USU students and faculty, asking participants to comment on potential benefits of, and barriers to, opportunities for student engagement with the arts. In addition, participants were asked what, if any, artistic modalities they perceived to be of greatest value for their professional development and wellness.

Increased resilience and enhanced professional development were the most commonly cited potential benefits. The lack of a forum in which to share personal artistic expression, and where learners could read, observe, and engage with the artistic expression of others, was the most commonly identified limitation. The creation of a student-run literary review was identified as one possible strategy to overcome this limitation. This and other information gathered was used to develop a needs assessment for our targeted learners.

(2) Needs assessment for targeted learners

In summer 2015, we sent a targeted needs assessment to 750 students via electronic survey. This assessment was designed to: (1) identify the degree to which the arts were incorporated into students' curricula; (2) explore students' views regarding use of arts as a medium for professional development; (3) identify barriers to engagement with the arts; and (4) explore whether students felt a healthcare student literary review was likely to overcome such barriers.

Results: Among respondents (n = 121, response rate: 16%), 24.8% agreed or strongly agreed that literary or visual arts were included in their curricula. While nearly 55% reported that they had an outlet at their institution where they could express themselves artistically/creatively, fewer than 33% reported regularly engaging in any form of artistic expression.

Eighty-two percent of students reported being at least somewhat likely to read a student-run literary review, and greater than 50% agreed or strongly agreed that the presence of a literary and visual arts review would enhance professional development. Further, 67% agreed or strongly agreed that the presence of a literary arts review would contribute to the cultural identity of the federal healthcare student community.

Students preferred that the literary review be available in print and online formats, that the online version be delivered quarterly, and that it contain poetry, reflective writing, fiction, and visual art.

Thereafter, we interviewed eight editors of diverse medical literary journals to identify best practices for creation of a literary review. Interviews were standardized and included questions regarding journal stratification and philosophy; editorial staff composition, roles, and workflow; journal content and formatting; and publication.

(3) Goals and objectives

Based on needs assessment data, we established the goal of our literary review: to nurture and celebrate the finest art of federal healthcare students, to foster empathy and professional development by encouraging reflection on the human condition, and to cultivate a sense of community among federal healthcare students. Our objectives were: after reading the federal healthcare student literary journal, students, as demonstrated by comments offered during subsequent focus groups and/or survey responses, will report: (1) an enhanced sense of community identity; (2) an enhanced sense of professional identity; and (3) enhanced empathy for their fellow human beings.
(4) Educational Strategies

To achieve our goals, we endeavored to create a student-run literary review, available in print and online formats. An online presence was necessary to meet the stated needs of our users, to reach users who were not in close proximity to USU, and to help solicit content for future volumes. To prepare the student editorial team for its tasks, we organized a series of workshops taught by faculty advisors with the requisite expertise. Please refer to Table 1 for further description of the workshops.

(5) Implementation

Informed by survey results and editor interviews, we created an organizational structure for the literary review's editorial staff. Please refer to Figure 1 for an overview of the organizational structure. Student editors were recruited for each role and standard operating procedures were established. Content was solicited using email, social media, and personal networks. The review was primarily student-run, with faculty members available for guidance. Design and layout were completed by the editorial team, with the help of a professional graphic designer. In May 2016, the literary review, Progress Notes, was produced in print and online versions (https://goo.gl/FPKSwn).

Submissions: We received 71 submissions: 28 visual art pieces, 12 reflections, 22 poems, and 9 fiction stories. Our final publication included 19 visual art pieces, 5 reflections, 11 poems, and 5 fiction stories, for an acceptance rate of 56%.

(6) Evaluation and Feedback

The impact of this intervention on empathy, resilience, and professional identity among federal healthcare students is yet to be determined, as the process of soliciting post-publication feedback is ongoing (see below).

Discussion

Here we present a novel, stepwise approach to designing a forum for artistic engagement for healthcare trainees. Drawing from the curriculum development literature, we discovered that use of Kern's six-step model offered a blueprint for creation of a forum for arts engagement that is likely to be replicable at other institutions. We have created a model which (1) determines the arts engagement needs of learners at a given institution; (2) delineates arts engagement outcomes that are most likely to meet these needs; (3) matches targeted outcomes to arts-based strategies likely to achieve them; (4) implements forums for arts engagement that are feasible and sustainable; and, with future projects, (5) evaluates the effectiveness of the implemented program, drawing on community member feedback to modify the forum to better meet those needs.

One barrier to broadly adopting this model may be the heavy demands placed on administrators and faculty at schools of the healthcare professions, which could result in hesitancy to engage in the creation of activities perceived to make that burden even heavier. This may be especially true for activities more likely to be perceived as extracurricular. Our model addresses this concern, as it is driven by students and is based on literature demonstrating the benefits of arts engagement [Reilly et al. 2005; Branson 2007]. Further, despite recognition of the importance of the art and science of medicine, there remains a bias towards "hard sciences" that are perceived to be more "measurable" [Cooke et al.
2010]. The inertia of this train of thought may impede creation of forums for arts engagement at some institutions. We feel that use of the data-driven Kern model makes this more agreeable to those who would otherwise be hesitant to adopt an arts-based tool. Finally, given the unique nature of our sampled population, it is also reasonable to question the utility or desirability of a literary review as a generalizable intervention at other institutions. Instead, we favor the broad use of this framework to create and adapt a locally desired product, fit to meet the needs of the local population.

In the future we aim to use qualitative and quantitative data to evaluate the extent to which our intervention resulted in students meeting our stated goals and objectives. We also hope to identify opportunities for improvements in helping students more effectively achieve these outcomes, and to determine additional benefits that Progress Notes may offer students. Data obtained from these investigations will be analyzed, and lessons learned will be communicated to future leadership as part of continuous quality and process improvement, in order to optimally meet the needs of our target population.

Take Home Messages

Engagement with the arts, including reflective writing, poetry, prose, and visual art, may improve resilience and enhance empathy.

Students within the health professions may feel their institutions lack an outlet for artistic and creative expression.

The creation of a student-run literary review is one possible strategy to allow learners to engage in artistic expression.

The Kern six-step model provides an effective and practical framework for the development and implementation of a literary arts journal within the medical professions curriculum.

Review

The reviewer awarded 5 stars out of 5. Teaching humanism, resilience, and other essential skills in the era of multiple commitments and shrinking time certainly requires inclusion of artistic and creative expression in curricula. Burnout is of increasing concern in the medical profession, and teaching trainees and faculty resilience is one remedy to promote wellness. I agree with the authors that incorporation of art not only promotes empathy and compassion among healthcare professionals, but also enhances their observation skills, which are required clinical skills. What may be novel in this study is that the students lead the curriculum, and it is modelled on a well-known and established curricular framework from the Johns Hopkins group. The paper is well written and well referenced, with an appropriate problem statement and educational framework. It also emphasises the learning outcomes and the importance of program evaluation based on needs assessment, both general and targeted. One limitation is the low response rate in the needs assessment phase, so it is hard to extrapolate the usefulness of this area to all students, especially the non-respondents. I do hope the authors will continue to the next steps of program evaluation as they have indicated. For this subject, sound qualitative outcomes would be very important. I believe this paper would be useful to all those involved in health professions education, regardless of their speciality. Even those not interested in designing a similar curriculum could use these steps in designing other curricula.

Competing Interests: No conflicts of interest were disclosed.
Figure 1. Healthcare Student Literary Review Organizational Chart. Submissions are received by the project manager via email (blue arrow) and sent to the appropriate student section editor for edits and recommendations. Managing editors then make additional edits and recommendations before sending pieces to the editor-in-chief, who makes a third set of recommendations. The editor-in-chief sends submissions (green arrow) back to the managing editors, who together determine the final status of the piece. The project manager communicates all comments to the author through email. During this process, faculty advisors assist with any questions students may have.

Notes On Contributors

Adam Saperstein is an Associate Professor in the Department of Family Medicine at the Uniformed Services University School of Medicine and Daniel K. Inouye Graduate School of Nursing. Donovan Reed is a graduate of the Uniformed Services University of the Health Sciences F. Edward Hebert School of Medicine. He is currently an Intern in General Surgery at the San Antonio Uniformed Services Health Education Consortium, where he has been selected for residency in Ophthalmology. Colin Smith is a graduate of the Uniformed Services University of the Health Sciences F. Edward Hebert School of Medicine. He is currently an Internal Medicine/Psychiatry resident at Duke University. Brian Andrew is a graduate of the Uniformed Services University of the Health Sciences F. Edward Hebert School of Medicine. He is currently an Internal Medicine Intern at Naval Medical Center, San Diego.
Randomised trial of azithromycin versus ofloxacin for the treatment of typhoid fever in adults

In recent years multi-drug resistant (MDR) strains of Salmonella typhi have emerged in many countries including Vietnam. In Vietnam and the Indian sub-continent, isolates of S. typhi resistant to nalidixic acid with a reduced sensitivity to fluoroquinolones have also appeared. Third generation cephalosporins and fluoroquinolones are currently used widely for treating typhoid fever (TF). Azithromycin (AZM) has moderate in-vitro activity against S. typhi but achieves high intracellular concentrations and has been shown to be effective for treating TF when given for 7 or more days. There have been no randomised comparisons using shorter courses of AZM. Five blood-culture-positive Vietnamese adults were enrolled in a pilot study and received AZM 1 g orally once a day for 5 days. Blood cultures were repeated on day 6 and stool cultures on days 6, 7, 8 and 30 days after the start of therapy. All five patients were cured, with a median (range) fever clearance time of 138 (78-228) hours. No relapses were detected. An open randomised comparison was therefore commenced comparing AZM 1 g orally once a day for 5 days versus ofloxacin (OFL) 200 mg orally twice a day for 5 days. An interim analysis of the first 26 blood-culture-positive adults included 12 treated with AZM and 14 with OFL. 17/24 (71%) of the isolates were MDR, 1/24 (4%) were nalidixic acid resistant, but none were resistant to either of the study drugs. All patients in both groups were cured. The median (range) fever clearance time was 126 (60-252) hours for azithromycin and 99 (42-228) hours for ofloxacin (p = 0.08). Cultures of blood and faeces after the end of therapy were negative in all cases. There were mild self-limiting gastrointestinal side effects in the AZM-treated patients but no other significant side effects attributable to either antibiotic. These interim results suggest that a five-day course of AZM or OFL is effective for the treatment of TF in adults due to multi-resistant S. typhi.

1 Department of Infectious Diseases, Faculty of Medicine, University of Medicine and Pharmacy, Ho Chi Minh City; 2 The University of Oxford-Wellcome Trust Clinical Research Unit, Centre for Tropical Diseases, Ho Chi Minh City; 3 Centre for Tropical Diseases, Ho Chi Minh City, Vietnam

INTRODUCTION

With more than 12.5 million cases occurring each year throughout the world, typhoid fever continues to present a considerable health problem, particularly in developing countries. In recent years, multi-drug resistant strains of Salmonella typhi have emerged in many countries including Vietnam [1]. Although fluoroquinolones are used widely for treating these resistant strains, their use is relatively contraindicated in children and in pregnancy because of possible adverse effects on cartilage. Furthermore, isolates of S. typhi resistant to nalidixic acid with a reduced sensitivity or resistance to fluoroquinolones have appeared in Vietnam and the Indian sub-continent [2,3]. Azithromycin, the first of a new class of azalides, has moderate in-vitro activity against S.
typhi [4] but achieves high intracellular concentrations and has been shown to be effective in a murine typhoid caused by Salmonella typhimurium [5] and for treating typhoid fever when given for 7 or more days [6-9]. There have been no randomised comparisons using courses of azithromycin shorter than 7 days in typhoid. In a pilot study, five Vietnamese adults with blood-culture-positive TF received azithromycin 1 g orally once a day for 5 days. All five patients were cured, with a median fever clearance time of 138 hours. No relapses were detected. A study was therefore commenced to compare the clinical and bacteriological efficacy of a five-day course of azithromycin or ofloxacin for the treatment of typhoid fever in adults.

METHODS

The study was performed on the adult typhoid ward at the Centre for Tropical Diseases, Ho Chi Minh City. The hospital is a 500-bed referral centre for Ho Chi Minh City and the surrounding provinces. The study had received ethical approval from the Scientific and Ethical Committee of the Centre for Tropical Diseases, and all patients gave informed verbal consent. Adults (≥15 years old) with the clinical features of enteric fever who were blood culture positive for S. typhi or S. paratyphi A were enrolled in the study. Patients were excluded if they had evidence of severe or complicated disease (coma, shock, visible jaundice, gastrointestinal bleeding, intestinal perforation, pneumonia), a history of significant underlying disease, a previous history of hypersensitivity to either of the trial drugs, previous treatment with a quinolone, third generation cephalosporin or macrolide within one week of hospital admission, or were pregnant. Patients were allocated to one of two treatment groups in an open randomised comparison. The treatment allocations were kept in sealed envelopes which were only opened when the patient had been entered into the study. Patients received either azithromycin 1 g orally once a day for 5 days or ofloxacin 200 mg orally twice a day for 5 days.

Blood cultures were obtained before therapy and 24 hours after the end of therapy (day 6). Faecal cultures (three specimens) and a urine culture were performed before therapy, and faecal cultures were repeated on days 6, 7, 8 and 30 after the start of therapy. Isolates of Salmonella were identified by standard biochemical tests and agglutination with Salmonella antisera. Antimicrobial sensitivities were performed by the modified Bauer-Kirby method, with zone size interpretation based on NCCLS guidelines. Patients in whom S. typhi with an intermediate sensitivity to azithromycin was isolated were still treated with azithromycin if randomised to that drug. A full blood count, SGOT, SGPT, creatinine and urinalysis were performed before therapy and on day 6. If the SGOT, SGPT or creatinine were abnormal, they were repeated until they had become normal. Chest X-ray and other radiological investigations, including abdominal ultrasound, were performed as clinically indicated. Patients were examined daily with particular reference to clinical symptoms, fever clearance time, any side effects of the drug and any complications of the disease. The response to treatment was assessed by clinical parameters (resolution of clinical symptoms and signs), fever defervescence (time to first fall below 37.5°C, axillary, and to remain below 37.5°C for 24 hours), time to eradication of bacteraemia, development of complications and evidence of relapse of infection.
Treatment failure was defined as the persistence of fever and symptoms for more than five days after the end of treatment or the development of any severe complications. Patients who failed were retreated with ofloxacin 10 mg/kg per day for 7 to 10 days or ceftriaxone 2 g/day for 7 to 10 days. Patients were followed up 4-6 weeks post treatment. At this time any clinical evidence of relapse was sought, three stool cultures were performed and any abnormal laboratory investigation was repeated. A full set of microbiological cultures was performed if the symptoms and signs suggested further infection. To detect failure rates of 1% and 20% for ofloxacin and azithromycin respectively (80% power, 5% significance level), 50 patients will need to be recruited in each group. Proportions were compared with the chi-squared test with Yates' correction or Fisher's exact test. Non-normally distributed data were compared using the Mann-Whitney U test. Statistical analysis was performed using the Statview software package.

RESULTS

An interim analysis of the first 26 blood-culture-positive adults included 12 treated with azithromycin and 14 with ofloxacin. S. typhi was isolated from all of the blood cultures, and 8/26 (31%) of the patients had at least one positive pre-treatment faecal culture. All isolates were sensitive to ofloxacin, although 1/26 was resistant to nalidixic acid. None of the isolates were resistant to azithromycin, although 8/26 (31%) (3 randomised to azithromycin) were of intermediate sensitivity on the basis of the zone size. The demographic, clinical and laboratory findings for the culture-confirmed cases of typhoid fever are shown in Table 1. There were no important differences between the admission characteristics of the two groups. There were no treatment failures in either group. Cultures of blood and faeces after the end of therapy were negative in all cases. Mild self-limiting gastrointestinal side effects were seen in five of the azithromycin-treated patients, but there were no other significant side effects attributable to either antibiotic. No relapses were detected.

DISCUSSION

Since 1991, S. typhi resistant to all the conventional first-line antibiotics (ampicillin, cotrimoxazole and chloramphenicol) has been reported from Central and South America, the Middle East, the Indian sub-continent and South East Asia. In Vietnam, by 1996 the proportion of multi-resistant strains isolated from blood cultures at this centre had increased to over 80%. Furthermore, strains with resistance to nalidixic acid and reduced sensitivity to the fluoroquinolones have emerged [2]. Third generation cephalosporins and fluoroquinolones are currently used widely for treating multi-resistant typhoid fever in many countries. In Vietnam, randomised comparative studies of fluoroquinolones (fleroxacin and ofloxacin) and ceftriaxone in adults have shown the fluoroquinolones to be superior. Ceftriaxone given for three or five days gave clinical and microbiological cure rates of 72-87% and 92-93% respectively [9,10]. With fluoroquinolones (ofloxacin or fleroxacin used for 2, 3 or 5 days) the cure rates were 97-100% and 99-100% [9-14]. The results of treatment with fluoroquinolones were significantly worse, however, in patients infected with isolates of S. typhi resistant to nalidixic acid with reduced sensitivity to fluoroquinolones. When treated with ofloxacin for 2 to 3 days, patients infected with nalidixic acid resistant S.
typhi had a significantly longer fever clearance time compared with patients infected with a nalidixic acid sensitive isolate, and had a 44-fold increased risk of needing a further course of antibiotics [2]. The study of new drugs for treating multi-resistant S. typhi is therefore important.

Azithromycin, the first of a new class of azalides, has moderate activity against S. typhi. The reported activity of azithromycin against S. typhi (MIC90 8 mg/L, range 2-16 mg/L) [4] is above the reported peak serum level of azithromycin following a 500 mg dose of 0.4 mg/L [15]. Azithromycin, however, is concentrated in the tissues 50- to 100-fold compared with the serum levels and achieves high intracellular concentrations. In a murine model of salmonellosis it was found to be highly active [4]. This discordance between in-vitro susceptibility and in-vivo effectiveness is probably explained by the fact that S. typhi is predominantly an intracellular pathogen.

Azithromycin 500 mg once daily for between 7 and 14 days was found to be effective in adults with typhoid fever in Chile. The fever clearance time was 5.4 days for 5 patients treated for 14 days and 4.8 days for 5 patients treated for 7 days. In an open study in Cairo, 14 patients who received azithromycin as a single 1 g dose on the first day, followed by 500 mg for 6 additional days, were cured with a fever clearance of 4.3 days [6]. In these two studies 3/24 (13%) of patients were still blood culture positive at day 4. In a study in Bahrain, three of four adults failed azithromycin given as 1 g on day one and then 500 mg a day for the next six days [16]. The three failures had clinically deteriorated by day four or five of therapy, and one was blood culture positive on day four. Comparative studies in Cairo of azithromycin (1 g on day 1, 500 mg a day for the next 6 days) in 16 adults and ciprofloxacin (500 mg twice daily for 7 days) in 17 adults cured all patients and gave fever clearances of 4.1 and 3.6 days respectively [7]. In a similar study in India, azithromycin 500 mg a day for 7 days, compared with chloramphenicol 2-3 g per day for 14 days, was 88% clinically and 100% microbiologically successful in 42 adults treated with azithromycin, and 86% and 94% successful in 35 adults treated with chloramphenicol [8].
We wished to use a five-day course of azithromycin, both to be comparable to the five-day course of ofloxacin which is widely used for nalidixic acid sensitive isolates in Vietnam, and to ensure compliance. However, we were concerned about the reports of blood cultures remaining positive after 4 days of treatment with the standard regimen. Furthermore, our studies of the localisation of bacteria in the blood of patients with typhoid have shown that many of the bacteria are in fact extracellular [Wain J, unpublished observations]. Doses of azithromycin higher than the recommended 5-10 mg/kg have been tolerated [17,18]. We therefore investigated the efficacy and tolerability of a short-course, high-dose regimen. This interim analysis has shown that the clinical and microbiological cure rate with azithromycin was 100%, and the fever clearance time of 5.2 days was comparable to the other studies. Apart from some mild side effects with nausea, vomiting and diarrhoea, the azithromycin was well tolerated. In this study, three patients randomised to azithromycin had strains with intermediate sensitivity to azithromycin on the basis of disc zone size. All of them had a good response to azithromycin, with fever clearances of 60, 138 and 162 hours. Ofloxacin in a course of five days was also 100% effective, with a fever clearance time comparable to a previous study. All of the patients randomised to receive ofloxacin had nalidixic acid sensitive isolates.

Our interim results therefore suggest that a five-day course of azithromycin or ofloxacin is effective for treating typhoid fever in adults in an area with a high incidence of multi-drug resistant typhoid fever.

Table 1. Clinical and laboratory features and response to treatment of patients with culture-confirmed typhoid fever. Columns: OFL group (n = 14), AZM group (n = 12). Rows: number of males/females; age (years, median [range]); duration of fever before admission (days); admission temperature (°C, median [range]).
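As an aside, the sample-size statement in the Methods ("50 patients per group" for failure rates of 1% versus 20%, 80% power, 5% two-sided significance) can be checked with the standard normal-approximation formula for comparing two proportions. The paper does not state which exact method was used, so the sketch below (Python with scipy) assumes Fleiss's continuity-corrected formula, which happens to reproduce 50 per group for these inputs.

```python
from math import sqrt, ceil
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Sample size per group for a two-sided comparison of two proportions,
    normal approximation with Fleiss's continuity correction (assumed)."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pbar = (p1 + p2) / 2
    n = (za * sqrt(2 * pbar * (1 - pbar))
         + zb * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2 / (p1 - p2) ** 2
    # Continuity correction (Fleiss et al.)
    n_cc = n / 4 * (1 + sqrt(1 + 4 / (n * abs(p1 - p2)))) ** 2
    return ceil(n_cc)

# Failure rates of 1% (ofloxacin) vs 20% (azithromycin):
print(n_per_group(0.01, 0.20))  # -> 50 per group
```

Without the continuity correction the same inputs give roughly 40 per group, so the correction is the step that matches the number quoted in the Methods.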
ATP-driven separation of liquid phase condensates in bacteria

Liquid-liquid phase separated (LLPS) states are key to compartmentalise components in the absence of membranes; however, it is unclear whether LLPS condensates are actively and specifically organized in the sub-cellular space and by which mechanisms. Here, we address this question by focusing on the ParABS DNA segregation system, composed of a centromeric-like sequence (parS), a DNA-binding protein (ParB) and a motor (ParA). We show that parS and ParB associate to form nanometer-sized, round condensates. ParB molecules diffuse rapidly within the nucleoid volume, but display confined motions when trapped inside ParB condensates. Single ParB molecules are able to rapidly diffuse between different condensates, and nucleation is strongly favoured by parS. Notably, the ParA motor is required to prevent the fusion of ParB condensates. These results describe a novel active mechanism that splits, segregates and localises non-canonical LLPS condensates in the sub-cellular space.

Introduction

In the past, bacteria were often viewed as homogeneous systems lacking the complex sub-cellular spatial organization patterns observed in eukaryotes. The advent of powerful labeling and imaging methods has enabled the discovery of a plethora of mechanisms used by bacteria to specifically and precisely localize components, in space and time, within their sub-cellular volume (Surovtsev and Jacobs-Wagner 2018; Shapiro, McAdams, and Losick 2009). These mechanisms include pole-to-pole oscillatory systems to define the site of cell division (e.g. MinCDE), dynamic, ATP-driven polymerization to drive cell division and cell growth (e.g. FtsZ, MreB), recognition of cell curvature to localize chemotaxis complexes (e.g. DivIVA (Ramamurthi and Losick 2009)), ATP-powered translocation of membrane-bound machines to power cell motility (e.g. Agl-Glt (Faure et al. 2016)), or nucleoid-bound oscillatory systems to localize chromosomal loci (e.g. ParABS (Le Gall et al. 2016)). More recently, it became apparent that many cellular components (e.g. ribosomes, RNA polymerases, P-granules) (Sanamrad et al. 2014; van Gijtenbeek et al. 2016; Moffitt et al. 2016; Racki et al. 2017) display specific sub-cellular localization patterns, leading to the spatial segregation of biochemical reactions (e.g. translation, transcription, or polyP biosynthesis). Most notably, bacteria are able to achieve this precise sub-cellular compartmentalization without resorting to membrane-enclosed organelles.

Liquid-liquid phase separation produces a condensed phase, in which molecules are close enough to each other to experience their mutual attraction, interfaced with a dilute phase. This mechanism provides advantages such as rapid assembly/disassembly and the absence of a breakable membrane, and serves fundamental biological processes such as regulation of biochemical reactions, sequestration of toxic factors, or organization of hubs (Shin and Brangwynne 2017). The first evidence that eukaryotic cells use LLPS came from examination of P granules in C. elegans (Brangwynne et al. 2009). In this study, Brangwynne et al. observed several key signatures of liquid droplets: P granules formed spherical bodies that could fuse together, drip and wet, and displayed a dynamic internal organisation. Since then, many other processes in eukaryotes were shown to display LLPS properties (Banani et al. 2017). A few recent examples show that this phenomenon also exists in bacteria, for instance ribonucleoprotein granules (Al-Husini et al.
2018), the cell division protein FtsZ (Monterroso et al. 2019), and carboxysomes (Wang et al. 2019; MacCready, Basalla, and Vecchiarelli 2020). Thus, LLPS seems to be a universal mechanism to compartmentalise components in the absence of membranes. However, it is unclear whether LLPS condensates are actively and specifically organized in the sub-cellular space, and by which mechanisms.

We addressed this problem by investigating how specific DNA sequences are organized within the sub-cellular volume in space and time. We focused on the ParABS partition system, responsible for chromosome and plasmid segregation in bacteria and archaea (Bouet et al. 2014; Toro and Shapiro 2010; Baxter and Funnell 2014; Schumacher et al. 2015). This system is composed of three components: (1) a centromeric sequence (parS); (2) a dimeric DNA binding protein (ParB) that binds parS; and (3) a Walker A ATPase (ParA). We have previously shown that ParB is very abundant (>850 dimers per cell) (Bouet et al. 2005).

ParB assembles into spherical nano-condensates

Previous studies revealed that the partition complex is made of ~250 dimers of ParB (Adachi, Hori, and Hiraga 2006; Bouet et al. 2005) and ~10 kb of parS-proximal DNA (Rodionov, Lobocka, and Yarmolinsky 1999). This complex is held together by specific, high-affinity interactions between ParB and parS, and by low-affinity interactions between ParB dimers that are essential for the distribution of ParB around parS sites (Figure 1A) (Sanchez et al. 2015; Debaugny et al. 2018). However, technological limitations have thwarted the investigation of the physical mechanisms involved in the formation of partition complexes. We addressed these limitations by first investigating the shape and size of the F-plasmid partition complex, reasoning that the former should inform us on the role of the mechanisms involved in the maintenance of the complex cohesion, while the latter would enable an estimation of protein concentration within the partition complex. To this aim, we combined Photo-Activated Localisation Microscopy (PALM) (Marbouty et al. 2015; Fiche et al. 2013) with single-particle reconstruction (Salas et al. 2017), and used a previously validated functional ParB-mEos2 fusion strain (Sanchez et al. 2015). In addition, we implemented an efficient and well-controlled fixation protocol (Figure S1A and Methods) to limit the influence of partition complex dynamics, and reached a localization precision of 14 ± 6 nm (Figures S1B-C). Most single ParB particles localized to the partition complex, as we previously observed by live PALM (Sanchez et al. 2015) (Figure 1B). Next, we clustered localizations pertaining to each partition complex using a density-based clusterization algorithm (Levet et al. 2015; an illustrative sketch is given below). Partition complexes were positioned near the quarter-cell positions (Figure S1A), as reported previously (Le Gall et al. 2016). Partition complex shapes displayed heterogeneity, reflecting the different three-dimensional projections and conformational states of the complex (Figure 1B-C). Thus, we used single-particle analysis to classify them into eight major class averages that contained most partition complexes (Figure 1C, S1D-E), and applied several quantitative methods to assess particle shape in each class (Figures S1G-I). Direct comparison with simulated spherical particles revealed that experimental particles within each class display highly symmetric shapes with very little roughness.
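For illustration, the density-based clustering step mentioned above can be sketched as follows. The paper uses a Voronoi-based method (Levet et al. 2015); the sketch below substitutes scikit-learn's DBSCAN as a stand-in, on synthetic localizations, with illustrative parameters (an eps of 50 nm and a 10-point minimum, echoing the 50 nm stitching radius and the >10-molecule threshold given in the Methods).

```python
import numpy as np
from sklearn.cluster import DBSCAN

# locs: (N, 2) array of x, y localizations in nm (placeholder data here).
rng = np.random.default_rng(0)
locs = np.vstack([
    rng.normal([500, 250], 20, size=(300, 2)),        # a condensate-like cluster
    rng.normal([1500, 250], 20, size=(300, 2)),       # a second cluster
    rng.uniform([0, 0], [2000, 500], size=(100, 2)),  # diffuse background
])

labels = DBSCAN(eps=50, min_samples=10).fit_predict(locs)

for k in sorted(set(labels) - {-1}):  # label -1 marks unclustered background
    pts = locs[labels == k]
    cx, cy = pts.mean(axis=0)
    print(f"cluster {k}: {len(pts)} localizations, "
          f"centroid = ({cx:.0f}, {cy:.0f}) nm")
```

Any density-based method would serve here; the essential output is the set of per-cluster localizations that feeds the single-particle classification described next.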
From the first eight class averages, we generated 3D reconstructions reaching nanometer-scale isotropic resolution (Figure 1D) (Salas et al. 2017). Next, we estimated the size of the partition complex by calculating the mean full width at half maximum of the reconstructions obtained from each single class average (average of all classes = 43 ± 7 nm) (Figure S1F). Thus, from the size of the partition complex and the average number of ParB dimers in each partition complex (~250 ParB dimers) (Adachi, Hori, and Hiraga 2006; Bouet et al. 2005), we can estimate a local ParB dimer concentration of the order of ~10 mM (see Methods for details). Remarkably, this extremely high concentration is comparable to the total protein concentration in the bacterial cytoplasm (~10 mM, for proteins of the same size as ParB) (Elowitz et al. 1999), suggesting that ParB dimers represent most of the total protein volume within a partition complex.

ParB exists in an equilibrium between liquid- and gas-like phases

Next, we investigated whether single ParB dimers were able to escape partition complexes by using single-particle tracking PALM (sptPALM) (Le Gall et al. 2016; Sanchez et al. 2015; Manley et al. 2008; Niu and Yu 2008). We observed two distinct dynamic behaviors, reflected by low-mobility (or confined) and high-mobility trajectories with clearly different apparent diffusional properties (Figures 2A-B). The low-mobility species is consistent with immobile ParB dimers bound to DNA specifically, whereas the high-mobility fraction is consistent with ParB dimers interacting non-specifically with chromosomal DNA (see discussion in Figure S2). A recent study explored the dynamic behaviour of LacI in conditions where the number of LacI molecules was in vast excess with respect to the number of specific Lac repressor sites, mirroring the experimental conditions found in the ParBS system (Garza de Leon et al. 2017). In this case, the authors observed that ~80% of trajectories were highly mobile, corresponding to repressors interacting non-specifically with DNA, with only ~20% of tracks displaying low mobility and corresponding to repressors bound to their operators. In stark contrast, for the ParABS system, most ParB particles displayed low mobility (~95%), and only a small proportion of particles were highly mobile (<5%) (Figure 2B). Low-mobility trajectories localized near the quarter-cell positions, similarly to ParB complexes (Figure S2A-B). To determine whether these trajectories clustered in space, we calculated the pair correlation between low-mobility trajectories (see Methods). For a homogeneous distribution we would expect a flat line (see dashed curve in Figure 2C). In contrast, we observed a sharp peak at short distances, indicating that low-mobility ParB particles are spatially confined within partition complexes (Figure 2C, blue curve). High-mobility trajectories, instead, occupied the whole nucleoid space (Figure S2A). Thus, from the percentage of high-mobility trajectories detected (~5% of all trajectories) and the estimated number of ParB dimers per cell (~800), one can estimate that only ~20 ParB dimers are typically moving between partition complexes. High-mobility ParB dimers move over the nucleoid volume; thus we estimate that this species exists at a concentration of ~20 nM, five orders of magnitude smaller than the ParB concentration within partition complexes (~10 mM).
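Both concentration estimates quoted above follow from simple geometry. The sketch below (Python) reproduces them; the cell dimensions used for the dilute-phase estimate (a cylinder roughly 2 µm long and 1 µm in diameter) are an assumption chosen for illustration, as the text does not state them explicitly.

```python
import math

N_A = 6.022e23  # Avogadro's number (1/mol)

def molar_concentration(n_molecules, volume_m3):
    """Concentration in mol/L of n molecules in a volume given in m^3."""
    litres = volume_m3 * 1e3  # 1 m^3 = 1000 L
    return n_molecules / N_A / litres

# Dense phase: ~250 ParB dimers in a sphere whose diameter is taken as the
# ~43 nm FWHM of the reconstructions.
d = 43e-9
v_droplet = (4.0 / 3.0) * math.pi * (d / 2) ** 3
print(f"droplet: {molar_concentration(250, v_droplet) * 1e3:.1f} mM")  # ~10 mM

# Dilute phase: ~20 mobile dimers spread over the cell volume.
# Assumed geometry (not given in the text): cylinder ~2 um long, ~1 um wide.
L, r = 2e-6, 0.5e-6
v_cell = math.pi * r ** 2 * L
print(f"gas phase: {molar_concentration(20, v_cell) * 1e9:.0f} nM")  # ~20 nM
```

Running this gives ~10 mM and ~21 nM, matching the two figures in the text and making the quoted five-orders-of-magnitude contrast explicit.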
Therefore, these results suggest that ParB molecules exist in two well-defined phases: (1) a highly condensed, liquid-like state (ParB condensate) containing a high concentration of low-mobility ParB dimers; and (2) a dilute, gas-like phase where single, high-mobility ParB dimers diffuse over the nucleoid through weak non-specific DNA interactions (Sanchez et al. 2013; Taylor et al. 2015; Fisher et al. 2017).

ParB diffuses across phase boundaries

This interpretation suggests that single ParB molecules should be able to diffuse between two different partition complexes within the same cell. To test this, we photobleached a single partition complex and monitored the total fluorescence signal in both the bleached and unbleached complexes (Figure 2D). Strikingly, we observed a clear increase in the fluorescence signal of the bleached complex over time, with a concomitant decrease of the fluorescence signal of the unbleached complex (Figures 2D-E). This behaviour can only be explained by the escape of fluorescent proteins from the unbleached complex into the gas phase, and the entry of proteins from the gas phase into the bleached complex. This produces a net flux between unbleached and bleached complexes and vice versa. The symmetry between both curves suggests that both fluxes have similar magnitudes (Figure 2E, S2C). By modeling the diffusion process, we could fit the experimental curves and estimated the residence time of a ParB molecule in a ParB condensate to be ~100 s (Figures S2D-F), while the typical time in the gas-like phase was ~23 s (see the kinetic sketch below). Note that these typical times provide a good estimate (~90%) of the number of ParB dimers confined within partition complexes (see Figure S2). Eventually, the system reached equilibrium (after ~4 min). At this point, the fluorescence intensities of bleached and unbleached complexes became equal, indicating that ParB molecules equipartitioned between the two condensates. We note that equipartition between ParB condensates would only be possible if single ParB dimers were diffusing across the liquid-like and gas-like phase boundaries, thereby equilibrating chemical potentials between these two phases (Hyman, Weber, and Jülicher 2014). In equilibrium, small condensates in a canonical LLPS system are expected to fuse into a single, large condensate, for instance by Ostwald ripening (Hyman et al. 2014). Thus, the observation that ParB condensates are in mutual stationary equilibrium at times smaller than the cell cycle suggests non-canonical mechanisms that suppress droplet fusion.

ParB condensates are able to merge

Biomolecular condensates can exhibit different internal architectures. For example, the Balbiani body is a solid-like structure maintained by a network of amyloid-type fibers (Shin and Brangwynne 2017), while the liquid-like behavior of P granules denotes an internal disordered organization. Thus, the ability of two compartments to fuse helps distinguish between these two different internal structures. To study whether ParB condensates were able to fuse over time, we performed live time-lapse fluorescence microscopy experiments. We observed that independent partition complexes were able to fuse into a single partition complex (Figure 2F). Notably, we observed that fused complexes move in concert during several seconds (Figure 2F). These fusion events were rapid (~5 ± 3 s) and reversible (14 out of 14 events).
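A minimal kinetic picture of the photobleaching experiment above can be written down from the two residence times alone. The sketch below (Python) integrates a well-mixed three-compartment model: fluorescent molecules in the unbleached condensate (A), the bleached condensate (B), and the gas phase (G). The equal partitioning of gas-phase molecules between the two condensates is an assumption of the sketch, not a measurement.

```python
# Residence times estimated from the FRAP fits in the text.
tau_c = 100.0  # s, mean residence time of a ParB dimer in a condensate
tau_g = 23.0   # s, mean time spent in the gas-like phase

# At t=0 all fluorescence sits in condensate A (B was photobleached).
A, B, G = 1.0, 0.0, 0.0
dt, T = 0.01, 240.0  # integrate for ~4 minutes, the observed equilibration time

for _ in range(int(T / dt)):
    dA = (-A / tau_c + 0.5 * G / tau_g) * dt  # leave A; half of gas re-enters A
    dB = (-B / tau_c + 0.5 * G / tau_g) * dt  # leave B; half of gas re-enters B
    dG = (A / tau_c + B / tau_c - G / tau_g) * dt
    A, B, G = A + dA, B + dB, G + dG

# The difference A - B decays as exp(-t / tau_c), so after ~4 min the two
# condensates carry nearly equal fluorescence (equipartition); the gas phase
# holds a steady fraction tau_g / (tau_c + tau_g) of the molecules.
print(f"after {T:.0f} s: A = {A:.3f}, B = {B:.3f}, G = {G:.3f}")
```

The output shows A ≈ B at 4 minutes, reproducing the equilibration time scale reported above from nothing more than the two fitted residence times.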
To estimate whether the number of ParB molecules was conserved upon fusion, we integrated the fluorescence intensity signal of each ParB condensate before and after fusion. As expected, the integrated intensity increased upon fusion (Figure S2G), but only by 1.5 ± 0.1 fold, instead of the two-fold increase expected for canonical LLPS systems. Conversely, splitting of ParB condensates decreased the integrated intensity by 0.7 ± 0.1 fold (Figure S2H). We note that this ability of ParB complexes to fuse is not unique to the F-plasmid system, and was also observed for P1 plasmids, which also segregate using a ParABS system (Sengupta et al. 2010).

parS and low- and high-affinity ParB interactions are sufficient for phase separation

Next, we performed a thermodynamic analysis and Monte-Carlo simulations to find the minimal set of assumptions necessary to reproduce an equilibrium between liquid- and gas-like phases. For this, we considered a minimalistic gas-liquid lattice model that shows, in mean-field theory, a generic LLPS diagram (Figure S3A). In our experimental conditions, the percentage loss of plasmid in a ΔParAB mutant strain can be used to estimate on average ~1 plasmid per ParB condensate. Thus, we simulated 300 ParB particles on a three-dimensional 2 × 0.5 × 0.5 µm³ lattice with a spacing of 5 nm, and represented the F-plasmid by a single, static, high-affinity binding site within this lattice containing a repeat of 10 parS sequences. ParB interacted with high affinity with the parS cluster (hereafter parS10), with low affinity with all other sites in the lattice, and with low affinity with itself. Affinities and concentrations were based on experimentally determined coefficients (Figure S3). The system was left to evolve under different initial conditions (pure gas-like or liquid-like states), and in the presence or absence of parS10. The system remained in a gas-like phase in the absence of parS10 (Figure 3A, green curve) or when ParB-ParB interactions were too weak (Figure S3D-E). In contrast, in the presence of parS10 and of physiological ParB-ParB interactions, the system sequentially evolved towards states involving parS binding, nucleation, and stable coexistence of liquid- and gas-like phases (Figure 3A, blue curve). The system evolved towards the same endpoint when the initial condition was a pure liquid-like state (Figure 3A, gray curve). At this endpoint, 80% of the molecules were in the liquid phase and the remainder in a gas-like state, comparable to the proportion of clustered, low-mobility trajectories observed experimentally (95%, Figure 2B). Thus, this minimalist model (sketched below) reproduces the basic experimental phenomenology and shows that the only elements required to reach a stable coexistence between liquid- and gas-like ParB phases are: low-affinity interactions between ParB dimers, and high-affinity interactions between ParB and the nucleation sequence parS. Critically, ParB was unable to form a stable liquid-gas coexistence phase in the absence of parS within physiological timescales (Figure 3A, green curve), whilst the nucleation of ParB condensates was accelerated by an increasing number of parS sequences (Figure S3F). Thus, our simulations suggest that efficient nucleation of ParB into condensates requires parS. We experimentally tested this prediction by performing sptPALM experiments in a strain lacking parS. In contrast to our previous results, we frequently observed high-mobility trajectories (Figure 3B-C).
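A heavily scaled-down sketch of such a lattice Monte-Carlo simulation is given below (Python). The lattice size, number of moves, and interaction energies are illustrative placeholders rather than the experimentally calibrated values used in the paper; uniform non-specific DNA binding contributes equally on every site and therefore cancels in the move acceptance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters in units of kT (placeholders, not calibrated values).
J = 0.8          # ParB-ParB nearest-neighbour attraction
E_PARS = 8.0     # extra binding energy on the parS10 cluster sites
NX, NY, NZ = 40, 10, 10   # scaled-down lattice (5 nm spacing in the paper)
N_PART, STEPS = 300, 200_000

occ = np.zeros((NX, NY, NZ), dtype=bool)
# parS10: a single static cluster of high-affinity sites mid-lattice.
pars_sites = {(NX // 2, NY // 2, z) for z in range(NZ // 2 - 1, NZ // 2 + 2)}

flat = rng.choice(NX * NY * NZ, size=N_PART, replace=False)
particles = [tuple(int(v) for v in np.unravel_index(i, (NX, NY, NZ)))
             for i in flat]
for p in particles:
    occ[p] = True

NBRS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def insertion_energy(site):
    """Energy of placing a particle at `site`: parS bonus + neighbour bonds."""
    x, y, z = site
    e = -E_PARS if site in pars_sites else 0.0
    for dx, dy, dz in NBRS:
        if occ[(x + dx) % NX, (y + dy) % NY, (z + dz) % NZ]:
            e -= J
    return e

for _ in range(STEPS):
    i = int(rng.integers(N_PART))
    old = particles[i]
    dx, dy, dz = NBRS[int(rng.integers(6))]
    new = ((old[0] + dx) % NX, (old[1] + dy) % NY, (old[2] + dz) % NZ)
    if occ[new]:
        continue                      # hard-core exclusion
    occ[old] = False                  # take the particle off the lattice
    dE = insertion_energy(new) - insertion_energy(old)
    if dE <= 0 or rng.random() < np.exp(-dE):   # Metropolis acceptance
        occ[new] = True
        particles[i] = new
    else:
        occ[old] = True               # rejected: restore the old position

# Crude order parameter: fraction of particles with >= 1 occupied neighbour.
bound = sum(any(occ[(x + dx) % NX, (y + dy) % NY, (z + dz) % NZ]
                for dx, dy, dz in NBRS)
            for x, y, z in particles) / N_PART
print(f"fraction with at least one neighbour: {bound:.2f}")
```

Removing `pars_sites` or lowering J in this sketch mimics the no-nucleation and weak-interaction regimes described above, although quantitative phase behaviour of course requires the full calibrated model.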
In fact, the proportion of this population increased from ~5% in the wild type to ~50% in the absence of parS. The persistence of low-mobility trajectories suggests that a fraction of ParB dimers can remain bound to other chromosomal sequences mimicking parS sites. To test this hypothesis, we first searched the E. coli chromosome for sequences displaying homology to the parS motif (TGGGACCACGGTCCCA) using FIMO, and found tens of sequences displaying only two or three mutations from the consensus parS sequence (Figure S3G). Second, we observed using Monte-Carlo simulations that single, parS-like chromosomal sequences can be partially occupied in the absence of parS (Figure S3H). Finally, we found using sptPALM that a ParB mutant with reduced parS-binding affinity exhibits a further reduction in the low-mobility population with respect to the strain lacking parS (from 50% to 30%, Figure S3I). Overall, these results indicate that weakening ParB-parS interactions leads to a large shift of ParB dimers towards the gas phase. However, it was not clear whether the remaining low-mobility ParB dimers still formed condensed phases. To address this question, we performed pair-correlation analysis of low-mobility trajectories (Figure 3D). Clearly, low-mobility trajectories displayed a homogeneous distribution and failed to cluster in both mutant strains, in contrast with the clustering behaviour of low-mobility trajectories in the wild type (Figure 3D). All in all, these observations show that formation of ParB condensates requires ParB-parS interactions, and are consistent with previous studies (Erdmann, Petroff, and Funnell 1999; Sanchez et al. 2013).

A second conclusion of our Monte-Carlo model is that ParB dimer interactions are also required for the formation of a condensed phase. To test this prediction experimentally, we used a strain carrying a mutation in the arginine-rich motif of ParB that partially reduces ParB dimer interactions (ParB-3R* mutant). In this mutant, formation of partition complexes is considerably impaired (Debaugny et al. 2018). By sptPALM, we observed that low-mobility trajectories considerably decrease their ability to cluster (Figure 3D), consistent with previous studies (Debaugny et al. 2018; Song et al. 2017), and in support of our Monte-Carlo simulations (Figure S3D-E). Taken together, these data indicate that high-affinity ParB-parS interactions and low-affinity ParB dimer interactions are needed for the assembly of stable liquid-like ParB condensates. Weak interactions between ParB dimers and nsDNA were also shown to be necessary for partition complex assembly (Sanchez et al. 2013; Taylor et al. 2015; Fisher et al. 2017).

Motor proteins drive ParB condensates out-of-equilibrium

At equilibrium, a passive system undergoing phase separation displays a single condensate at steady state. In contrast, we showed that single partition complexes can merge at short time-scales (tens of seconds, Figure 2F). At longer time-scales (tens of minutes), however, ParB condensates are not only kept apart from each other but are also actively segregated concomitantly with the growth of the host cell (Figure 4A). Previous reports have shown that the ParA motor protein is necessary to ensure faithful separation and segregation of partition complexes (Le Gall et al. 2016; Erdmann, Petroff, and Funnell 1999). To test whether ParA maintains this separation, we used a strain in which ParA can be conditionally degraded (degron strain, Figure 4B). In this strain, we observed a large reduction in ParA levels after 30 min of degron induction (Figure 4C). In the absence of ParA
degradation, this strain displayed wild-type ParB condensate separation dynamics (Figure 4D, white traces). Strikingly, upon degradation of ParA, we observed the fusion of ParB condensates (Figure 4E, white traces). Over time, the average number of ParB condensates per cell collapsed to 0.6 ± 0.7 ParB condensates/cell in ParA-degraded cells, but remained at 2.9 ± 1.0 ParB condensates/cell in cells where ParA degradation was not induced (Figure 4F, S4A). To test whether this reduction was due to the fusion of partition complexes, we quantified the mean intensity of ParB condensates before (0 h) and 3 h after ParA degradation (Figure S4B). The marked increase in the mean intensity of ParB condensates over time (from 2.9 ± 0.9 at 0 h to 4.9 ± 2.3 at 3 h) further suggests that the observed decrease in the number of ParB condensates was due to condensate fusion. While these results show that ParA is necessary to maintain ParB condensates apart, they do not provide direct evidence for the need of ParA's ATPase activity in this process. For this, we quantified the number of partition complexes in cells where we directly mutated ParA's ATPase nucleotide-binding site (ParA K120Q, a mutant impaired in ATP hydrolysis stimulated by ParB). The mean number of partition complexes was 3.0 ± 1.1 in wild-type cells, and decreased to 1.8 ± 0.9 in the ParA ATPase mutant strain (Figure 4G), consistent with ParA's ATPase activity being implicated in the separation of ParB condensates.

Discussion

Here, we provide evidence in support of a new class of droplet-forming system with unique properties that ensure the stable coexistence and regulated inheritance of separate liquid condensates. Interestingly, the three components of the ParABS partition system, and an exquisite regulation of their multivalent interactions, are required to observe this complex behavior: (1) high-affinity interactions between ParB and the parS centromeric sequence are necessary for the nucleation of the ParB condensate; (2) weak interactions of ParB with chromosomal DNA and with itself are necessary to produce phase separation; (3) the ParA ATPase is required to counter condensate fusion and to properly position condensates in the cell. Interestingly, this system displays several non-canonical properties, discussed below.

In vivo, ParB binds parS and an extended genomic region of ~10 kb around parS (Murray, Ferreira, and Errington 2006; Rodionov, Lobocka, and Yarmolinsky 1999; Sanchez et al. 2015). Interestingly, the size of this extended genomic region is independent of the intracellular ParB concentration (Debaugny et al. 2018), suggesting that an increase in the total number of ParB molecules per cell either leads to an increase in the concentration of ParB within droplets, or that an unknown mechanism controls the size of ParB droplets. Here, we found that the intensity of ParB condensates increases by 1.5 ± 0.1 fold upon fusion, instead of doubling as expected for a conservative process. Consistently, the intensity of ParB condensates did not decrease by two-fold but by 0.7 ± 0.1 upon condensate splitting. All in all, these results suggest that droplet fusion/splitting may be
semi-conservative, and point to a more complex mechanism regulating ParB number and/or droplet size.

Proteins in canonical LLPS systems assemble into compact droplets, such as oil molecules in an oil-in-water emulsion. In this case, an increase in the number of 'oil' molecules leads to an increase in droplet size, because the concentration of oil molecules in droplets is at its maximum (i.e. the droplet is compact). There are two key differences in the ParB system.

In passive, canonical phase-separation systems, separate liquid droplets grow by taking up material from a supersaturated environment, by Ostwald ripening, or by fusion of droplets. Over time, these processes lead to a decrease in the number of droplets and an increase in their size. This reflects the behaviour we observed for ParB condensates upon depletion of ParA or when its ATPase activity was reduced. Recently, theoretical models have predicted mechanisms to enable the stable coexistence of multiple liquid phases (Zwicker, Hyman, and Jülicher 2015). These mechanisms require constituents of the phase-separating liquids to be converted into each other by nonequilibrium chemical reactions (Zwicker, Hyman, and Jülicher 2015; Zwicker et al. 2016). Our result that the ATPase activity of ParA is required to maintain ParB condensates well separated indicates that motor proteins can be involved in the active control of droplet number and in their sub-cellular localization. Interestingly, similar mechanisms, yet to be discovered, could control the number, size, and sub-cellular localization of other droplet-forming systems such as P-granules (Brangwynne 2011), stress granules (Jain et al. 2016), or heterochromatin domains (Strom et al. 2017).

Data availability statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Code availability

Most analysis tools and methods used in this study are in the open domain (see Methods). Code developed specifically for this manuscript can be obtained from the corresponding author upon request.

sptPALM, PALM, FRAP and time-lapse microscopy (fast dynamics)

Microscopy coverslips and glass slides were rinsed with acetone and sonicated in a 1 M KOH solution for 20 minutes. Next, they were dried over an open flame to eliminate any remaining fluorescent contamination. A frame of double-sided adhesive tape was placed on a glass slide and a ~5 mm channel was extruded from its center, as previously described (Le Gall, Cattoni, and Nollmann 2017). Briefly, 20 µl of 2% melted agarose (diluted in M9 media, melted at 80°C; for the imaging of the DLT3469 strain: 1% melted agarose diluted in M9 + 0.2% arabinose) were spread on the center of the glass slide and covered with a second glass slide to ensure a flat agarose surface. The sandwiched slides were kept in a horizontal position for 5 min under external pressure at room temperature (RT) to allow the agarose to solidify. The top slide was then carefully removed when bacteria were ready to be deposited on the agar pad (see below). Cells were harvested during the exponential phase (optical density at 600 nm: ~0.3).
For PALM experiments, cells were fixed in a 2% PFA solution for 15 min at room temperature and 30 min at 4°C (for the detailed procedure, refer to (Visser, Joshi, and Bates 2017)). A bacterial aliquot was spun down in a bench centrifuge at RT at 3,000 g for 3 minutes. The supernatant was then discarded and the pellet re-suspended in 10 µl of minimal media. 1.5 µl of 1/10 fiducial beads (TetraSpeck, Life Technologies) were added. 1.7 µl of the resulting bacterial solution were pipetted onto the agar. After deposition of bacteria, the second face of the double-sided tape was removed and the pad was then sealed with a clean coverslip.

Time-lapse microscopy (slow dynamics)

Long-term ParB-GFP droplet dynamics (Figure 4) were monitored by following the movement of ParB droplets over several cell cycle times. For this, cells were first harvested during the exponential phase (optical density at 600 nm: ~0.3), then diluted 1:300 to adjust the bacterial density in the field of view, and finally deposited in an ONIX CellAsic microfluidic chamber (B04A plates; Merck-Millipore). Bacteria were immobilized and fresh culture medium (supplemented with 0.2% arabinose) was provided (10.3 kPa) throughout the acquisition. Images were acquired at 50 ms exposure time (8 ms for sptPALM) with a 561 nm readout laser (Sapphire 561LP, 100 mW, Coherent) and continuous illumination with a 405 nm laser for photoactivation (OBIS 405 50, Coherent). Data were acquired using custom-made code written in LabView. The readout laser intensity used was 1.4 kW/cm² at the sample plane. The intensity of the 405 nm laser was modified during the course of the experiment to maintain the density of activated fluorophores constant while ensuring single-molecule imaging conditions. PALM acquisitions were carried out until all mEos2 proteins were photoactivated and bleached. Fluorescent beads (TetraSpeck, Life Technologies) were used as fiducial marks to correct for sample drift by post-processing analysis. For cell segmentation, a bright-field image was taken of each field of view.

Time-lapse microscopy

Long-term time-lapse microscopy to monitor ParB-GFP droplet dynamics was performed on the same optical setup by continuously acquiring images at 50 ms exposure time with a 561 nm readout laser (Sapphire 561LP, 100 mW, Coherent) until complete photobleaching.

FRAP

FRAP experiments on ParB-GFP droplets were conducted on a ZEISS LSM 800 by acquiring 318x318x5 (XYZ) images, every 10 seconds, with 106x106x230 nm³ voxels exposed for 1.08 µs. Data were acquired using Zen Black (Zeiss acquisition suite). The photo-bleaching of ParB-GFP droplets was performed in circular regions with a diameter of 1.2 µm covering single droplets, in cells with exactly two droplets. To be able to detect the droplets and estimate their intensity afterwards, they were only partially photo-bleached (approximately 50%). To estimate photo-bleaching during fluorescence recovery, fluorescence intensity was also monitored in bacteria in which ParB-GFP droplets were not pre-bleached. The resulting z-stack images were then projected and analyzed.

ParA degradation experiments

ParA degron experiments (DLT3469 strain) were conducted on an Eclipse TI-E/B wide-field epifluorescence microscope with a phase-contrast objective.
To quantify the number of ParB-GFP droplets, samples were prepared at different time points (t_induction, t_induction + 1h, t_induction + 2h, t_induction + 3h, and one night after induction of degradation) by harvesting cells from the solution, and snapshot images were acquired with an exposure time of 0.6 s. Images were then analysed using the MATLAB-based open-source software MicrobeTracker and the SpotFinderZ tool (Sliusarenko et al., 2011). To monitor the dynamics of ParB-GFP droplets, images were acquired every 42 seconds.

PALM

Localization of single molecules for PALM experiments was performed using 3D-DAOSTORM (Babcock, Sigal, and Zhuang 2012). To discard poorly sampled clusters, localization stitching was used to select clusters with more than 10 ParB-mEos2 molecules. Stitching was realized by grouping together localizations appearing in successive frames, or separated by up to 10 frames (to account for fluorescent blinking), and that were within 50 nm of each other. Stitched localizations were clustered using a Voronoi-based cluster identification method (Levet et al. 2015; Cattoni et al. 2017). In total, 990 clusters with an average of 457 ± 8 localizations (mean ± std) were analyzed (Figure S1C). To validate that chemical fixation did not alter the localization of ParB droplets, we measured the position of droplets in small cells, as these most often contained two droplets (Figure S1A). ParB-mEos2 droplets observed by PALM displayed the typical localization pattern of ParB-GFP observed in live cells, i.e., a preferential localization near quarter positions along the cell's main axis (Le Gall et al. 2016).

3D isotropic reconstructions from single-molecule localizations

Super-resolved images were reconstructed and classified using our Single-Particle Analysis method (Salas et al. 2017) based on iterations of multi-reference alignment and multivariate statistical analysis classification (van Heel, Portugal, and Schatz 2016). N = 990 clusters were sorted into 50 class averages (Figure S1D): 51% (506/990) of the total clusters fell into only eight classes (Figure S1E), which were further analysed for particle reconstructions. To measure the size of the partition complex, we reconstructed the three-dimensional structures of the eight most represented classes using the angular reconstitution method (Van Heel 1987) and evaluated their Full-Width at Half Maximum (FWHM, Figure S1F).

sptPALM

Single molecules for sptPALM analysis were localized using Multiple Target Tracking (MTT) (Sergé et al. 2008). Single ParB-mEos2 localizations were linked into single tracks if they appeared in consecutive frames within a spatial window of 500 nm. To account for fluorescent protein blinking or missed events, fluorescence from single ParB-mEos2 molecules was allowed to disappear for a maximum of 3 frames. Short tracks of fewer than four detections were discarded from subsequent analysis. The mean-squared displacement (MSD) was computed as described in (Uphoff, Sherratt, and Kapanidis 2014). The apparent diffusion coefficient D* was calculated from the MSD using the standard 2D relation D* = MSD/(4Δt). In all cases, the distributions of apparent diffusion coefficients were satisfactorily fitted by a two-Gaussian function. These fits were used to define the proportions of low- and high-mobility species present for wild-type and mutant strains. Trajectories were then classified as low- or high-mobility using a fixed threshold. The value of the threshold was optimized so that the number of trajectories in each species matched that obtained from the two-Gaussian fit.
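As an illustration of this part of the pipeline, here is a minimal Python sketch (function names hypothetical; the published analysis uses MTT and the estimator of Uphoff et al. 2014, which may differ in detail from the simple one-step estimator used below):

```python
import numpy as np

DT = 0.008  # sptPALM frame time (s): 8 ms exposure, as stated above

def apparent_D(track):
    """Apparent diffusion coefficient D* from the one-step MSD of a 2D track.

    track: (N, 2) array of x, y positions (um) in consecutive frames.
    Uses the 2D relation MSD(dt) = 4 D* dt, i.e. D* = MSD / (4 dt).
    """
    steps = np.diff(track, axis=0)
    msd = np.mean(np.sum(steps ** 2, axis=1))
    return msd / (4.0 * DT)

def classify_tracks(tracks, threshold):
    """Keep tracks with >= 4 detections and split them into low/high
    mobility at a fixed D* threshold (True -> low-mobility)."""
    d_star = np.array([apparent_D(t) for t in tracks if len(t) >= 4])
    return d_star, d_star <= threshold
```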
Pairwise distances between ParB molecules were computed as follows. First, tracks appearing in a spatio-temporal window of 50 nm and 67 frames were linked together, to correct for long-time fluorescence blinking of mEos2 that could bias the results towards short distances. Then, the distances between the first localizations of each trajectory were computed. Finally, these distances were normalized by a homogeneous distribution obtained by simulating random distributions of 500 emitters in a 1 µm by 0.5 µm nucleoid space. The threshold used to define the shaded region in Fig. 2C (200 nm) was obtained from the value on the x-axis where the pair correlation surpassed 1. This value is considerably larger than the size of a ParB condensate measured in fixed cells (Figure 1) because the partition complex moves during acquisition in live sptPALM experiments, leading to an effective confinement region of ~150-200 nm (Sanchez et al. 2015).

FRAP

ParB-GFP droplets from FRAP experiments were tracked using 3D-DAOSTORM (Babcock, Sigal, and Zhuang 2012). The fluorescence intensity of each ParB-GFP droplet was computed by integrating the intensity of a 318 nm x 318 nm region (3x3 pixels) centered on the centroid of the focus. The trace in Figure 2E was built by averaging the time-dependent fluorescence decay/increase curves from photo-bleached/unbleached ParB-GFP droplets. Each curve was first normalized by the initial droplet intensity, estimated as the average intensity in the three images prior to photo-bleaching. To estimate and correct for natural photo-bleaching, we used bacteria present in the same fields of view, but whose ParB-GFP droplet had not been photo-bleached.

Calculation of ParB concentrations

The concentration of ParB molecules per partition complex was calculated as follows. The total number of ParB dimers per cell is estimated at ~850 dimers, with 95% of them residing within a partition complex (Figure 1B). Therefore, there are on average N = 850/3.5*0.95 = 231 dimers per partition complex, given that there are ~3.5 partition complexes per cell on average. The volume of a partition complex can be estimated as V = 4/3*Pi*r³ = 4.16*10^-20 liters, with r being the mean radius of a partition complex (r = 21.5 nm). Thus, the concentration of ParB in a partition complex is estimated as C = N/Na/V = 9.2 mM, with Na being Avogadro's number.
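This arithmetic can be checked in a few lines (all values taken from the text above; note that the complex volume is 4.16×10⁻²⁰ L):

```python
import math

N_TOTAL = 850      # ParB dimers per cell
FRAC_IN = 0.95     # fraction residing in partition complexes
N_FOCI = 3.5       # partition complexes per cell
R = 21.5e-9        # mean complex radius (m)
NA = 6.022e23      # Avogadro's number

n = N_TOTAL / N_FOCI * FRAC_IN              # ~231 dimers per complex
v = (4.0 / 3.0) * math.pi * R ** 3 * 1e3    # m^3 -> liters, ~4.16e-20 L
c = n / NA / v                              # mol/L
print(f"N = {n:.0f} dimers, V = {v:.2e} L, C = {c * 1e3:.1f} mM")  # ~9.2 mM
```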
pJYB249. An SsrA tag (SsrA AEAS; AANDENYSENYAEAS) used to specifically induce the degradation of ParA F was introduced in frame after the last codon of ParA, generating pCMD6. The ssrA AEAS sequence conserves the wild-type co-translational signals between parA F and parB F. All plasmid constructs were verified by DNA sequencing (Eurofins).

The movement of three hundred ParB dimers (grey disks) within a simulated cell was governed by free diffusion (1 µm²/s), weak ParB-ParB dimer interactions (J = 4.5 kT), high-affinity ParB-parS interactions, and low-affinity interactions with the lattice. Regardless of initial conditions, ParB dimers formed a condensate in the presence of parS10 (gray, blue). In contrast, no condensate is formed in the absence of a nucleation sequence (green). Schematic representations are used to show the initial and final configurations (left and right panels). For simplicity, parS+ refers to a simulation with parS10. (B) Representative single-molecule tracking experiment of ParB in the absence of parS. Low-mobility trajectories are shown in blue and high-mobility trajectories in red, while the cell contour is schematized as a white line. (C) Distribution of apparent diffusion coefficients for low-mobility (blue, 50%) and high-mobility trajectories (red, 50%). The distribution for a wild-type strain containing parS (same data as in Figure 2B) is shown as a dashed grey line for comparison. n = 209 (number of cells). (D) Pairwise distance analysis of low-mobility trajectories in the absence of parS (green curve), in a ParB mutant impaired for parS binding (cyan curve), and in a ParB-ParB interaction mutant (blue curve). The expected curve for a homogeneous distribution is shown as a horizontal dashed line. The distribution for a wild-type strain containing parS is shown as a dashed grey line for comparison (same data as in Figure 2C). Schematic representations of single-molecule ParB trajectories in the absence (top) and presence of parS (bottom) are shown in the panels on the right. n = 209 (number of cells). Measurements were performed in different biological replicates. Insets show examples of segmented cells (black) with ParB partition complexes (green). Scale bar is 1 µm.

S1E Examples of raw clusters for the eight most represented classes, which represent more than 50% of the total number of clusters (N = 990).

S1F Full width at half maxima (reconstruction size) extracted from the 3D reconstructions obtained from each class average. The average reconstruction size is 46.9 ± 2.5 nm (mean ± std).

S1G-I Quantitative assessment of particle shape. Roundness, sphericity and aspect ratio parameters of individual raw particles were calculated within each class.
Particle roundness was computed as the ratio of the average radius of curvature of the corners of the particle to the radius of the maximum inscribed circle, while sphericity was computed as the ratio of the projected area of the particle to the area of the minimum circumscribing circle (Zheng and Hryciw 2015). In addition, we generated simulated 3D localizations homogeneously distributed within a sphere and reconstructed projected density images to obtain a reference for the shape characteristics expected for a perfect sphere, taking into account experimental localization errors. The results of these analyses reveal that raw particles display highly symmetric shapes (Panel H) with very little roughness in their contour (Panel G). The residual deviations of some classes from a perfect sphere likely originate from shape fluctuations over time. Finally, we computed the aspect ratio of each particle in each class (Panel I). In this case, raw particles were aligned using MRA/MSA (van Heel, Portugal, and Schatz 2016), their sizes were estimated in x and y by measuring the full width at half maximum, and the aspect ratio was calculated as the ratio between the particle sizes in the x and y dimensions. This analysis shows that the aspect ratio was on average close to one, as expected for spherical particles.

Figure S2

S2A Representative images of live cells and single ParB trajectories (low-mobility: blue; high-mobility: red). Processed bright-field images are shown in white. Bright-field images are not drift-corrected, thus in some fields of view trajectories may appear shifted with respect to cell masks. Low-mobility trajectories tend to cluster, while high-mobility trajectories display a dispersed nucleoid localization.

Apparent diffusion coefficients for high- and low-mobility species

The apparent diffusion coefficient of proteins bound to DNA non-specifically was reported to range between 0.3 and 0.7 μm²/s (Garza de Leon et al. 2017; Stracy and Kapanidis 2017; Stracy et al. 2016). These values are consistent with the high-mobility species corresponding to ParB particles interacting non-specifically with chromosomal DNA. The measured apparent diffusion coefficient (D*) comprises cell confinement, motion blurring and localization uncertainty (Stracy et al. 2016). Thus, immobile molecules have a non-zero D* due to the localization uncertainty in each measurement, σ_loc, which manifests itself as a positive offset in the D* value of ~σ_loc²/Δt (Δt is the exposure time). Using the localization uncertainty and exposure times of our experimental conditions (σ_loc ~ 15 nm and Δt = 8 ms), we estimate a non-zero offset in the apparent diffusion coefficient of ~0.02 μm²/s, exactly our estimate of D* for the low-mobility species. Previous studies reported an apparent diffusion coefficient of ~0.1 μm²/s for proteins bound specifically to DNA (Stracy et al. 2016; Zawadzki et al. 2015; Thrall et al. 2017). This D* was similar to that of chromosomal DNA itself (~0.11 μm²/s), as reported by the D* of LacI/TetR specifically bound to DNA (Garza de Leon et al. 2017; Normanno et al. 2015). These studies, however, used different experimental conditions (σ_loc ~ 40 nm and Δt = 15 ms), leading to a D* for immobile particles of ~0.1 μm²/s (Stracy et al. 2016; Zawadzki et al. 2015; Garza de Leon et al. 2017). Thus, we conclude that the D* of our low-mobility species is consistent with previous measurements of the apparent diffusion coefficient of proteins bound specifically to their DNA targets.
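These offsets follow from the σ_loc²/Δt relation above and can be checked directly (values from the text; the first case evaluates to ~0.03 μm²/s, of the same order as the ~0.02 μm²/s quoted):

```python
# expected D* offset for an immobile molecule: ~ sigma_loc**2 / dt
def d_star_offset(sigma_loc_um, dt_s):
    return sigma_loc_um ** 2 / dt_s

print(d_star_offset(0.015, 0.008))  # this study: ~0.028 um^2/s
print(d_star_offset(0.040, 0.015))  # cited studies: ~0.107 um^2/s (~0.1)
```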
Calculation of residence time in ParB condensates

Figure 2E shows the fluorescence intensity profiles over time of both ParB condensates after photo-bleaching of the condensate numbered 1. We first notice that the foci intensity curves are symmetric (see Figure S2C below), as seen by a simple manipulation of the data: I*_1(t) = 2I_∞ − I_1(t) is the curve symmetric to I_1(t) with respect to the asymptotic intensity value I_∞.

S2C Fluorescence intensity curves of the photo-bleached (orange, I_1(t)) and unbleached (green, I_2(t)) condensates. The curve I*_1(t) (red) is symmetric to I_1(t) with respect to the asymptotic value I_∞. It shows the symmetric behavior I_∞ = I_1(t→∞) = I_2(t→∞) ≃ 0.7 between both condensates, which we used in the model developed here.

We therefore describe the FRAP experiments by the following simple kinetic model for the distribution of ParB proteins between the two condensates and the rest of the cytoplasm.

S2D Schematic representation of the model parameters. S_1(t) (S_2(t), respectively) is the ratio of the average number of fluorescent ParB-GFP proteins in the photo-bleached (unbleached, resp.) condensate. σ(t) is the ratio of the average number of fluorescent ParB-GFP proteins outside of the condensates. Due to symmetry, we consider k_1,in = k_2,in = k_in and k_1,out = k_2,out = k_out.

This model is described by the following equations:

dS_i/dt = k_i,in σ(t) − k_i,out S_i(t), i = 1, 2     (1)

where S_1(t) and S_2(t) are, respectively, the ratios of the average number of ParB proteins in the condensates (droplets) F_1 and F_2 after photobleaching, and σ(t) is the ratio of the average number of ParB proteins in the cytoplasm after photobleaching. All these quantities were normalized by S_2(0) (see also below) in order to directly compare the experimental normalized intensities with the model variables. The rates k_i,in (k_i,out, resp.) (i = 1, 2) account for the probability per unit of time to enter (exit, resp.) the condensate F_i. Note that here we assume that the fluorescence intensity is proportional to the number of ParB proteins, and that there are no steric/exclusion effects among ParB. Due to the symmetric behavior of the FRAP signals for each condensate (Figure S2C), we assume that the kinetic rates are the same for each condensate F_i. Therefore the equations write:

dS_1/dt = k_in σ(t) − k_out S_1(t)
dS_2/dt = k_in σ(t) − k_out S_2(t)     (2)

Starting from these assumptions, we first study the asymptotic limit of eq. 2, where we notice from Figure S2C that S_1(t→∞) = S_2(t→∞) = S_∞ ≃ 0.7. In this case it is simple to notice that:

k_in σ_∞ = k_out S_∞     (3)

Live data provided an estimation of this ratio (Figure 2C). We then write the explicit time dependence of S_1(t), S_2(t) and σ(t). As intensities have been normalized by the natural photo-bleaching, we can first use the conservation of the total number of proteins just after FRAP photobleaching:

S_1(0) + S_2(0) + σ(0) = T     (4)

It is useful to consider that the same condition is valid over the whole acquisition (i.e., we also assume that degradation of ParB is negligible during the 5-min experiment); therefore, at long times:

2S_∞ + σ_∞ = T     (5)

The conservation of the total particle number (ratio) in eq. 4 allows us to eliminate the equation for the variable σ from eq. 2 and write the new set of equations:

dS_1/dt = k_in (T − S_1 − S_2) − k_out S_1
dS_2/dt = k_in (T − S_1 − S_2) − k_out S_2     (6)

The general solution of the system of equations (6) is as follows:

S_1(t) = S_∞ + A e^(−k_out t) + B e^(−(2k_in + k_out) t)
S_2(t) = S_∞ − A e^(−k_out t) + B e^(−(2k_in + k_out) t)     (7)

where equations (6) are still symmetric under the 1 ↔ 2 exchange. To perform a fit of the data, it is useful to consider the symmetric and antisymmetric variables S_±(t) = S_1(t) ± S_2(t) and write:

dS_−/dt = −k_out S_−
dS_+/dt = 2k_in (T − S_+) − k_out S_+     (8)

The first equation of (8) (in accordance with the solutions in eq. 7) gives the simple exponential solution:

S_−(t) = S_−(0) e^(−k_out t)     (9)

A simple exponential fit (see Figure S2E below) already provides: k_out = (0.0100 ± 0.0004) s⁻¹

S2E Anti-symmetric intensity curve (blue) and associated fitted curve.

or, reciprocally, a residence time in the focus of 1/k_out ≃ 100 s. By using the previously estimated parameters, k_out and S_−(t), one can still exploit the solution for S_2(t) (in eq. 7) to fit the remaining parameters k_in and S_+(t). We fitted the data with the right-hand side of the following rewritten equation (see fit in Figure S2F):

S_2(t) = [S_+(t) − S_−(t)]/2     (10)

S2F Intensity curve (purple) and associated fitted curve (see eq. 10) (cyan).

This provides an estimate (~90%) of the proportion of ParB proteins confined within ParB condensates, in good agreement with experimental results (~95%). The agreement between theoretical and experimental curves can be verified by plotting eqs. (7) together with the experimental datasets (Figure 2F).

S2G Integrated fluorescence intensity of two ParB condensates before and after fusion. Dots represent the integrated ParB condensate intensity before (blue) and after (red) fusion. The black line and shaded regions represent the median and one standard deviation, respectively. The average increase in fluorescence intensity upon fusion was 1.5 ± 0.1.

S2H Integrated fluorescence intensity of a ParB condensate before and after splitting. Dots represent the integrated ParB condensate intensity before (blue) and after (red) splitting. Black lines and shaded regions represent the median and one standard deviation, respectively. The average decrease in fluorescence intensity upon splitting was 0.7 ± 0.1.
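A minimal sketch of how the kinetic model above can be fitted to the normalized intensity curves with SciPy (variable names hypothetical; initial guesses chosen near the fitted k_out ≈ 0.01 s⁻¹, and the fit strategy here is the two-step one described above, not necessarily the authors' exact code):

```python
import numpy as np
from scipy.optimize import curve_fit

def s_minus(t, a, k_out):
    """Eq. (9): antisymmetric signal S_-(t) = S_-(0) exp(-k_out t)."""
    return a * np.exp(-k_out * t)

def fit_frap(t, i1, i2):
    """t: times (s); i1, i2: normalized intensities of the bleached and
    unbleached condensates. Returns (k_in, k_out)."""
    # step 1: fit the antisymmetric signal to get k_out
    (a0, k_out), _ = curve_fit(s_minus, t, i1 - i2, p0=(-1.0, 0.01))

    # step 2: the symmetric signal relaxes as
    # S_+(t) = S_+inf + b exp(-(2 k_in + k_out) t), from eq. (8)
    def s_plus(tt, s_inf, b, k_in):
        return s_inf + b * np.exp(-(2.0 * k_in + k_out) * tt)

    (s_inf, b, k_in), _ = curve_fit(s_plus, t, i1 + i2, p0=(1.4, -0.5, 0.01))
    return k_in, k_out  # residence time in a condensate ~ 1 / k_out
```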
Figure S3

The Lattice-Gas model

The ParB proteins in the nucleoid were modeled by a Lattice Gas (LG), the paradigmatic system of particles displaying attractive contact interactions (Binney et al. 1992). It is important to note that the LG model presents a first-order phase transition between a gas phase, in which the particles diffuse homogeneously throughout the lattice, and a liquid-gas coexistence phase, in which liquid droplets coexist with diffusing particles in the gas phase. An exchange of particles between the two phases maintains a dynamic equilibrium. As we will see below, the metastable region of the gas can account for the experimental observations on ParBS. We use the LG as a qualitative model for ParB in the nucleoid in order to offer a proof of concept of the mechanism of formation of the ParBS complexes. For 1d, 2d square, and 3d simple cubic lattices, the LG coordination number is q = 2d. Recent work (David et al. 2018) shows that a 1D LG model on a fluctuating polymer like DNA can undergo phase separation, and that such a model can, after averaging over the polymer conformational fluctuations, be assimilated approximately to a LG with short-range (nearest-neighbor) interactions on a lattice with an effective, perhaps fractal, coordination number between 2 and 6 (G. David, PhD Thesis, in preparation). The increase in coordination number from the value of 2 expected for a linear polymer is due to the combined effects of nearest-neighbor (spreading) interactions along the polymer and bridging interactions between proteins widely separated along the polymer but close in space. Here, for simplicity and for purposes of illustrating the basic mechanism, we adopt a simple cubic (sc) lattice.
The ParB proteins were modeled by particles with a diameter a = 5 nm, located on the nodes of a simple cubic lattice representing the segments of the nucleoid to which ParB can bind and from which it can unbind. The distance between nearest-neighbour sites was as large as the size of a particle, i.e., 5 nm. The lattice was chosen to display the dimensions of a bacterial nucleoid: L_x = L_y = 0.5 μm and L_z = 2 μm (equivalent to L_x = L_y = 100a and L_z = 400a). The total number of binding sites in the lattice was then N_s = 4·10⁶. To match the experimentally determined number of ParB proteins per condensate, the particle number was fixed at N = 300. Thus, the particle density ρ = N/N_s was very low (ρ ~ 10⁻⁴), placing the LG model in a configuration where only a first-order phase separation transition can occur. Particles could interact with each other when at adjacent sites of the sc lattice (nearest-neighbour contact interaction) with a magnitude J. For the LG model, the total energy of the system is

E = −J Σ_<i,j> n_i n_j

with n_i the occupation variable at site i, taking the value 1 if site i is occupied and 0 if not. The sum runs over the pairs of nearest-neighbour sites <i,j> of the lattice.

At biological density (ρ ~ 10⁻⁴) and room temperature, the homogeneous (possibly metastable or unstable) system is in the extreme low-density regime. Although the mean-field prediction for the coexistence curve is asymptotically exact in this regime, the same is not true for the mean-field spinodal curve. In this limit it is possible, however, to perform an asymptotically exact low-density (virial) expansion that leads to a simple expression for the system free energy, from which the pressure and chemical potential can be obtained. The coexistence curve is found by equating the pressure and chemical potential in the gas and liquid phases in equilibrium. The spinodal curve is determined by the divergence of the isothermal compressibility. At low density (ρ ≪ 1), the gas-branch coexistence (coex) and spinodal (sp) curves are given by asymptotic expressions derived in (G. David, in preparation, 2019). We present in Fig. S3B the asymptotically exact low-density results for the coexistence and spinodal curves in the density-coupling plane (ρ, J). These limiting low-density forms indicate that the coexistence and spinodal curves become straight lines in a log-linear plot. The homogeneous fluid phase is stable below the coexistence curve, metastable between the coexistence and spinodal curves, and unstable above the spinodal curve; at the density used here, the spinodal coupling computed above is J_up = 7.42 (see Fig. S3B). In the dynamic simulations of the main text, we chose the value J = 4.5, close to the estimated upper limit, in order to be as close as possible to the experimental value of 90% of ParB inside the droplets ((Sanchez et al. 2015), and here). This value is in semi-quantitative agreement with other simulation work on ParB-ParB interactions (Broedersz et al. 2014; David et al. 2018). At J = 4.5, the gas density on the thermodynamic coexistence curve is very low (see Fig. S3A) and the liquid density is very close to 1, which leads to 98% of ParB inside the droplets. We expect, however, simulations of finite-size systems to show quantitative differences with respect to the thermodynamic limit because of boundary effects and enhanced fluctuations.
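A minimal Metropolis sketch of such a Lattice-Gas simulation is given below (Python, kT = 1 units so J is in kT; the parS placement, the parS binding energy E_PARS, and the periodic boundaries are illustrative assumptions rather than the authors' exact scheme):

```python
import numpy as np

rng = np.random.default_rng(1)
L = np.array([100, 100, 400])    # L_x = L_y = 100 a, L_z = 400 a (a = 5 nm)
J = 4.5                          # nearest-neighbour coupling (kT)
E_PARS = 8.0                     # extra parS binding energy (hypothetical value)
NBRS = np.array([[1,0,0],[-1,0,0],[0,1,0],[0,-1,0],[0,0,1],[0,0,-1]])

occ = np.zeros(L, dtype=bool)    # site occupation variables n_i
pars = np.zeros(L, dtype=bool)
pars[50, 50, 195:205] = True     # 10 contiguous parS sites (arbitrary placement)

# random initial configuration: 300 particles, at most one per site
sites = rng.choice(int(np.prod(L)), size=300, replace=False)
pos = np.array(np.unravel_index(sites, L)).T
occ[tuple(pos.T)] = True

def local_energy(p):
    """Energy of a particle at p: -J per occupied neighbour, -E_PARS on parS."""
    nb = (p + NBRS) % L          # periodic boundaries, for simplicity
    e = -J * occ[tuple(nb.T)].sum()
    return e - (E_PARS if pars[tuple(p)] else 0.0)

def mc_sweep():
    """One Monte Carlo sweep: attempt one nearest-neighbour hop per particle."""
    for i in rng.permutation(len(pos)):
        p = pos[i].copy()
        q = (p + NBRS[rng.integers(6)]) % L
        if occ[tuple(q)]:
            continue              # target site occupied: reject the move
        occ[tuple(p)] = False     # remove the particle before evaluating dE
        dE = local_energy(q) - local_energy(p)
        if dE <= 0 or rng.random() < np.exp(-dE):
            occ[tuple(q)] = True  # accept (Metropolis rule)
            pos[i] = q
        else:
            occ[tuple(p)] = True  # reject: put the particle back
```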
Effect of parS on the nucleation

The LG provides a proof of concept of the physical mechanisms at work in the formation of ParB droplets: in the metastable region of the liquid-gas coexistence phase, the parS site acts as a nucleation factor that catalyzes the formation of a ParB droplet. To show this effect, we first thermalized the system from a random distribution of ParB on the nucleoid (corresponding to the absence of coupling, i.e., J = 0) and then switched to the coupling of the simulation, J = 4.5. From time t = 0, we monitored the time evolution of the energy ε and of the number of ParB in the droplet (Figure 3A). The parameters were the same as before, and we averaged over 250 different thermal histories. We checked two different conditions: (i) without a parS sequence; (ii) with a parS sequence, modeled by 10 lattice sites with high affinity for ParB proteins (in practice, 10 ParB were always fixed on these sites after a short transient time). As a control, we also performed a simulation (iii) with an initial state in which all the particles were packed in a single droplet centered on a parS sequence. Convergence towards the equilibrium state was achieved in this case by simple relaxation and not by nucleation. The gas phase is characterized by small values of the energy, ε ~ 0 (the particles are dispersed in the lattice), while in the liquid-gas coexistence phase the condensation of the particles increases the number of interactions and therefore lowers the energy (ε < 0).

In case (i), without parS, the system remained at a small equilibrium value ε ≈ −2.3·10⁻⁴ until the final moment of the simulations, t_f = 3·10⁷ MC steps, i.e., condensation into a droplet was not observed. In case (ii), the system was initially at the small value ε ≈ −2.3·10⁻⁴ as in case (i), but this was followed by another plateau between t = 10⁴ and 10⁵ MC steps corresponding to the binding of ParB to the 10 parS sites. The energy then decreased from ε ≈ −2.0·10⁻⁴ to −8·10⁻⁴, which corresponds to the interactions gained upon condensation. This reflects the experimental phenomenology: without parS, ParB is homogeneously distributed throughout the cell, whereas with parS, ParB forms protein condensates. Case (iii) is the control simulation with the fully packed droplet. At large times it took the same value of ε as case (ii), after a slight decrease corresponding to the escape of particles from the cluster into the gas phase coexisting with the droplet. We also monitored the number of particles inside the droplet (Figure 3A). We observed ~80% of the particles in the droplet, which is in semi-quantitative agreement with experiments (~90% in (Sanchez et al. 2015), and 95% from Figure 2B). In conclusion, the ParB complex behaves similarly to a system in the metastable region of the liquid-gas coexistence phase (Figure S3A). The addition of parS, acting as a nucleator for ParB condensation, significantly reduces the time it takes to form a ParB droplet.

To assess the effect of the interaction strength between ParB particles on their nucleation, we performed additional simulations with lower or higher values of J for our two initial conditions (a gas-like and a liquid-like state). When the system started in a gas phase with weak interaction strengths (J = 3.5 kT, Figure S3D), particles were unable to form condensates regardless of the presence of parS sites. Without parS sites, particles favored a homogeneous distribution, while in the presence of parS sites particles equilibrated to a state with only ParB particles bound specifically to the centromeric sequence.
In the initial conditions where ParB particles were released from a liquid-like state, the system equilibrated to a final state in which we did not observe nucleation, but rather ParB particles specifically bound to the centromeric sequence. In the case of strong interaction strengths (J = 6 kT, Figure S3E), with or without parS sites, particles initially in a gas-like state immediately started forming small droplets (hence the low energy values at initial times) until a single large droplet prevailed over the others. In the presence of centromeric sequences, no binding of ParB to parS was visible because the fast local aggregation of ParB at short timescales gives rise to energy variations of the same order of magnitude. In the case where ParB particles were released from a liquid-like state, particles remained in this state throughout the simulation timescales used here. Overall, these simulations highlight the range of interaction strengths required for ParB to nucleate specifically at parS. For weak ParB-ParB interactions (low J), ParB particles are unable to form a condensate, regardless of the presence of centromeric sequences. For strong ParB-ParB interactions (high J), ParB particles nucleate spontaneously, independently of the presence of parS sequences and therefore of the plasmid to segregate. A balance of ParB-ParB interaction strengths and the presence of the centromeric sequence are therefore both essential for the partition complex to form and to function in plasmid partitioning. The minimalist model presented in this section explains the basic phenomenology of our system; however, it may fail to explain more complex scenarios, for which new, more complex models will need to be devised.

S3D-E Monte-Carlo simulations of the Lattice-Gas model for weak (left panel) and strong (right panel) interaction strengths between ParB particles. (D) Weak ParB-ParB interactions (J = 3.5 kT) and high-affinity ParB-parS interactions are considered here. Regardless of initial conditions (gas- or liquid-like state), ParB proteins are unable to form a condensate in the presence of parS10 (gray, blue) and the system equilibrates to a state where only ParBs bind specifically to the centromeric sequences. (E) In contrast, when strong ParB-ParB interactions (J = 6 kT) are considered, small droplets immediately appear and nucleate into a single larger condensate, independently of the presence or absence of a nucleation sequence (blue and green curves, respectively).

Nucleation of the partition complex with decreasing number of parS sites

To study the influence of the number of parS sites on the speed of nucleation, we performed simulations with a variable number of parS sites (n_parS = 1 to 10). We observed nucleation for all values of n_parS, with n_parS = 1 and n_parS = 2 displaying nucleation times (τ) exceeding the range of MC times reported in the Main Text (see Figure S3F below). However, the nucleation time τ(n_parS) follows a power law, τ(n_parS) ~ 1/n_parS² (see inset), that can be used to extrapolate the nucleation times for n_parS = 1 and n_parS = 2. Overall, these data support the finding of Broedersz et al. (Broedersz et al. 2014) that a single parS site is sufficient to nucleate a ParB condensate, and additionally indicate that the nucleation of ParB condensates is accelerated by the number of parS sequences per plasmid.
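The extrapolation to n_parS = 1 and 2 can be sketched as follows (the nucleation times below are illustrative placeholders with roughly 1/n² scaling, not the simulated data):

```python
import numpy as np
from scipy.optimize import curve_fit

# illustrative placeholder nucleation times (MC steps), roughly ~ 1/n^2
n_pars = np.array([3.0, 4.0, 5.0, 6.0, 8.0, 10.0])
tau = np.array([3.1e6, 1.8e6, 1.1e6, 8.0e5, 4.4e5, 2.9e5])

# fit tau(n) = A / n**2 and extrapolate to the unobserved n = 1, 2
(A,), _ = curve_fit(lambda n, A: A / n ** 2, n_pars, tau,
                    p0=(tau[0] * n_pars[0] ** 2,))
for k in (1, 2):
    print(f"extrapolated tau(n_parS={k}) ~ {A / k ** 2:.2e} MC steps")
```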
S3F Energy as a function of Monte Carlo time for Lattice-Gas simulations using different numbers of parS sites (n_parS). Nucleation can be detected by a terminal decrease in the energy. Inset: nucleation time (τ) versus n_parS. The red dashed line shows a fit to τ(n_parS) = A/n_parS², where A is a constant.

Low-mobility ParB trajectories in the absence of parS

In sptPALM experiments we observed an unexpected low-mobility ParB population in the strain lacking natural parS sequences. The simplest explanation for this result is that, in the absence of parS, low-mobility ParB tracks correspond to ParB dimers bound to chromosomal sequences mimicking parS (hereby parS*). This interpretation suggests that in wild-type strains ParB molecules can be in any of three states: (1) a 'gas-like' state, in which they move through the nucleoid with low-affinity interactions with non-specific DNA; (2) a condensed state, in which they interact specifically with parS and with other ParB dimers; (3) a transient DNA-bound state, in which they can remain bound to other chromosomal sequences (such as cryptic parS*) without interacting with other ParB dimers.

S3G Results of the FIMO analysis for parS-mimicking sequences in the E. coli MG1655 genome. This analysis shows that there are indeed tens of sequences displaying only 2 or 3 mutations with respect to the consensus parS sequence. ParB binds non-specific DNA with an affinity of 250 nM, and parS with an affinity of 2.5 nM. Introduction of 2-3 mutations in the consensus parS sequence changes the affinity of ParB to between 6 and 250 nM, depending on the position of the nucleotide change (Pillet et al. 2011). Thus, tens of cryptic parS* sequences exist that could be populated when the population of ParB in the gas-like state is abundant.

Next, we performed simulations to explore whether cryptic parS* sequences could become occupied in the absence of the ten natural copies of parS of the F-plasmid. For this, we used our Lattice-Gas model to measure the occupancy of single copies of parS while varying the energy of interaction between ParB and parS (see below).

S3H Occupation of single parS sites for different interaction energies as a function of Monte-Carlo time. ParB-parS interactions with affinities between 6 and 200 nM would have energies between 4.6 and 0.2 kT in our simulations (0 kT representing the baseline for nsDNA interactions). Thus, these simulations show that cryptic parS* sites would be occupied (0-50%) in the absence of the 10 wild-type, high-affinity parS sequences, explaining the persistence of low-mobility, unclustered trajectories under these conditions.

Finally, we performed sptPALM experiments on a ParB mutant with reduced parS-binding ability to test whether low-mobility trajectories corresponded to ParB molecules bound to cryptic chromosomal parS* sequences. We observed a reduction in the low-mobility population from 50% in the strain lacking parS to 30% in the mutant with reduced parS-binding ability (Figure S3I). Importantly, residual low-mobility trajectories in this ParB mutant failed to cluster (Figure 3D).
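The energy range quoted above is consistent with binding energies scaling as the logarithm of the affinity ratio to non-specific DNA; this is an assumption of the sketch below, not a relation stated explicitly in the text:

```python
import math

KD_NS = 250.0   # non-specific DNA affinity (nM), taken as the baseline

def binding_energy_kT(kd_nM):
    """Extra binding energy (kT) relative to non-specific DNA,
    assuming delta_E = ln(Kd_ns / Kd)."""
    return math.log(KD_NS / kd_nM)

for kd in (2.5, 6.0, 200.0):
    print(f"Kd = {kd:5.1f} nM -> {binding_energy_kT(kd):.1f} kT")
# parS (2.5 nM) -> ~4.6 kT and 200 nM -> ~0.2 kT, matching the quoted range
```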
S3I Distribution of apparent diffusion coefficients for low- (blue, 30%) and high-mobility trajectories (red, 70%) for a ParB mutant impaired in parS binding (pJYB224/DLT3997). The distribution for a wild-type strain (same data as in Figure 2B) is shown as a dashed grey line for comparison. N = 7,453 tracks.

This new interpretation explains all our experimental observations: (1) in wild-type cells, ParB dimers in the gas-like fraction can stably bind and nucleate at the F-plasmid because it harbours multiple, high-affinity, contiguous parS sequences. This gives rise to the low-mobility, clustered tracks observed by sptPALM. In these conditions, the proportion of ParB dimers in the gas-like state is extremely small; therefore, lower-affinity, sparsely distributed parS* sequences do not become populated. (2) In contrast, in the absence of the 10 wild-type parS sequences, a large number of ParB dimers in the gas-like phase can occupy the lower-affinity, sparsely distributed parS* chromosomal sites, giving rise to low-mobility, unclustered tracks.

In Figure 2, we used fast time-lapse microscopy (50 ms between frames) to observe the fusion of ParB condensates in single cells. From these observations, we found that fusion events occur rapidly (~5 ± 3 s) and, perhaps more critically, were in all cases reversible (N > 14). With wild-type levels of ParA, the subcellular position of ParB condensates fluctuates rapidly (over a timescale of a few seconds) (Sanchez et al. 2015). Thus, in wild-type cells the random encounter of rapidly fluctuating ParB condensates leads to condensate fusion, and the presence of endogenous levels of ParA ensures their subsequent separation. In Figure 4, we show that the average number of ParB condensates per cell decreases slowly (over tens of minutes) and irreversibly when ParA is gradually degraded. These measurements were performed every 30 minutes; thus, transient and rapid fusion events (as observed in Figure 2) could not be detected due to the sparse temporal sampling of these ensemble measurements.
2019-10-10T09:22:43.557Z
2019-10-02T00:00:00.000
{ "year": 2020, "sha1": "29909f9b37624d636ccba595bd4a16542ff06a25", "oa_license": "elsevier-specific: oa user license", "oa_url": "http://www.cell.com/article/S1097276520304366/pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "5765ca241aa7dcb12c77a83748b17e3090bc8182", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Chemistry" ] }
265961013
pes2o/s2orc
v3-fos-license
iTRAQ-based quantitative proteomic analysis of the antibacterial mechanism of silver nanoparticles against multidrug-resistant Streptococcus suis

Background

The increase in antibiotic resistance of bacteria has become a major concern in clinical treatment. Silver nanoparticles (AgNPs) have significant antibacterial effects against Streptococcus suis. Therefore, this study aimed to investigate the antibacterial activity and mechanism of action of AgNPs against multidrug-resistant S. suis.

Methods

The effect of AgNPs on the morphology of multidrug-resistant S. suis was observed using scanning electron microscopy (SEM). Differentially expressed proteins were analyzed by iTRAQ quantitative proteomics, and the production of reactive oxygen species (ROS) was assayed by H2DCF-DA staining.

Results

SEM showed that AgNPs disrupted the normal morphology of multidrug-resistant S. suis and the integrity of the biofilm structure. Quantitative proteomic analysis revealed that a large number of cell wall synthesis-related proteins, such as penicillin-binding protein, and some cell cycle proteins, such as the cell division protein FtsZ and the chromosomal replication initiator protein DnaA, were downregulated after treatment with 25 μg/mL AgNPs. Significant changes were also observed in the expression of the antioxidant enzymes glutathione reductase, alkyl hydroperoxide-like protein, α/β superfamily hydrolases/acyltransferases, and glutathione disulfide reductases. ROS production in S. suis positively correlated with AgNP concentration.

Conclusion

The potential antibacterial mechanism of AgNPs may involve disrupting the normal morphology of bacteria by inhibiting the synthesis of cell wall peptidoglycans, and inhibiting the growth of bacteria by inhibiting the cell division protein FtsZ and the chromosomal replication initiator protein DnaA. High oxidative stress may be a significant cause of bacterial death. The potential mechanism by which AgNPs inhibit S. suis biofilm formation may involve affecting bacterial adhesion and interfering with the quorum sensing system.

Introduction

Streptococcus suis is an important zoonotic pathogen with worldwide prevalence and is considered to be one of the most important bacterial pathogens causing significant economic losses in the swine industry (Segura, 2020). As with most pathogens, the ability of S. suis to form biofilms plays a significant role in its virulence and drug resistance (Wang et al., 2018). Currently, the treatment of S. suis infections relies on antibiotics; however, drug resistance is a concern. Data suggest that available veterinary drugs, such as ampicillin, cefepime, cefotaxime, ceftiofur, ceftriaxone, chloramphenicol, florfenicol, gentamicin, penicillin, and tiamulin, tend to be less effective in treating S. suis infections (Lunha et al., 2022). S. suis has an extremely high rate of resistance to tetracyclines, lincosamides, and macrolides, and this resistance has spread globally (Uruen et al., 2022). Therefore, there is an urgent need to develop efficient alternatives to antibiotics.
Silver has strong antimicrobial potential and has been used since ancient times (Rai et al., 2012). AgNPs are now considered a viable alternative to antibiotics and appear to have appreciable potential to address the concern of bacterial multidrug resistance (Franci et al., 2015). AgNPs show distinct antibacterial and antibiofilm effects on bacteria. For example, it has been shown that AgNPs can act against multidrug-resistant (MDR) Pseudomonas aeruginosa, the main mechanism involving the disequilibrium of oxidation and antioxidation processes and the failure to eliminate excessive ROS (Liao et al., 2019). Siddique et al. provided evidence of AgNPs being safe antibacterial and antibiofilm compounds against MDR Klebsiella pneumoniae (Siddique et al., 2020). Farouk et al. demonstrated the ability of AgNPs to fight MDR Salmonella spp. in vitro and in vivo without adverse effects (Farouk et al., 2020). AgNPs have antibacterial activity against multidrug-resistant bacterial pathogens such as Vibrio cholerae, Staphylococcus aureus, Streptococcus pyogenes, Escherichia coli, and Klebsiella pneumoniae (Chinnathambi et al., 2023). The mechanism of antimicrobial activity of AgNPs involves four steps: (i) adhesion of AgNPs to the cell wall/membrane and its disruption; (ii) intracellular penetration and damage; (iii) oxidative stress; and (iv) modulation of signal transduction pathways (Wahab et al., 2021; Tripathi and Goshisht, 2022). In our previous study, AgNPs showed significant activity against S. suis in vitro (Liu et al., 2022); however, the mechanism of action of AgNPs against S. suis remains unclear.

The occurrence of disease and the therapeutic effect of drugs are always accompanied by fluctuations and changes in numerous proteins. The application of quantitative proteomics enables visualization of the up-regulation and down-regulation of differential proteins through charts, and provides functional annotations to intuitively analyze the possible mechanism of action of drugs (Saleh et al., 2019). In particular, iTRAQ technology is well suited to the study of antibacterial mechanisms.

In this study, ultrastructural observations using scanning electron microscopy (SEM), fluorescence microscopy, and iTRAQ-based quantitative proteomics were used to investigate the antibacterial mechanism of action of AgNPs against multidrug-resistant S. suis. The findings can provide significant insights into the molecular mechanism of AgNPs against S. suis.

A S. suis type 2 strain isolated from a diseased pig, preserved in our laboratory, was employed. In our previous study, we identified it as an MDR bacterium that exhibits resistance to various antibiotics, such as tetracycline, doxycycline, penicillin, florfenicol, cefotaxime, kanamycin, and lincomycin, and determined that the minimum inhibitory concentration (MIC) of AgNPs against it is 25 μg/mL (Liu et al., 2022). S. suis was cultured in trypticase soy broth (TSB) or maintained on trypticase soy agar (TSA) supplemented with 5% bovine serum (Gibco, Auckland, New Zealand) at 37°C.

AgNP treatment and preparations for SEM observation

To observe the impact of AgNPs on the morphology of S. suis, S. suis bacteria were grown to 1 × 10⁸ CFU/mL. After being exposed to 6.25, 12.5, 25, 50, and 100 μg/mL AgNPs for 12 h, the cultures were pelleted by centrifugation, washed with PBS and centrifuged again; the pellets were fixed in 2.5% glutaraldehyde for 2 h at 4°C.
Subsequently, the pellets were centrifuged and washed with sterile PBS. The pellets were then sequentially dehydrated through a graded alcohol series (30, 50, 70, 90, and 100%) and dried to the critical point. After coating with gold, the samples were examined using SEM (Zeiss Sigma 300).

To observe the effect of AgNPs on the biofilms of S. suis, 1 cm × 1 cm coverslips were ultrasonically cleaned for 1 h to remove surface impurities and grease, and sterilized with high-pressure steam. A 6-well plate was prepared by placing a sterile coverslip in each well, and 1 mL of bacterial solution was then added to cover the surface of the coverslips. The culture plate was incubated at 37°C for 24 h, and the supernatant was discarded. The experimental group received TSB containing 25 μg/mL AgNPs and the control group received blank TSB. The culture plate was again incubated at 37°C for 24 h and the supernatant was discarded. The coverslips were immersed in 2.5% glutaraldehyde for 3 h, then dehydrated with a series of concentrations of ethanol (30, 50, 70, 90, and 100%), dried, sputter-coated with gold, and observed with a scanning electron microscope.

Protein extraction and iTRAQ labeling

The bacterial cells were treated with 25 μg/mL AgNPs and harvested by centrifugation. Proteins were extracted as follows. An appropriate amount of sample was weighed and transferred into a 2 mL centrifuge tube; two steel beads and 1× Cocktail with an appropriate amount of SDS L3 and EDTA were added, the tube was placed on ice for 5 min, and DTT was added at a final concentration of 10 mM. A grinder (frequency 60 Hz, duration 2 min) was used to crush the tissue, which was then centrifuged at 25,000 × g and 4°C for 15 min, and the supernatant was collected. DTT was added at a final concentration of 10 mM, and the mixture was placed in a water bath at 56°C for 1 h. IAM was added at a final concentration of 55 mM and the mixture was placed in a dark room for 45 min. Cold acetone was added to the protein solution at a ratio of 1:5, and the mixture was placed in a refrigerator at −20°C for 30 min and centrifuged at 25,000 × g and 4°C for 15 min, and the supernatant was discarded. The precipitate was air-dried, lysis buffer without SDS L3 was added, and a grinder (frequency 60 Hz, duration 2 min) was used to promote protein solubilization. The lysate was centrifuged for 15 min at 25,000 × g and 4°C to collect the supernatant; the supernatant is the protein solution. The proteins were digested, and the resultant peptides were labeled using iTRAQ 8-plex kits (AB Sciex). The untreated samples were labeled as 118, 119, and 121, and the samples treated with AgNPs were labeled as 114, 116, and 118.

LC-MS/MS

The dried peptide samples were reconstituted with mobile phase A (2% ACN, 0.1% FA), centrifuged at 20,000 × g for 10 min, and the supernatant was collected for injection. The separation was performed using a Thermo UltiMate 3000 UHPLC system. The sample was first enriched and desalted on a trap column, and then separated on a self-packed C18 column (75 μm internal diameter, 3 μm particle size, 25 cm column length) at a flow rate of 300 nL/min with the following effective gradient: 0-5 min, 5% mobile phase B (98% ACN, 0.1% FA); 5-45 min, mobile phase B linearly increased from 5 to 25%; 45-50 min, mobile phase B increased from 25 to 35%; 50-52 min, mobile phase B rose from 35 to 80%; 52-54 min, 80% mobile phase B; 54-60 min, 5% mobile phase B.
The nanolitre liquid-phase separation end was directly connected to a mass spectrometer. The peptides separated by liquid chromatography were ionized using a Nano ESI source and then passed to a tandem mass spectrometer, Q-Exactive HF-X (Thermo Fisher Scientific, San Jose, CA), for data-dependent acquisition (DDA) mode detection. The main parameters were set as follows: ion source voltage, 1.9 kV; MS1 scanning range, 350-1,500 m/z; resolution, 60,000; MS2 starting m/z, 100; resolution, 15,000. The ion screening conditions for MS2 fragmentation were as follows: charge 2+ to 6+, and the top 20 parent ions with a peak intensity exceeding 10,000. The ion fragmentation mode was HCD, and fragment ions were detected using the Orbitrap. The dynamic exclusion time was set to 30 s. The AGC was set to 3E6 in MS1 and 1E5 in MS2.

Detection of ROS

S. suis was grown to 1 × 10⁸ CFU/mL and exposed to 6.25, 12.5, 25, 50, and 100 μg/mL AgNPs for 6 h. ROS was measured with 2′,7′-dichlorofluorescein diacetate (H2DCF-DA) based on the method of Liao et al. (2019). Initially, a 10 mM H2DCF-DA stock solution in dimethyl sulfoxide was diluted to a 1 mM working solution with TSB medium. The collected bacteria were washed with PBS and suspended in 1.8 mL of PBS. The samples were then incubated with 200 μL of working solution at 37°C for 30 min in darkness. Subsequently, the cells were harvested, washed, and resuspended in PBS. This bacterial suspension was dropped on a slide and dried naturally in darkness at room temperature before fluorescence microscopy (ZEISS Axio Vert.A1) detection. The cultured bacteria were lysed using an alkaline lysis buffer and centrifuged at 3,000 rpm for 5 min. Subsequently, 1 mL of lysate supernatant was prepared for fluorescence detection (multifunctional microplate reader, Thermo Scientific varios) at excitation and emission wavelengths of 470 and 529 nm, respectively.

Statistical analysis

All data are expressed as means ± standard deviation of three biological replicates. GraphPad Prism 8.0 was used to perform one-way ANOVA at p ≤ 0.05 and to create graphs. Quantification of iTRAQ data was performed using the IQuant software (2.4.0), and the Mascot search engine (v2.3.02, Matrix Science, London, United Kingdom) was used to search the UniProt database.

AgNPs disrupt the morphology and biofilm structure of Streptococcus suis

After treatment with AgNPs for 12 h, the bacteria of each multidrug-resistant S. suis culture were collected for morphological examination using SEM. Compared to the control, pits appeared on the surface of the bacterial cells after AgNP treatment, and the cell morphology was distorted (Figure 1). The analysis showed that AgNPs disrupted the morphology of S. suis, and the destruction of cells was aggravated with increasing AgNP concentration. Furthermore, the structure of the biofilm was destroyed by AgNPs (Figure 2).

Analysis of DEPs after AgNP treatment

Quantitative proteomic analysis identified 1,268 bacterial proteins, and a volcano map of the differentially expressed proteins (DEPs) in S. suis was generated (Figure 3A). In total, 633 upregulated and 635 downregulated proteins were significantly altered in response to AgNP exposure at 25 μg/mL. The volcano plot depicts the log2 fold change (x-axis) versus the −log10 Q-value (y-axis, representing the probability that the protein is differentially expressed). A Q-value < 0.05 and a fold change > 1.2 were considered significant differential expression thresholds. Quantitative repeatability was assessed using the CV (CV = SD/mean); the lower the CV value, the better the repeatability. The CV value in this experiment was 0.12, indicating good repeatability (Figure 3B).
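As a sketch of the thresholding and repeatability metrics just described (the column names and data layout are hypothetical, not those of the IQuant output):

```python
import numpy as np
import pandas as pd

def classify_deps(df, fc=1.2, q=0.05):
    """Label proteins with the thresholds above: Q < 0.05 and fold change > 1.2.
    df: DataFrame with (hypothetical) columns 'log2fc' and 'qvalue'."""
    sig = (df["qvalue"] < q) & (df["log2fc"].abs() > np.log2(fc))
    return np.where(~sig, "ns", np.where(df["log2fc"] > 0, "up", "down"))

def cv(intensities):
    """Coefficient of variation across replicate columns, CV = SD / mean."""
    x = np.asarray(intensities, dtype=float)
    return x.std(axis=1, ddof=1) / x.mean(axis=1)

# example: |log2fc| must exceed log2(1.2) ~ 0.263 to count as differential
df = pd.DataFrame({"log2fc": [1.1, -0.1, -0.9], "qvalue": [0.01, 0.8, 0.03]})
print(classify_deps(df))  # ['up' 'ns' 'down']
```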
Functional annotation analysis of DEPs

Gene Ontology (GO) analysis classified all the identified proteins and DEPs into three broad categories: molecular function, cellular component, and biological process (Figure 4A). Molecular function analysis showed that AgNP treatment significantly affected catalytic activity, binding, transporter activity, structural molecule activity, and transcription regulator activity. A partial GO enrichment analysis of DEPs is shown in Table 1.

Proteins related to antioxidant activity

The heatmap of antioxidant-activity proteins in the AgNP-treated group versus the control group shows that most of these antioxidant proteins exhibited a trend of upregulated expression, except the hydrolases/acyltransferases of the α/β superfamily (Figure 5; in the heatmap, pink indicates down-regulation, brown indicates up-regulation, and gray indicates no detectable expression change). GO analysis of differentially expressed proteins revealed four differentially expressed proteins enriched in antioxidant activity (Figure 4A): glutathione reductase, alkyl hydroperoxide-like protein, α/β superfamily hydrolases/acyltransferases, and glutathione disulfide reductases, of which the α/β superfamily hydrolases/acyltransferases were significantly downregulated. The remaining three proteins were significantly upregulated (Table 2).

DEPs related to the cell wall and membrane

Clusters of Orthologous Groups of proteins (COGs) representing major phylogenetic lineages were delineated by comparing protein sequences encoded in complete genomes. The COG annotation of DEPs is shown in Figure 6; among all the differentially expressed proteins, we focused on 111 proteins related to cell wall/membrane/envelope biogenesis.

Capsular polysaccharides (CPS) are the main components of the outer capsule of the bacterial cell wall. CPS is an essential virulence factor in the pathogenesis of S. suis 2, and the synthesis of CPS repeating units involves multiple glycosyltransferases (Zhang et al., 2016). This analysis revealed differential expression of many CPS-related proteins (Table 3).

Peptidoglycan is an important component of the cell wall of Gram-positive bacteria, necessary for the maintenance of cell morphology, size, and osmotic pressure, and for survival (Egan et al., 2020). Among the 111 DEPs annotated to the biosynthesis of the cell wall, cell membrane, and envelope, we found that 14 peptidoglycan synthesis-related proteins were downregulated (Table 4). Therefore, we speculated that AgNPs inhibit the synthesis of peptidoglycan, thus affecting the normal structure of the cell wall and destroying bacterial morphology.

KEGG pathway analysis of DEPs

There were 34 differentially expressed proteins enriched in the quorum sensing pathway, of which 23 were upregulated and 11 were downregulated (Figure 7).
KEGG pathway analysis of DEPs

There were 34 differentially expressed proteins enriched in the quorum sensing pathway, of which 23 were upregulated and 11 were downregulated (Figure 7). The quorum sensing system has a regulatory effect on various life activities of S. suis, and AgNPs interfered with the expression of related proteins in the quorum sensing system, which would affect the density-dependent regulation of the bacterial community. This could potentially be one of the mechanisms through which AgNPs inhibit biofilm formation or eliminate the formed biofilms of S. suis.

DEPs enriched in the cell cycle are shown in Figure 8. Table 5 describes the significantly downregulated cell division-related proteins. These include the chromosome replication initiator protein DnaA, the cell division proteins FtsZ and DivIB, and the ATP-dependent Clp protease ATP-binding subunit. The results showed that AgNPs affected the expression of cell division-related proteins and inhibited the division of S. suis at the initial stage, which may be an important factor affecting bacterial proliferation.

AgNPs cause oxidative stress in Streptococcus suis

H2DCF-DA staining and fluorescence microscopy revealed that, compared with the weak fluorescence of untreated S. suis, the fluorescence intensity of the AgNP-treated bacteria increased with increasing AgNP concentration within 6 h (Figure 9). The bacteria displayed increased fluorescence intensity when AgNPs were added (Figure 10), indicating that AgNPs induced ROS production in a dose-dependent manner.

Discussion

The lack of an effective vaccine to prevent S. suis disease has led to the widespread use of antibiotics worldwide, with the attendant problem of bacterial resistance. AgNPs exhibit antibacterial, antifungal, antiviral, anti-inflammatory, and antiangiogenic properties owing to their unique physical, chemical, and biological properties (Gurunathan et al., 2014). The antibacterial activity of AgNPs is reportedly due to the production of ROS and malondialdehyde and the leakage of proteins and sugars from bacterial cells (Yuan et al., 2017). It is well known that excessive ROS can lead to oxidative stress in cells; relevant evidence indicates that intensified oxidative stress disturbs the energy and protein metabolism of bacteria (Chen et al., 2022; Zhou et al., 2023a).

In our previous study, AgNPs showed significant antibacterial and anti-biofilm effects against S. suis (Liu et al., 2022). In the present study, the mechanism of action of AgNPs against S. suis was explored based on their previously established antibacterial effects. Our results indicate that AgNPs may damage the morphology of multidrug-resistant S. suis and the structure of its biofilm (Figures 1, 2). The cell wall and membrane are important structures for the maintenance of normal bacterial morphology. We found that many proteins related to peptidoglycan synthesis were downregulated, including penicillin-binding proteins (PBPs), glycosyltransferases, and LytR family transcriptional regulators (Table 4). The normal presence of PBPs is necessary for the maintenance of normal bacterial morphology and function (Wei et al., 2003). Therefore, we inferred that the inhibition of PBP expression by AgNPs is an important factor in their antibacterial effect.
Quorum sensing (QS) and capsular polysaccharides have important influences on biofilm formation in S. suis. QS is a microbial cell-to-cell communication process that dynamically regulates various metabolic and physiological activities (Wu et al., 2020). LuxS/AI-2-mediated QS is a key system involved in the formation of biofilms (Wang et al., 2018). Our results identified 34 differentially expressed proteins enriched in the quorum sensing pathway, of which the expression of S-ribosylhomocysteine lyase was significantly upregulated. S-ribosylhomocysteine lyase is regulated by LuxS and is involved in the synthesis of autoinducer 2 (AI-2), which is secreted by bacteria and used to communicate both the cell density and the metabolic potential of the environment. Moreover, upregulation of cps2J may promote the synthesis of capsular polysaccharides (Wang S. et al., 2017), thereby decreasing the formation of S. suis biofilm. These results suggest that AgNPs inhibit the formation of S. suis biofilms by affecting QS and capsular polysaccharide synthesis.

Cell division is an important core process in almost all organisms and is regulated by multiple genes and proteins. Bacterial cell division is coordinated by macromolecular protein complexes. DnaA is the most conserved DNA replication initiation protein; it can initiate chromosome replication and acts as a transcription factor (Menikpurage et al., 2021). FtsZ initiates divisome formation and cytokinesis (Silber et al., 2020). It is an essential cell division protein that forms a contractile ring structure (Z ring) at the site of cell division and is also an important target of antibacterial drugs (Ur et al., 2020). This study found that the expression of the chromosome replication initiation protein DnaA, the cell division initiation proteins FtsZ and DivIB, and the ATP-dependent Clp protease proteolytic subunit was downregulated after AgNP treatment. These proteins are essential for bacterial replication and division; therefore, they could be important factors in the effect of AgNPs on bacterial proliferation.

ROS is an umbrella term for an array of molecular oxygen derivatives that occur as a normal attribute of aerobic life (Sies and Jones, 2020). Most living organisms possess enzymatic defenses (superoxide dismutase [SOD], glutathione peroxidase [GPx], and glutathione reductase [GR]), non-enzymatic antioxidant defenses (glutathione, thioredoxin, vitamin C, vitamin E), and repair systems that protect them against oxidative stress (Wang Y. et al., 2017).
However, excessive ROS causes an imbalance between oxidation and antioxidation, resulting in oxidative stress that damages various cellular components (including proteins, lipids, and DNA) (Schieber and Chandel, 2014; Huang et al., 2021) and ultimately induces bacterial death. Van Acker and Coenye (2017) demonstrated that ROS mediates the bactericidal mechanisms of some antibiotics. ROS levels that exceed the capacity of the cellular antioxidant defense system induce oxidative stress (Niki, 2016). Intensified oxidative stress may also affect the structure and permeability of the cell membrane, subsequently leading to cell death (Zhou et al., 2023b).

Our results revealed that bacterial ROS production was, to some extent, positively correlated with the concentration of added AgNPs. Based on the iTRAQ quantitative proteomic analysis, our results indicate that AgNPs significantly affected the expression of a large number of bacterial proteins, including oxidoreductases. We found that the expression of some oxidoreductases, such as GR, increased significantly, which may have been due to an increase in ROS. We speculate that the action of AgNPs enhances the metabolic activity of the cells, which then produce excessive ROS, and that the increasing ROS induces the continuous expression of antioxidant enzymes to eliminate it. While four antioxidant enzymes were significantly differentially expressed, our results suggest that this is insufficient to effectively eliminate the excessive ROS in the bacterial cells. As a result, the cells experience oxidative stress, leading to oxidative damage and, ultimately, death. These findings are consistent with previous research by Liao et al. (2019) and suggest that excessive oxidative stress is a key factor in bacterial mortality.

In summary, our findings indicate that AgNPs disturb the natural morphology of S. suis and its biofilm. The results obtained through proteomic analysis suggest that AgNPs may negatively impact the cell wall structure by inhibiting the synthesis of peptidoglycan, thus leading to the disruption of bacterial morphology. Additionally, AgNPs can impede bacterial adhesion, interfere with the QS system, and inhibit bacterial growth by targeting the cell division protein FtsZ and the chromosomal replication initiator protein DnaA. The induction of considerable oxidative stress by AgNPs is a significant contributing factor to bacterial death. These findings contribute to a better understanding of the molecular basis of the antibacterial activity of AgNPs, highlighting potential targets for the development of new antimicrobial agents against S. suis infections.

Figure 4B illustrates the upregulation and downregulation of differentially expressed proteins in each classification. There were 196 proteins enriched in the membrane, of which 88 were upregulated and 108 were downregulated.

FIGURE 3 iTRAQ analysis reveals differentially expressed proteins (DEPs) after AgNP treatment. (A) Volcano plot of differentially expressed proteins; (B) CV distribution across replicates. Red dots indicate significantly upregulated proteins that passed the screening threshold; blue dots indicate significantly downregulated proteins that passed the screening threshold; gray dots represent non-significantly differentially expressed proteins.
FIGURE 4 GO functional annotation and enrichment analysis of DEPs between the control and AgNP-treated groups. (A) GO functional annotation. (B) GO enrichment analysis; red: upregulation, blue: downregulation.

FIGURE 5 Heatmap of antioxidant activity proteins between AgNP-treated and control samples; pink indicates downregulation, brown indicates upregulation, and gray indicates no detectable expression change.

FIGURE 6 COG annotation of DEPs. The x-axis displays the COG terms; the y-axis displays the corresponding protein counts, illustrating the number of proteins with different functions.

FIGURE 7 KEGG enrichment analysis of DEPs between the control group and the AgNP-treated group. (A) Enrichment at levels 1 and 2; (B) specific enrichment pathways of DEPs.

TABLE 1 GO enrichment analysis of DEPs.
TABLE 2 Significant differential expression of antioxidases.
TABLE 3 Main DEPs related to polysaccharide biosynthesis.
TABLE 4 Downregulated cell wall peptidoglycan-related proteins.
TABLE 5 Significantly downregulated cell cycle-related proteins.
2023-11-17T16:14:46.354Z
2023-11-15T00:00:00.000
{ "year": 2023, "sha1": "8b39c84d111c24ed33a23f1656ca705c4369e88a", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2023.1293363/pdf?isPublishedV2=False", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "635a681b8e7e165a85116c4661d7faae167e0fe4", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Medicine" ] }
255826964
pes2o/s2orc
v3-fos-license
The emerging role of exosomes in innate immunity, diagnosis and therapy

Exosomes, which are nano-sized transport bio-vehicles, play a pivotal role in maintaining homeostasis by exchanging genetic or metabolic information between different cells. Exosomes can also play a vital role in transferring virulent factors between the host and parasite, thereby regulating host gene expression and the immune interphase. The association of inflammation with disease development and the potential of exosomes to enhance or mitigate inflammatory pathways support the notion that exosomes have the potential to alter the course of a disease. Clinical trials exploring the role of exosomes in cancer, osteoporosis, and renal, neurological, and pulmonary disorders are currently underway. Notably, the information available on the roles of exosomes in immune-related disorders remains elusive and sporadic. In this review, we discuss immune cell-derived exosomes and their application in immunotherapy, including those against autoimmune connective tissue diseases. Further, we have elucidated our views on the major issues in immune-related pathophysiological processes. Therefore, the information presented in this review highlights the role of exosomes as promising strategies and clinical tools for immune regulation.

Introduction

Extracellular vesicles encompass a large heterogeneous family of membrane-bound structures, such as microvesicles, exosomes, and apoptotic bodies. The Heine group initially named all vesicles derived from the plasma membrane "exosomes" (1). Earlier studies by the Turbide group reported that reticulocyte cultures release exosomal vesicles that can be harvested by ultracentrifugation. Further analysis of these vesicles revealed the presence of transferrin receptors, nucleoside transporters, glucose transporters, acetylcholinesterase, and a Na+-independent amino acid transporter (2). Exosomes are capable of transferring miRNAs and mRNAs between cells (3). Almost all cell types release exosomes, with varying molecular compositions and biogenesis pathways. Several bodily fluids, such as saliva, urine, breast milk, amniotic fluid, synovial fluid, serum, and ascites fluid, contain exosomes under physiological and pathological conditions. Exosomes were previously considered cellular waste generated due to cell damage and homeostasis. However, they have been extensively investigated after the antitumor potential of exosomes carrying MHC-I and MHC-II was revealed (4). Exosomes range from 30 to 150 nm in size (5, 6). Later research ascertained their genesis to be from the inward budding of endosomes, producing multivesicular bodies with intraluminal vesicles. Subsequently, proteins are incorporated into the invaginating membrane, and cytosolic components are engulfed and enclosed within intraluminal vesicles. The contents of exosomes can differ depending on the source and physiological conditions of the cells that release them (7). The fusion of multivesicular bodies with the plasma membrane results in the release of intraluminal vesicles into the extracellular space, giving rise to "exosomes" (2, 8, 9). This is primarily an endosomal sorting complex required for transport (ESCRT)-dependent process; however, it reportedly can also occur independently of ESCRT complexes (10, 11). Alternatively, multivesicular bodies may be trafficked to the lysosomes for degradation.
These exosomes are mini versions of their parent cells and have emerged as critical mediators of intercellular communication, functioning as vehicles that deliver a complex cargo of proteins, lipids, and nucleic acids from the parent cells to other, distant cells. Upon reaching recipient cells, they interact either via surface ligands or through the delivery of activated receptors or epigenetic modulation of the recipient cells via their cargo of bioactive molecules, consequently modulating the physiology of recipient cells. Thus, exosomes play distinct roles in a multitude of physiological processes, such as immune response, cell differentiation, signal transduction, and antigen presentation. Various cells secrete exosomes; the cargo is unique to its cellular origin and thus can be used as a biomarker. Their ability to package biomolecules within them has facilitated their use as drug delivery systems (12-15). This review focuses on one of the major physiological roles of exosomes: innate immune regulation.

To maintain body function and homeostasis, protection against foreign agents is essential. This function is effectively performed by the immune system, a complex network of cells and organs that protects the organism against infectious agents, such as bacteria, viruses, and other pathogens. This is achieved through a combination of innate and adaptive immunity. Innate immunity is an antigen-non-specific, evolutionarily conserved system in most multicellular organisms, whereas lymphocytes mediate adaptive immunity through antigen specificity and memory. The innate immune system forms the first line of defense and comprises a network of cells, including monocytes/macrophages, dendritic cells, neutrophils, and natural killer cells, facilitating the earliest interactions between the host and pathogens. Upon entry of a foreign object, pattern-recognition receptors on immune cells, such as Toll-like receptors (TLRs), RIG-I-like receptors, and certain DNA sensors, such as cGAS, recognize various molecular signatures of invaders called pathogen-associated molecular patterns (PAMPs), which include numerous molecules, such as lipopolysaccharide (LPS) from gram-negative bacteria, peptidoglycans from gram-positive bacteria, and unmethylated CpG DNA from bacteria and viruses. Upon recognition of the entry of an invader, cell-cell communication is critical for swiftly spreading the message of infection and enabling the innate immune system to mount a broad response against the pathogen. Until recently, cytokines and chemokines had been the main messengers studied in innate immunity. However, recent research has revealed that exosomes are also vital in this communication (16). This review mainly focuses on the mechanisms by which exosomes mediate the innate immune response of the host to pathogens (viruses, bacteria, and parasites) as well as act against the host's own cells via autoimmunity. We also focus on the use of exosomes as diagnostic biomarkers and therapeutic agents (Figure 1).

Exosomes in pathogen response

Exosomes are released from both pathogens and host cells. They exert stimulatory or suppressive effects on the innate immune system through exosome-mediated intercellular communication. They are crucial in immune regulation, including antigen presentation, immune activation, immune suppression, and immune tolerance.
Their immune-activating or suppressive role depends mainly on the source of the exosomes and their biomolecular content. Exosomes derived from healthy human plasma samples contain various RNA species, such as mRNAs and noncoding regulatory RNAs, within these circulating vesicles (17-19). Pathways related to NF-kB activation and TLR cascades differ between exosomal mRNAs from naïve cells and those from LPS-stimulated cells, indicating significant changes in adaptive and innate immune processes. Exosomes are also carriers of critical soluble mediators such as cytokines. After LPS stimulation, RAW 264.7 mouse macrophages exhibited increased levels of cytokines, predominantly chemokines. Ten of the 16 cytokines secreted by LPS-stimulated RAW 264.7 cells were from cell-derived exosomes (20).

Exosomes in viral infections

Mounting evidence from several viral infections suggests that exosomes can transfer viral components, including proteins, genomic molecules, and receptors, from infected to healthy cells, thus promoting infection and inflammation. The delivery of viral receptors to target cells by exosomes renders the cells susceptible to viral entry. Nef, the HIV protein, is enclosed within exosomes (21, 22); latent cells are activated during the uptake of these exosomes, rendering them susceptible to HIV infection (23). Nef also inhibits the generation of CD4+ exosomes from T cells and induces the death of CD4+ T cells. Consequently, viral recognition by immune cells is suppressed (24-26). Mack et al. demonstrated that the transport of viral receptors such as chemokine receptor 5 (CCR5), the main co-receptor for HIV infection, from peripheral blood mononuclear cells and CCR5+ ovary cells to CCR5-null cells enhances HIV-1 infection (27). Moreover, exosomes produced by megakaryocytes and platelets harbor the HIV co-receptor C-X-C chemokine receptor type 4 (CXCR4) and increase the susceptibility of CXCR4-null cells to X4-HIV infection (28). Exosomes also promote infection by delivering viral nucleic acids to uninfected cells. Exosomes from HIV-infected cells reportedly transmit transactivation response (TAR) elements (29, 30). The TAR element at the 5′ end of HIV transcripts interacts with the Tat protein to produce viral RNAs, consequently generating miRNAs. These miRNAs can inhibit a Bcl-2-interacting protein, which ultimately promotes resistance to apoptosis and promotes virus production (29, 30). Conversely, exosomes from infected cells can suppress viral infection. For instance, human cytidine deaminase apolipoprotein B mRNA editing enzyme, catalytic subunit 3G (APOBEC3G) molecules present in exosomes from infected cells cause the deamination of cytosine residues to uracil in the minus strand of viral DNA during reverse transcription; the viral infectivity factor of HIV type-1, which is essential for efficient viral replication, is not sorted into exosomes (31). Additionally, exosomes produced by infected cells contain cyclic guanosine monophosphate-adenosine monophosphate (cGAMP), which can trigger an antiviral response via innate immune signaling and interferon upregulation (32, 33). HIV-related miRNAs, such as miRNA-88 and miRNA-99, induce endosomal nuclear factor-kappa B (NF-kB) and toll-like receptor 8 (TLR8) signaling, which activates the immune response against HIV via tumor necrosis factor-alpha (TNF-a) production from macrophages (34).
Although many viruses evade the pathogen-sensing pathway, the immune system adopts alternative pathogen-sensing strategies that are not challenged by viral evasion mechanisms. For instance, hepatitis C virus-permissive cells can selectively pack immunostimulatory viral RNA into exosomes and deliver them to neighboring plasmacytoid dendritic cells, triggering an antiviral IFN response in vitro (35). Although knowledge of host-pathogen interactions in novel coronavirus SARS-CoV-2 infection remains limited, growing evidence suggests a role of exosomes in this process. In an in vitro study wherein SARS-CoV was cultured in alveolar epithelial type II cells, viral particles were observed within double-membrane vesicles (36, 37). This immune system evasion could contribute to the reinfection observed in several discharged and fully recovered patients. Exosomes can also transfer angiotensin-converting enzyme 2 (ACE2), the main viral receptor, to recipient cells (38), thereby rendering them susceptible to virus docking. Non-peer-reviewed observations from El-Shennawy et al. reported that exosomal ACE2 competes with cellular ACE2 to neutralize SARS-CoV-2 infection. ACE2-expressing (ACE2+) exosomes blocked the binding of the viral spike (S) protein RBD to ACE2+ cells in a dose-dependent manner. Notably, this was 400-700-fold more effective than vesicle-free recombinant human ACE2 extracellular domain protein (rhACE2). They could also show that ACE2-containing exosomes prevented SARS-CoV-2 pseudotyped virus from attaching to and infecting human host cells with an efficacy 50-150-fold higher than that of rhACE2. Thus, ACE2+ exosomes serve as competitive inhibitors that block SARS-CoV-2 infection, suggesting a promising therapeutic role (39). The contribution of exosomes to COVID-19 infection has already proven to be quite significant, and further research is essential to elucidate the exact mechanisms (Figure 2 and Table 1).

Exosomes in bacterial infections

Exosomes can promote further infection by delivering bacterial molecules involved in pathogenesis. For instance, Staphylococcus aureus-derived exosomes harbor the bacterial pore-forming molecule a-toxin (40), and exosomes from Bacillus anthracis-infected cells have been observed to transport the lethal toxin virulence factor to sites distal to the infection (41). The work described in (41, 42) sheds new light on how exosomes protect host cells by functioning as cellular decoys. The autophagy protein ATG16L1 is required for protection against Staphylococcus aureus, which expresses a-toxin. This pore-forming toxin binds to the metalloprotease, a disintegrin and metalloproteinase domain-containing protein 10 (ADAM10), on the surface of various target cells and tissues. This study confirmed that ATG16L1 and other ATG proteins mediate protection against a-toxin by releasing ADAM10 on exosomes, which act as scavengers that can bind the toxin and improve host survival (Table 1).

Exosomes in protozoan infections

Several protozoan parasites, such as Plasmodium sp., Leishmania sp., and Trypanosoma sp., release exosomes. Malarial disease severity correlates with the production of exosomes from Plasmodium-infected cells (49). Plasmodium falciparum-infected red blood cells communicate with parasites using exosome-like vesicles; this also promotes differentiation into sexual forms (43). In Plasmodium yoelii-infected mice, parasite protein-containing exosomes are released from infected reticulocytes and can induce antigen presentation (44).
Leishmania donovani (in its promastigote and amastigote forms) and other Leishmania species primarily release exosomes that can induce interleukin (IL)-8 secretion from macrophages (45). Subsequently, neutrophils are recruited, and Leishmania can invade these cells and gain access to macrophages during the phagocytosis of infected neutrophils (45, 50). Metacyclic trypomastigote and non-infective (epimastigote) forms of Trypanosoma cruzi parasites release exosomes (46) that contain proteins associated with immune modulation and virulence. Leishmania spp. and Trypanosoma cruzi can also induce exosome release from infected cells. Studies on Leishmania mexicana-treated macrophages in vitro suggested that exosomes released from infected cells can induce phosphorylation of signaling proteins and significantly upregulate immune-related genes, including adenosine receptor 2a (Adora2a), in macrophages (46, 47). Trypanosoma brucei rhodesiense-derived exosomes can transfer serum resistance-associated proteins to Trypanosoma brucei. The parasite requires serum resistance-associated proteins to circumvent the action of host lytic factors, thereby conferring the ability to evade innate immunity (48). Culturing intestinal epithelial cells with Cryptosporidium parvum activates TLR4 signaling, leading to an upregulation of exosome secretion from these cells. This process is mediated by synaptosome-associated protein 23 (SNAP23)-associated vesicular exocytosis. These released exosomes contain epithelial antimicrobial peptides that can bind to and decrease the viability and infectivity of C. parvum sporozoites (51) (Table 1).

FIGURE 2 Roles of exosomes in virus infection. ACE2, angiotensin-converting enzyme 2. Created with BioRender.com.

Exosomes in immune tolerance and autoimmunity

The immune system is vital for the protection of the body and the maintenance of homeostasis. However, this system must be tightly regulated for the normal physiological functioning of the body: to avoid responses to non-threatening and beneficial antigens (for instance, the microbiota), to distinguish self from non-self, and to avoid over-reactivity to antigens (allergy or hyperimmunity). Despite stringent control, the immune system can sometimes react to the body's own cells, tissues, and other components, contributing to autoimmunity. Exosomes also engage in these processes, which are detailed in the subsequent sections.

Common autoimmune thyroid diseases (AITDs), such as Hashimoto's thyroiditis and Graves' disease, are a class of immune disorders caused by irregular infiltration of lymphocytes and overproduction of autoantibodies, resulting in hypothyroidism or hyperthyroidism (52); genetic and environmental factors are acknowledged to contribute to the development of AITDs (53, 54). Irregular activation/production of autoantibodies such as thyroid peroxidase antibodies (TPOAb), thyroglobulin antibodies (TGAb), and thyrotropin receptor antibodies (TRAb) causes thyroid dysfunction, which greatly affects quality of life (54-56). It is therefore clear that the overproduction of autoantibodies, and its governance by regulatory T cells (Tregs) and T-helper-17 (Th17) cells, is central to controlling immune-induced thyroid pathogenesis. The micro-communications between T cells and … However, the specific mechanism of trans-communication and the molecular switch at the receptor level are yet to be unraveled. Graves' disease is the most common cause of hyperthyroidism, manifesting as thyrotoxicosis and Graves' ophthalmopathy, with the highest incidence in women aged 30-50 years (63).
An interesting study by Rossi and others found that dichlorodiphenyltrichloroethane (DDT)-stimulated thyroid follicular cells play an active role in the exosome-mediated transfer of the thyrotropin receptor (TSHR), which is involved in the production of autoantibodies to the TSHR, leading to Graves' disease (64). As mentioned before, epigenetic modification (DNA methylation, histone modifications, and noncoding RNA (ncRNA) interference) also plays a main role in the pathogenesis of Graves' disease (65, 66). Recent studies have revealed that nucleic acids present in exosomes retain their pathological function in recipient cells after direct transfer of exosomal cargo (67), and patients at different stages of Graves' disease showed different noncoding RNAs in exosomes compared with healthy individuals (68). To date, the understanding of exosomes in relation to AITDs is still in its infancy. Consequently, further research is required to elucidate the precise mechanisms of exosomes in AITDs and thereby deliver a deeper understanding of AITDs, their diagnosis, and their prognosis.

Drug-induced liver injury (DILI) is common and can be caused by almost all types of medicines. Recovery from DILI may require discontinuation of medicines, hospitalization, or even liver transplantation (69, 70). Exosomes are attractive therapeutics and drug delivery vehicles; recently, hepatocyte- and MSC-derived exosomes/EVs have been shown to be effective in hepatic regeneration in liver injury models (71-73). Hepatocyte exosomes carry ceramidase and sphingosine kinase 2 (SK2) in their compartment, increasing sphingosine-1-phosphate (S1P) in hepatocytes, which promotes hepatocyte proliferation (71). MSC-EVs are enriched in glutathione peroxidase 1 (GPX1), and knockdown of GPX1 reduced the protective effects of MSC-EVs in a DILI model (74, 75). However, more preclinical experiments are still needed to understand the roles of exosomes in DILI, which could accelerate successful clinical translation of exosome therapy for DILI.

Exosomes in gut immune homeostasis

Gut cells are regularly exposed to foreign antigens, such as food molecules and bacteria. Partially digested and undigested food materials putrefy in the intestine through bacterial action, releasing several secondary metabolites of bacterial origin and further challenging the intestinal immune system. Inducing immune tolerance towards harmless food and bacterial antigens is essential to sustain health without routine inflammatory reactions. Defective immune tolerance can lead to inflammatory and autoimmune diseases. During infection, foreign antigens are presented to T cells by antigen-presenting cells. Dendritic cells play a pivotal role in controlling the effector and regulatory mechanisms of the immune response. A subset of dendritic cells called tolerogenic dendritic cells binds to T cells and suppresses the immune response against harmless food antigens or self-antigens, thus inducing immune tolerance (76). The presentation of antigens by tolerogenic dendritic cells results in the activation and proliferation of regulatory T cells (Tregs), consequently leading to immune tolerance against a specific antigen (77). Furthermore, these semi-mature dendritic cells also induce clonal deletion of T cells and cause T-cell anergy, resulting in immunosuppression-dependent peripheral tolerance towards harmless antigens of the gut. Tregs execute their immunosuppressive functions by secreting cytokines (IL-10 and TGF-b) (Figure 3).
Besides tolerogenic dendritic cells and Tregs, exosomes also play a vital role in immune response regulation. Exosomes secreted from intestinal epithelial cells (IECs) play a critical role in the regulation of intestinal immune homeostasis. IECs divert dendritic cells towards the immune tolerance pathway through TGF-b, retinoic acid, and thymic stromal lymphopoietin (IL-7 family). Exosomes secreted by IECs contain immunoregulatory molecules. During an infection, exosomes loaded with peptide-MHC II are secreted by IECs, taken up by antigen-presenting cells, and induce efficient T-cell activation. This eventually leads to Th1- and Th17-mediated clearing of the foreign antigen/pathogen. IECs exposed to harmless antigens follow an entirely different activation pathway that ensures the suppression of inflammation against such antigens. In vitro cell culture experiments simulating digestion, using cells exposed to ovalbumin and digestive enzymes, indicated that IECs show an increase in integrin avb6 expression, with a corresponding increase in integrin avb6 in their exosomes, when exposed to harmless antigens. The uptake of integrin-loaded exosomes by dendritic cells increases their TGF-b expression and their transformation into tolerogenic dendritic cells. This, along with regulatory T-cell activation, results in the suppression of the immune response against harmless antigens (78). Additionally, Tregs secrete exosomes that elicit immunosuppressive activity (Figure 3) through the transfer of miRNA Let-7d to Th1 cells (79). IECs secrete immunomodulatory exosomes, with increased expression of MHC class II and Fas ligand, into the mesenteric lymph after trauma/hemorrhagic shock. This study also demonstrated that exosomes released after trauma/hemorrhagic shock significantly suppressed lipopolysaccharide-mediated CD80 and CD86 expression on dendritic cells and decreased their antigen-presenting capacity to induce lymphocyte proliferation (80). Exosomes secreted by the intestine after intestinal ischemia/reperfusion activate microglia, increasing the neuronal apoptotic rate and decreasing synaptic stability, thus leading to memory impairment (81). Exosomes produced by gut-tropic T cells regulate T-cell homing to the gut. These exosomes, via integrin a4b7, bind to mucosal addressin cell adhesion molecule 1 (MAdCAM-1) expressed on endothelial venules in the gut, suppressing MAdCAM-1 expression in the small intestine and thereby inhibiting T-cell homing to the gut (82). Human breast milk contains exosomes that decrease the inflammation caused by necrotizing enterocolitis (83). Exosomes engage in activating neuronal cells when intestinal cells are treated and activated with GABA (gamma-aminobutyric acid); microRNAs in the exosomes are responsible for this activation (84). These data collectively indicate a crucial role played by exosomes in the gut and the gut-brain axis.

Exosomes produced by various cells are also involved in healing various gut-related diseases, such as inflammatory bowel disease (IBD), colorectal cancer, and intestinal barrier dysfunction. Proteasome subunit alpha type 7 was abnormally expressed in the salivary exosomes of IBD patients (85).

FIGURE 3 Roles of exosomes in inducing immune tolerance. TGF-b, transforming growth factor beta; DC, dendritic cell; IL-10, interleukin 10; Treg, regulatory T cells; Th1, T helper type 1; and miRNA, micro ribonucleic acid. Created with BioRender.com.
Moreover, exosomes extracted from the saliva of IBD patients contained approximately 2,000 proteins, which were significantly altered compared with those of healthy individuals. Further research will reveal the importance of this variation in IBD pathology. Exosomes produced from Curcuma longa have the potential to inactivate the nuclear NF-kB pathway to ameliorate colitis and promote intestinal wound repair, which in turn alleviates IBD (86). Exosome-like nanoparticles obtained from grapes help induce intestinal stem cells and protect against dextran sulfate sodium (DSS)-induced colitis (87). Another study reported that oxaliplatin resistance in colorectal cancer was reduced by exosomes delivering miR-128-3p, consequently increasing the chemosensitivity of these cells (88). Mesenchymal stem cell (MSC)-derived exosomal miR-34a/c-5p and miR-29b-3p reportedly improve intestinal damage by targeting the Snail/Claudin signaling pathway. These exosomes increase the expression of Claudin-3, Claudin-2, and ZO-1 (89) (Table 2).

Asthma and exosomes

Asthma is a non-contagious disease that can develop owing to allergies. It is characterized by bronchial hyperresponsiveness, mucosal edema, and airflow restriction (94). Asthma progression requires interaction between resident and inflammatory cells (95). Exosomes released from eosinophils and innate immune cells are critical in regulating and enhancing asthma pathophysiology. Pathological changes in asthma are caused by the activation of structural lung cells and airway remodeling; the exosomes involved contain enzymes such as eosinophil cationic protein, eosinophil peroxidase, and major basic protein (96), which cause epithelial damage (a hallmark of asthma). Apart from this, the exosomes also contain molecules such as nitric oxide (NO), lipid mediators, and ROS, which are responsible for inflammation (97). Previous reports confirm (97, 98) a greater abundance of exosomes containing these enzymes in individuals with asthma than in healthy individuals. T lymphocytes, key players in the inflammatory response in asthma, are also known to produce exosomes. The exosomes produced by T cells activate and degranulate mast cells and release cytokines, leading to tissue remodeling and airway hyperresponsiveness (99). Similarly, exosomes released by B lymphocytes transport specific molecules of antigen-presenting cells, enabling these exosomes to present antigen, induce T-cell responses, and release the cytokines IL-5 and IL-13 (100). The expression of several proteins involved in allergic responses increases owing to the transfer of exosomes from airway epithelial cells to human tracheobronchial cells, which aggravates asthmatic symptoms. Extracellular vesicles aid in disease progression by promoting dendritic cell maturation and increasing T helper cell 2 (Th2) proliferation (101). MSC-derived exosomes have disease-alleviating effects, such as decreased production of type-2 innate lymphoid cells, Th2 cytokines, and mucus. In contrast, exosomes derived from bronchoalveolar lavage fluid inhibit specific immunoglobulins such as IgE and IgG1 (102). Another study (103) suggested that asthma induction enhanced the levels of CD63 and acetylcholinesterase activity (an exosome-associated enzyme), indicating an increase in exosome biogenesis and its secretory pathways. In asthmatic tissue samples, IL-4 levels were increased, triggering the central pro-inflammatory responses that regulate eosinophil transendothelial migration and IgE production.
This can further increase mucus secretion and also accelerate Th2 differentiation. In contrast, IL-10 levels were decreased, which inhibited Treg activity (104). TNF-a, a pro-inflammatory cytokine, was also increased in the tissue samples. This correlates with an increase in exosome secretion and its participation in the inflammatory pathways that function in asthma (100).

TABLE 2 Exosome sources, major contents, and effects (references only partially recoverable from the source text).
Sources | Major contents | Effects | References
Intestinal epithelial cells | MHC-II, Fas ligand | Apoptosis of dendritic cells |
Intestinal epithelial cells | MHC-II, Fas ligand | Microglial activation, a decline in synaptic stability, neuronal apoptosis, and cognitive impairment |
Intestinal epithelial cells | Integrin avb6 | TGF-b upregulation in dendritic cells |
NSC-34 motor neurons | miR-124 | Reduced phagocytic abilities and induced senescence in microglial cells |
Serum of patients with polycystic ovary syndrome | miR-424-5p | Inhibited granulosa cell proliferation and induced cell senescence |
Dendritic cells | MHC-I and MHC-II | T-cell-dependent anti-tumor response | (4)

Arthritis and exosomes

Arthritis is the inflammation of the joints caused mainly by immune system dysfunction. It is an umbrella term for several diseases, such as rheumatoid arthritis and osteoarthritis. Osteoarthritis is a degenerative disease that damages the synovial joints of the hands, feet, knees, and hips, which in many instances are disabling (105-108). The different tissues around the joints have various effects on the pathology of osteoarthritis, especially inflammatory cytokines from synovial fibroblasts and inflammatory cells. IL-1b, a pro-inflammatory mediator, and bone-regulatory factors, including BMP-2 (bone morphogenetic protein-2) (67), promote articular cartilage damage and hasten osteoarthritis development by facilitating the release of various proteolytic enzymes (109). Bone homeostasis is balanced by resorption and formation, and exosomes produced by osteoblasts regulate this process. These exosomes contain osteogenic signals, such as molecules of the eukaryotic initiation factor-2 pathway, which support bone formation (110). Exosomes produced by specialized cells, such as mineralized osteoblasts, promote the differentiation of bone marrow stromal cells into osteoblasts (111). MSC-derived exosomes also help maintain bone homeostasis by preventing cells from undergoing apoptosis during stress conditions, such as hypoxia and serum starvation (112). They also assist in healing fractures via the sprouty-related EVH1 domain-containing 1 (SPRED1)/Ras/Erk signaling pathway (113, 114). IL-1b treatment reduced the expression of the anti-inflammatory gene TGF-b; notably, exposure to exosomes derived from bone MSCs recovered TGF-b expression. This phenomenon was further confirmed by the observation that the abnormally high expression of several pro-inflammatory genes, such as NF-kB and TNF-a, can be reduced by co-culture with exosomes (115). By exploiting this function of exosomes, their clinical applications for the treatment of bone disorders are being explored. As the cargo in the exosome is highly dependent on the releasing host cells, changes, if any, in the tissue microenvironment are reflected in the exosomal cargo.

Psoriasis and exosomes

Psoriasis is a non-contagious, immune-mediated disease that causes scaly, raised patches on the skin owing to systemic inflammation. The immunopathology of psoriasis is characterized by an increase in CD4+ and CD8+ T cells, neutrophils, natural killer cells, mast cells, and macrophages (116-118).
Discoveries in this field have shed light on the increased numbers of IL-17-secreting T cells and elevated levels of the Th17-polarizing cytokine IL-23 in psoriatic lesions (119). Furthermore, small extracellular vesicles released from psoriatic keratinocytes transfer miR-381-3p to CD4+ T cells, which induces the polarization of Th1 and Th17 cells and eventually contributes to psoriasis development (120). Patients with generalized pustular psoriasis have a higher neutrophil-to-lymphocyte ratio than healthy individuals. These neutrophils consequently induce a higher expression of inflammatory genes, including IL-1b, IL-18, and TNF-a. This occurs via the increased secretion of exosomes from neutrophils, which are then rapidly internalized by keratinocytes, thus increasing the expression of these inflammatory molecules via activation of the NF-kB and MAPK signaling pathways (121). Langerhans cells, prominently present in the epidermal layer of the skin, play a pivotal role in the pathogenesis of psoriasis. Unlike MHC receptors, which present only peptides, the CD1a receptor of Langerhans cells presents a broad spectrum of lipid antigens to T cells, which amplify inflammation in psoriasis mouse models (122). The T-cell-mediated inflammatory response in the skin is mediated by increased phospholipase A2 activity, and phospholipase A2 levels were reportedly high in patients with psoriasis. IFN-a increases the release of exosomes from mast cells. T cells associated with psoriasis identify exosomes carrying lipid antigens/phospholipases originating from mast cells. This can lead to increased IL-22 and IL-17A production by CD1a-autoreactive T cells (123). Neutrophil extracellular traps (NETs) released during NETosis are a major source of increased IL-17 levels in psoriasis (124, 125). TNF-a, IFN-g, IL-2, IL-6, IL-8, IL-18, and IL-22 levels were higher in patients with psoriasis than in healthy individuals. However, this was not the case for IL-1b, IL-4, IL-10, IL-12, IL-17A, IL-21, and IL-23 (126). Another prominent feature of patients with psoriasis is that the levels of iron, hepcidin, and the total iron-binding capacity of their exosomes were significantly lower, whereas soluble transferrin receptor and heme oxygenase-1 levels were significantly overexpressed (127). Luteolin could be used in treating psoriasis, as it heals skin lesions and alleviates psoriatic symptoms by reducing the effects of IFN-g, inhibits the expression and exosomal secretion of HSP90, and modulates the proportion of T cells in the plasma (128). Furthermore, topical application of MSC exosomes, known for their immunomodulatory properties, reduced IL-17 and the terminal complement activation complex C5b-9 (129). Another study used MSC-derived exosomes to alleviate psoriasis-like skin inflammation via the IL-23/IL-17 axis. Exposure to exosomes reduces the levels of STAT3/p-STAT3, IL-17, IL-23, and CCL20, suggesting that exosomes are a potential therapeutic candidate (130).

Diabetes and exosomes

However, studies have reported that it does not affect the function of pancreatic glucagon-producing a-cells. The intracellular b-cell autoantigens in type 1 diabetes mellitus, namely GAD65, IA-2, and proinsulin, present in exosomes are taken up by dendritic cells, which are consequently activated. Thus, the exosomal release of intracellular autoantigens and immunostimulatory chaperones induced by stress may initiate autoimmune responses in type 1 diabetes mellitus (139). Pancreatic b-cells transfer secretory vesicles to phagocytic cells for presentation to T cells.
The criterion for transfer requires the positioning of beta cells in close interaction with phagocytes under low-glucose conditions. This transfer is promoted by increased glucose concentration, as it increases cytosolic Ca2+ levels. Secretory vesicles from the pancreas transfer two sets of vesicles, one containing insulin and another containing its catabolites (140), eventually facilitating their access to the immune system. Exosomes and exosome-like vesicles are used to diagnose and treat insulin resistance in type 2 diabetes mellitus (141). Exosome-loaded immunomodulatory biomaterials have been used to alleviate the immune response in immunocompetent diabetic mice after islet xenotransplantation (141, 142). However, the comprehensive characterization of exosome-like vesicles is challenging, and their effects are not specific, which can lead to undesired consequences (141).

Roles of exosomes in organ transplant

For end-stage organ diseases, transplantation becomes the only therapeutic option left to patients (143). Regardless of advances in medicine and surgery, a substantial portion of patients continue to suffer various post-transplant complications and transplant rejection. In recent years, EVs (including exosomes) have attracted the attention of scientists and surgeons, as exosomes are becoming a potential tool for understanding transplanted organ physiology and disease. Numerous studies have shown that exosomes released by various antigen-presenting cells are involved in the humoral and cellular immune systems (144). The transplanted organ releases exosomes (allograft-exo) bearing donor major histocompatibility complex (MHC) molecules, which travel out of the graft to the recipient's lymphoid system (145, 146). These allograft-exo are internalized by recipient antigen-presenting cells (APCs), which leads to the presence of donor MHC molecules on the surface of recipient APCs. This is called the semi-direct pathway of T cell-mediated rejection, occurring without direct contact between donor cells and recipient APCs (146-148). In a skin graft mouse model, donor MHC molecules were transported by EVs to the host's lymphoid organs; dendritic cells/B cells then took up these donor MHC molecules (MHC-I and MHC-II) and presented them to T cells, leading to a direct inflammatory alloresponse in mice (146, 149). In the case of solid organ transplantation, for example heart transplantation, the recipient's blood vessels are connected to the transplant during surgery. As a result, donor passenger leukocytes may move from the graft to the recipient; donor dendritic cells were reported within the spleens of allogeneic heart-transplanted mice (150), and other studies further reported allo-MHC cross-dressed cells in lymphoid organs in vivo after cardiac transplantation (151, 152); allograft-exo may have been involved as well. Some studies show that allogeneic exosomes, under certain circumstances, are involved in tolerance rather than rejection of allografts (153, 154). However, exosome involvement in organ transplantation has still not been elucidated, and extensive studies are needed.

Cross talk on exosomes and cell receptors

Inborn or innate immunity is inherited immunity consisting of cellular and hormonal arms. Intercellular communication during human development is first established through a cellular "language" system, which is mediated in part by exosomes to maintain cellular integrity.
The effective immune response to foreign material, such as pathogen-associated molecular patterns (PAMPs), danger-associated molecular patterns (DAMPs), or lifestyle-associated molecular patterns (LAMPs), is controlled by this exosome communication language. PAMPs and DAMPs are structurally diverse external molecules from pathogenic bacteria, including lipids, proteins, carbohydrates, and material by-products. Mega-receptors on defense cells such as leukocytes mount the first defense mechanism during invasion by bacteria, viruses, parasites, and fungi. Cellular recognition receptors such as Toll-like receptors (TLRs) are expressed in various immune cells, such as leukocytes, macrophages, dendritic cells, B cells, and T cells, and even in non-immune cells such as fibroblasts and epithelial cells (155). TLRs first bind to PAMPs and DAMPs, thereby igniting cell signaling cascades as a combat mechanism. Under sterile inflammation conditions, parenchymal and stromal cells communicate via extracellular vesicles through the innate system. For example, during noxious gas inhalation, lung epithelial cells produce more exosomes with excess loading of microRNA-17/221 (miRNA-17/221), thereby promoting local inflammation through the recruitment of M0 macrophages (156). The type 1 transmembrane protein TLR has two domains, a leucine-rich repeat (LRR) motif and a cytoplasmic Toll/IL-1R (TIR) domain, in all immune cells, with different subtypes: TLR-1, 2, 4, 5, and 6 as cell surface members and TLR-3, 7, 8, and 9 as intracellular components (157). Among these, TLR-2 recognizes pathogen metabolites such as lipoproteins/lipopeptides, peptidoglycan, glycosylphosphatidylinositol, phenol-soluble modulin, zymosan, and glycolipids (158). Double-stranded RNA (dsRNA) of viruses is recognized by TLR-3, while TLR-7 perceives GU-rich single-stranded RNA (ssRNA) and TLR-5 spots bacterial flagellin (159-161). In tumor development and the subsequent metastatic microenvironment, a series of coordinated events takes place between exosomes and TLRs, and this interplay plays an important role in the cancer development process. In the cancer microenvironment, TLR-8 plays an important role as exosomes transmit extracellular miRNAs (EC-miRNAs) such as miRNA-21 and miRNA-29a, which have the highest binding capacity for TLR-8; this activates the receptor in immune cells and subsequently leads to pre-metastatic inflammatory cytokine production (162). Despite the evidence that exosomes play an important role in intercellular communication, their production and transport rely on the physiological conditions of the donor cells and the receptor activation of the receiver cells. TLRs are known to be important factors in acute kidney diseases such as urinary tract infections, ischemia-reperfusion (IR) injury, lupus nephritis, and diabetic nephropathy (163). The micromanagement of exosomes in renal physiology has been studied under both in vitro and in vivo conditions. Miyazawa et al. first reported the role of exosome-mediated aquaporin 1 and 2 (AQP-1, AQP-2) transport in collecting duct cells (164). Lipopolysaccharide (LPS)-induced renal tubular epithelial cells showed elevated levels of exosomal miRNA-19b-3p, thereby inducing macrophage-dependent tubulointerstitial inflammation (165). Extensive investigation of exosome-mediated urolithiasis (kidney stone disease) remains lacking; indeed, decreased levels of miRNA-21 and Let-7 were noticed in lupus nephritis patients (166).
However, genetic manipulation of specific transcriptomes, aided by exosome engineering, would shed better light on kidney stone disease (167).

Exosomes in hematological complications

Exosomes play a potential role in immunomodulation disorders in blood cancers and in the induction of hematological diseases such as polycythemia vera (PV), essential thrombocythemia (ET), and myelofibrosis (MF) (168). Chronic myeloid leukemia (CML) cell (K562)-derived exosomes promoted angiogenesis, which was inhibited by dasatinib through inhibition of exosome release from K562 CML cells and their microenvironments (169). This in vitro study strongly supports the hypothesis that the microenvironment driven by K562 CML cell-derived exosomes governs the clinical manifestation. These exosomes also contain angiogenic miRNAs, such as miR-92, which help tumor cell migration under hypoxic conditions (170). Disturbed HIF-1 pathways, with upstream and downstream gene regulation by miR-155, miR-210, and miR-135b, also play an important role in the migration of CML cells by suppressing or activating factor inhibiting hypoxia-inducible factor 1 (FIH-1). Taken altogether, cumulative factors such as EVs, growth factors, cytokines, and miRNAs play a fascinating role in leukemogenesis, and cross-talk between various cell populations via each of the indicated components promotes the formation of different types of hematological malignancies, including acute myeloid leukemia (AML), acute lymphoblastic leukemia (ALL), chronic lymphocytic leukemia (CLL), chronic myeloid leukemia (CML), lymphoma, and multiple myeloma (MM) (171).

Autophagy and exosomes relationship in cancer

The body uses autophagy to eliminate unhealthy cells so that it can replace them with new, healthier ones. Under normal conditions, long-lived proteins and old organelles are degraded by autophagy for the restoration of cellular contents. Under stressful conditions, for example hypoxic (low oxygen) conditions, autophagy is activated to recycle molecules, providing energy and nutrients (172-174). Studies on exosomes have enlightened our understanding of their biological functions, and recently several studies have shown that cancer exosomes play a great role in tumor progression and metastasis (175, 176). Studies have reported that knockdown of both autophagy-related 16-like 1 and autophagy-related 5 in breast cancer cells reduces exosome production and release, which decreases tumor metastasis (177, 178). G alpha-interacting protein (GAIP) and GAIP-interacting protein C-terminus have been shown to be involved in stimulating exosome biogenesis and autophagy flux in pancreatic tumor cells (179, 180). Inhibition of the autophagy-related 12-autophagy-related 3 complex, which plays an important role in the late step of autophagosome formation, reduces exosome biogenesis by disrupting late endosome trafficking into the multivesicular body (MVB) through its interaction with Alix (172, 181). Several studies have well documented that autophagy and exosomes play important roles in tumor progression (172, 182, 183). Autophagy and exosome release are robustly activated in tumors, strongly suggesting that these two pathways interact and form part of the hallmarks of cancer cells. Recently, scientists have been focusing on inhibiting tumor progression and metastasis by targeting autophagy and exosome biogenesis.
A study reported that sulfisoxazole targets endothelin receptor A, which leads to the inhibition of exosome biogenesis through increased degradation of MVBs via the autophagy-lysosome pathway (184). Another study suggested that an inhibitor of ULKs (serine/threonine protein kinases) leads to the accumulation of immature early autophagosomal structures (185); ULK1 is involved in the trafficking of autophagy-related 9, a main player in intraluminal vesicle formation within amphisomes and autolysosomes (186, 187). Likewise, several other inhibitors have been studied for inhibiting exosome biogenesis (185, 188). Still, several preclinical and clinical studies are required to understand the roles of cancer exosomes and autophagy and their targeting in cancer treatment.

Exosomes in diagnostics

Exosomes are released by cells, and their components provide a brief story about the microenvironment in and around the cell. These could be signals, exosomes presenting antigens, or even cancer-promoting factors. This property has previously been exploited for diagnostic purposes. The increased and highly stable expression of exosomes in patients with cancer renders them an up-and-coming player for cancer diagnosis. Various groups have used exosomes present in the plasma for diagnostic purposes. Different immunological markers of exosomes have been characterized in patients with chronic lung allograft dysfunction. Lung transplant recipients who displayed both the phenotypes of obstructive bronchiolitis obliterans syndrome and restrictive allograft syndrome had exosomes with distinct molecular and immunological profiles. Upon further testing, restrictive allograft syndrome samples were observed to have a higher concentration of pro-inflammatory factors, indicating severe allograft injury (189). Exosomes have emerged as prominent biomarkers for cancer diagnoses. Another group investigated the prominence of plasma-derived exosomal miR-19b in the diagnosis of pancreatic cancer. The results suggested that the levels of Exo-miR-19b, normalized using miR-1228, were comparatively lower in patients with pancreatic cancer than in healthy individuals. This indicates that exosomes are promising candidates as biomarkers (190). This property of exosomes is utilized to diagnose various types of cancers, such as non-small cell lung cancer (NSCLC), prostate cancer, osteosarcoma, and cervical cancer. Lower expression of circulating miR-651 was observed in patients with cancer than in healthy individuals. Moreover, exosomes collected from HeLa cells were rich in CD63, CD9, and CD81 proteins, which are cancer cell hallmarks (191). Long non-coding RNA (lncRNA) expression in the urinary exosomes of NSCLC patients and healthy individuals has been explored as a potential lung cancer diagnostic. The results indicated that differential lncRNAs in urinary exosomes are NSCLC biomarkers, because these lncRNAs are enriched in specific pathways that might be involved in tumor cell proliferation and other processes associated with NSCLC pathogenesis (192). Liquid biopsy of exosomes isolated from patients with prostate cancer revealed that exosomes are enriched for genes that are hallmarks of prostate cancer, such as androgen receptor, kallikreins (KLK2), cyclin-dependent kinase inhibitor 1A (CDKN1A), KLK10, JUN, and B2M (b2-microglobulin). These observations indicate that exosomes transport critical disease RNA transcripts and can be used as non-invasive diagnostic biomarkers (193).
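Several of the exosomal miRNA readouts above, such as Exo-miR-19b normalized to miR-1228, rest on standard reference-normalized relative quantification (2^-dCt). The minimal Python sketch below uses hypothetical Ct values to show the arithmetic; it illustrates the general method, not the cited study's exact protocol.

# Minimal sketch of reference-normalized exosomal miRNA quantification (2^-dCt),
# of the kind used for Exo-miR-19b normalized to miR-1228. Ct values are hypothetical.
def relative_level(ct_target: float, ct_reference: float) -> float:
    """Relative abundance of a target miRNA, normalized to a reference miRNA."""
    return 2.0 ** -(ct_target - ct_reference)

patient = relative_level(ct_target=29.8, ct_reference=24.1)   # hypothetical patient sample
healthy = relative_level(ct_target=27.6, ct_reference=24.0)   # hypothetical control sample
print(f"patient/healthy ratio: {patient / healthy:.2f}")      # <1 suggests lower Exo-miR-19b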
Exosomes from patients with prostate cancer contain higher amounts of survivin, an apoptosis inhibitor, which is another biomarker for the early detection of prostate cancer (194). These examples suggest that exosomes are an untapped niche with real potential for early cancer diagnosis. One of the first demonstrations of whole-body exosome biodistribution used in vivo positron emission tomography (PET) for non-invasive monitoring of copper-64 (64Cu)-radiolabeled, polyethylene glycol (PEG)-modified exosomes, achieving high imaging quality and quantitative measurements of blood residence and tumor retention. A fluorescent dye, amine-reactive Alexa Fluor 488 (NHS-Alexa Fluor 488), was conjugated to exosomes to form Alexa Fluor 488-exosome and Alexa Fluor 488-exosome-PEG. PEGylation confers an enhanced pharmacokinetic profile and higher tumor accumulation compared with native exosomes. It also reduces premature hepatic sequestration and clearance of exosomes, highlighting improved therapeutic delivery efficacy and safety (195). Notably, this study provides crucial guidelines for obtaining precise and quantitative information on the biodistribution of exosomes via surface engineering, radiochemistry, and molecular imaging, which may aid future exosome research. A technique was also devised for in vivo neuroimaging and exosome tracking using gold nanoparticle labeling (196). Exosomes can be directly labeled with glucose-coated gold nanoparticles (GNPs) without the need to label parent cells, and this labeling occurs through an active mechanism linked to GLUT-1 (glucose transporter 1). Intranasal delivery is more effective than intravenous injection for brain accumulation; the noninvasive intranasal route has multiple benefits and is a potentially effective treatment option for various CNS disorders. In contrast to the non-lesioned brain, detection and accumulation of mesenchymal stem cell (MSC)-derived, GNP-labeled exosomes were observed in the stroke region of the brain up to 24 h after administration. Multifunctional exosomes for cancer theranostics were developed by electroporating urinary exosomes with ultra-small gold nanoparticles and Ce6 (chlorin e6) to enable real-time fluorescence imaging and improved photodynamic therapy (197). The exosome-cloaked nanocomposites exhibited enhanced long-term retention, biocompatibility, and penetrating behavior compared with free Ce6. A comprehensive review of the theranostic applications of exosomes in cancer has been provided by Ailuno et al. (198). Circulating tumor exosomes carrying specific biomolecules have recently been used for detecting cancers and assessing therapy response, and they are becoming more common as diagnostic targets (199-201). An ExoChip (bearing an anti-CD63 antibody), a microfluidic device constructed to capture and collect specific exosomes, showed that exosome levels were significantly higher in individuals with cancerous diseases than in healthy controls (202). Liquid biopsy based on exosomal IGF-1R expression has been used in individuals with lung cancer instead of invasive traditional tissue biopsies; this was achieved with a microfluidic exosome analysis platform (203). A microfluidic chip has also been used to evaluate circulating EpCAM-positive exosomes among plasma exosomes; EpCAM-positive exosome levels were higher in individuals with breast cancer than in healthy individuals (204).
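The group-level comparisons described above (for example, EpCAM-positive exosome counts in patients versus controls) ultimately reduce to a two-sample test. The sketch below is a minimal illustration using hypothetical particle concentrations and SciPy's Welch t-test; it is not an analysis of the cited datasets, and the values are invented for demonstration.

```python
from scipy import stats

# Hypothetical captured-exosome concentrations (particles/mL);
# illustrative values, not data from the cited studies.
cancer  = [8.1e9, 9.4e9, 7.7e9, 1.02e10, 8.8e9]
healthy = [2.3e9, 3.1e9, 2.8e9, 2.0e9, 2.6e9]

# Welch's t-test (no equal-variance assumption) for two independent groups.
t, p = stats.ttest_ind(cancer, healthy, equal_var=False)
print(f"Welch t = {t:.2f}, p = {p:.4f}")
```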
These and several other studies suggest that microfluidics-based exosome isolation and detection methods are more reliable (205), but further advances are still needed for routine clinical translation.

Exosomes in therapeutics

The role of exosomes for therapeutic purposes requires further elucidation; nonetheless, their potential has been well established. Exosomes did not exhibit toxicity upon injection, and these nano-sized membrane-bound vesicles were well tolerated by the body. These properties, together with the fact that they do not readily undergo degradation, make them excellent delivery candidates. Both targeted and non-targeted exosome deliveries successfully alter protein expression in cancer cells (206). There are several methods for isolating and purifying exosomes. Depending on the yield and purity required, acoustic nanofilters (207), the ExoSearch chip (208), immunoprecipitation (209), density gradients (210), precipitation kits, and ultracentrifugation are in use. A recent study demonstrated a separation method based on acoustofluidics, integrating acoustic and microfluidic approaches to isolate exosomes directly from blood. This approach combines cell removal with exosome isolation; by integrating these modules into a single chip, exosomes were isolated with a blood cell removal rate of over 99.999% (211). Another study described a novel approach using microfluidic devices to isolate exosomes from whole blood, directly facilitating translation to the clinic (212). The choice of device has a significant impact on the yield and purity of exosomes from both cell lines and complex biological fluids. The isolation of exosomes can be validated using nanoparticle tracking analysis, transmission electron microscopy, or western blotting. The rapid development of different approaches for exosome isolation and purification has made these procedures more feasible for early diagnosis and therapeutics. Off-target binding of drugs leads to several adverse effects (213), and several research groups are therefore investigating targeted drug delivery to obtain high efficacy and low toxicity. Exosomes play a significant role in the targeted delivery of drugs and drug-like molecules. For example, paclitaxel (PTX)-loaded AS1411-chol exosomes (AS1411: a nucleolin-targeting aptamer) were delivered to target cancer cells with high efficiency in a recent study (214). Surface engineering, genetic engineering, chemical modification, and membrane fusion are some approaches adopted for the targeted delivery of exosomes (12). By using targeted surface engineering and increasing exosome concentration at disease sites, this approach can reduce toxicity and increase therapeutic efficacy (215). In genetic engineering, ligands or homing peptides can be fused with TMP (5,10,15,20-tetrakis(1-methyl-4-pyridinio)), which is expressed on the surface of exosomes (216). In chemical modification, one functional group can be modified by another; for example, alkynes can be modified by an amine group. With a membrane fusion approach, the exosomal lipid bilayer membrane can spontaneously fuse with other membranes. Targeting exosomes to deliver chemotherapeutics, such as doxorubicin, to breast cancer tumor tissues in mice has been evaluated. The results were substantial, as the treatment caused enhanced and rapid tumor regression without toxicity compared with standard therapy (217).
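As a small illustration of the validation step mentioned above, the following sketch gates nanoparticle tracking analysis (NTA) diameters to a nominal exosome size window. Both the window (roughly 30-150 nm is a common convention) and the diameters are assumptions for demonstration, not values from the cited isolation studies.

```python
def exosome_fraction(diameters_nm, lo=30.0, hi=150.0):
    """Fraction of tracked particles falling inside a nominal exosome
    size window; lo/hi are conventional bounds, not study-specific."""
    in_window = [d for d in diameters_nm if lo <= d <= hi]
    return len(in_window) / len(diameters_nm)

# Hypothetical NTA particle diameters (nm), for illustration only.
nta = [45, 72, 95, 110, 130, 160, 240, 88, 101, 55]
print(f"exosome-sized fraction: {exosome_fraction(nta):.0%}")
```

A preparation dominated by particles outside this window would suggest contamination by larger vesicles or aggregates, which is exactly what the orthogonal checks (electron microscopy, western blotting for exosomal markers) are meant to catch.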
Another study successfully used exosomes as a delivery vector to transport PLK-1 siRNA to bladder cancer cells in vitro, resulting in selective gene silencing of PLK-1 (218). ACTN4 is highly expressed in exosomes from patients with castration-resistant prostate cancer (CRPC); RNA interference-mediated ACTN4 downregulation significantly attenuates cell proliferation and tumor invasion, thereby confirming its role in prostate cancer development (219). Exosomes produced by adipose-derived stromal cells (ASCs) can also be used in prostate cancer therapy: ASC-derived exosomal miR-145 could reduce Bcl-xL activity and promote prostate cancer cell apoptosis via the caspase-3/7 pathway (220). Phosphatidylserine (PS) expressed on the surface of exosomes is linked to T-cell immune suppression. Recently, a novel PS-binding molecule, ExoBlock, was developed that successfully blocked the immunosuppressive activity of human ovarian tumor- and melanoma-associated exosomes. Upon treatment with ExoBlock, T-cell-mediated tumor suppression was significantly enhanced, along with an increase in the number and function of CD4 and CD8 T-cells, which are involved in reducing tumor metastasis, thus providing a promising antitumor therapy (221). Exosomes isolated from human umbilical cord MSCs have been loaded with miR-3182 by transfection. Exosomal miR-3182 significantly reduced cell migration and proliferation in triple-negative breast cancer (TNBC) cells. It was also observed that miR-3182-loaded exosomes induced apoptosis in TNBC cells by downregulating mTOR and S6KB1 expression; this treatment can therefore reduce the invasiveness of TNBC cells (222). Exosome therapy has also been used to treat other diseases. In psoriasis, phospholipases on exosomes are associated with disease progression, and inhibition of phospholipases suppresses psoriasis progression (223). Inflammatory cytokines in the IL-23/Th17 axis and TNF-alpha signaling can be targeted, but such treatments can cause toxic side effects over an extended period (224). Alternatively, exosomes derived from MSCs have been used as potential treatments. This was tested in a mouse skin psoriasis model: the levels of the pathology-associated IL-17, IL-23, and the terminal complement complex C5b-9 were reduced in the skin of mice treated with exosomes relative to controls (129). Exosomes derived from adipose-derived MSCs (ADSCs) have therapeutic applications in hepatic ischemia/reperfusion (I/R) injury. ADSC-derived exosomes improved liver function by maintaining mitochondrial homeostasis, inhibiting mitochondrial fission and promoting mitochondrial fusion and biogenesis. This may be attributed to the exosome-induced increase in the expression of mitochondrial fusion proteins such as OPA-1, MFN-1, and MFN-2; conversely, the exosomes decreased mRNA and protein expression of the fission-related factors DRP-1 and Fis-1. Additionally, exosomes significantly increase the expression of PGC-1alpha, NRF-1, and TFAM genes and proteins related to mitochondrial biogenesis (225). Another study in hindlimb ischemia animal models reported that MSC-derived extracellular vesicles activated VEGF receptors in endothelial cells, increased neovascularization at the site of ischemia, and accelerated recovery (226). In an associated study, extracellular vesicles derived from RAW 264.7 macrophages could also induce angiogenesis in vitro and in vivo (227). Exosomes derived from MSCs can target endogenous MSCs to enhance osteogenic differentiation and reduce adipogenic differentiation, which is a promising therapy for osteoporosis (228).
Other possible treatment methods could be effective in initiating bone repair, regulating the immune response, and preventing bone resorption. Similarly, MSC-derived exosomes have been used to treat inflammatory conditions such as neurological disorders. This is accomplished using mRNAs, miRNAs, and immunosuppressive factors sourced from MSCs (229). Exosomes produced by IFN-gamma-stimulated MSCs reduced demyelination, decreased neuroinflammation, and increased the number of CD4+ CD25+ FOXP3+ regulatory T cells within the spinal cord. Moreover, these exosomes reduce the levels of pro-inflammatory Th1 and Th17 cytokines; hence, exosomes can also serve as therapeutics for neurodegenerative disorders (230). Another promising study in this direction is from the Shetty group, which demonstrated that extracellular vesicles isolated from induced pluripotent stem cells effectively induce neurogenesis and treat neurodegenerative disorders in animal models (231). A hallmark of patients with pulmonary embolism is that pulmonary epithelial cells (PECs) are resistant to apoptosis. Mao et al. delivered exosomal miR-28-3p, which upregulated miR-28-3p expression in PECs and increased apoptosis by targeting apoptosis inhibitor 5 (API5) (229). Milk-derived exosomes (mExosomes) are one of the most economical and promising drug delivery systems. Milk is an easily accessible raw material that can be prepared in substantially large amounts. A uniform particle size of ~100 nm, a unique phospholipid layer (232), low immunogenicity, and a low inflammatory response (233,234) make mExosomes an attractive research topic and an ideal vehicle for drug delivery in the future. Drug-encapsulated mExosomes demonstrated in vitro growth-inhibitory action in breast (T47D and MDA-MB-231) and human lung (A549 and H1299) cancer cell lines (235). Animal studies of anti-tumor activity in mice bearing A549 lung tumor xenografts showed low cytotoxicity and high bioavailability of mExosomes in the organs of the experimental model. A similar study reported that curcumin-loaded mExosomes exhibit enhanced anti-inflammatory and antitumor activities, and also revealed that curcumin-loaded mExosomes have higher bioavailability than free curcumin (236). RNA-based therapy has recently gained attention, but the challenge lies in delivering highly unstable RNA molecules to biological systems. Gene expression studies have demonstrated the compatibility of mExosomes with delivering miR-148a-3p to hepatic (HepG2) and intestinal (Caco-2) cell lines (236), suggesting that mExosomes can be utilized as carriers of functional microRNAs. In a recent study (237), the mucus penetrability of siRNA-loaded exosomes was enhanced by adding a hydrophilic PEG surface coating; PEGylated mExosomes exhibit improved penetrability and stability in an acidic gut environment. Bovine mExosomes are also reportedly considered therapeutic agents against arthritis. Studies have demonstrated that spontaneous polyarthritis in IL-1Ra-deficient mice and collagen-induced arthritis are diminished by oral administration of mExosomes (238). However, the mechanism underlying the delayed disease onset remains unknown; bovine mExosomes containing immunoregulatory miRNAs and anti-inflammatory proteins might have targeted inflammatory pathways. These are just a few examples of using exosomes for therapeutic purposes; further discoveries and breakthroughs have yet to be made.
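Relative bioavailability claims like the curcumin result above are typically quantified as the ratio of areas under the plasma concentration-time curves (AUC). The sketch below shows a minimal version of that calculation; all concentration values are hypothetical and are not taken from the cited studies.

```python
import numpy as np

# Hypothetical plasma concentration-time courses (illustrative only).
t_h    = np.array([0, 0.5, 1, 2, 4, 8, 12])      # sampling times, hours
c_free = np.array([0, 40, 35, 20, 8, 2, 0.5])    # ng/mL, free drug
c_mexo = np.array([0, 55, 60, 48, 30, 12, 4])    # ng/mL, mExosome-loaded

# Trapezoidal AUC over the sampled interval for each formulation.
auc_free = np.trapz(c_free, t_h)
auc_mexo = np.trapz(c_mexo, t_h)
print(f"relative bioavailability (AUC ratio): {auc_mexo / auc_free:.2f}x")
```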
Conclusion

Research on exosomes suggests that they are representative of the cellular condition and play a crucial role in upregulating, stabilizing, or downregulating that condition. Exosomes secreted from a diseased cell support disease progression, for example by promoting angiogenesis in cancer or by enhancing receptor expression to support viral infection or inflammation. Exosomes secreted from stem cells reflect their purpose of supporting regeneration, healing, and growth. This indicates that exosomes could be a next-generation technology that receives large-scale public acceptance, owing to their success in non-invasive diagnostics and in non-toxic, target-specific drug delivery techniques used in clinical trials.

Author contributions

PG, HM and B-CA contributed to the conception, writing, and discussion of this manuscript. All authors wrote the initial draft of the manuscript. All authors contributed to the critical conclusion of the manuscript. All authors contributed to the article and approved the submitted version.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Apoptosis, autophagy and unfolded protein response pathways in Arbovirus replication and pathogenesis

Arboviruses are pathogens that widely affect the health of people in different communities around the world. Recently, a few successful approaches toward production of effective vaccines against some of these pathogens have been developed, but treatment and prevention of the resulting diseases remain a major health and research concern. The arbovirus infection and replication processes are complex, and many factors are involved in their regulation. Apoptosis, autophagy and the unfolded protein response (UPR) are three mechanisms that are involved in the pathogenesis of many viruses. In this review, we focus on the importance of these pathways in the arbovirus replication and infection processes. We provide a brief introduction on how apoptosis, autophagy and the UPR are initiated and regulated, and then discuss the involvement of these pathways in the regulation of arbovirus pathogenesis.

There are currently 534 viruses listed in the International Catalogue of Arboviruses, of which 214 are known to be, or are probably, associated with arthropods; 287 viruses are reported to be possible arboviruses; and 33 are considered to probably not be, or definitely not be, arboviruses. In total, 134 of the 534 arboviruses have been reported to cause illness in humans (Refs 7, 8). Arboviruses have a global distribution, but the majority circulate in tropical areas where climatic conditions are favourable for year-round transmission. Arboviruses usually circulate within enzootic cycles involving wild or domestic animals, with relatively few human infections (Ref. 9). Birds and rodents are the main reservoir hosts, and mosquitoes and ticks are most often the vectors for the most important arboviruses (Table 1). 'Spill-over' of arboviruses from enzootic cycles to humans by enzootic or 'bridge' vectors can occur under the appropriate ecological conditions. For most arboviruses, humans are dead-end or incidental hosts; however, there are several viruses, such as dengue, yellow fever and chikungunya, that primarily infect people during outbreaks and then begin to use humans as amplification sources (Ref. 9). Figure 1 illustrates the various mechanisms by which humans are infected by zoonotic and non-zoonotic arboviruses (Ref. 10). Arboviruses have been causing human disease for at least a thousand years, but during recent decades some have newly emerged or re-emerged, and a few have increased in importance because of human population expansion, increased urbanization, increased trade or travel, and global climate change (Refs 2, 9, 36). Arthropod-borne viruses have been a serious public health concern, with viruses such as dengue (DEN) and yellow fever viruses causing millions of infections annually, while emerging arboviruses such as West Nile, Japanese encephalitis (JE) and Chikungunya viruses (CHIKV) have significantly increased their geographical ranges in recent years (Refs 9, 37, 38, 39). From a public health point of view, those arboviruses that produce viremia in humans and cause major mosquito-borne epidemics are most important (Ref. 40). Figure 2 shows the worldwide geographical distribution of the most important vector-borne arboviruses. In the following section we discuss some of the most common arbovirus-induced diseases.
Common arbovirus-induced diseases

Dengue/dengue haemorrhagic fever

The dengue viruses (DENV) are the only arboviruses that are fully adapted to the human host and its environment, thus eliminating the need for an enzootic transmission cycle (Refs 52, 53). Consequently, in recent years, transmission has increased in urban and semi-urban areas and has become a major international public health concern (Refs 54, 55). DEN is now endemic in more than 100 countries in Africa, the […]

FIGURE 1. Routes of transmission and human exposure to zoonotic arboviruses. Infectious agents may be transmitted to humans by direct contact with infected animals, mechanical vectors or intermediate hosts. Arboviruses are maintained in mosquito-monkey, mosquito-rodent, mosquito-bird, mosquito-pig, mosquito-horse and mosquito-human cycles. The enzootic cycle occurs in regions where humans intrude into the natural foci of infection. The rural epizootic cycle involves domestic animals and mosquitoes and is amplified in the presence of intermediate hosts, which represent a large reservoir of viruses and a severe spillover risk to dead-end hosts. In urban settings, viruses are transmitted between humans and the mosquito vectors in an urban epidemic cycle, using humans for amplification (Ref. 10).

Yellow fever

[…] There are no specific anti-viral treatments for Yellow fever, and the primary interventions are supportive care. Vaccination is the most important strategy to prevent Yellow fever; the current vaccine is highly effective and provides immunity within 30 days.

Viruses and autophagy, apoptosis and unfolded protein response (UPR)

Many viruses hijack host cell responses for their own benefit and use them as complementary mechanisms for replication and infection. Some of the most important host mechanisms affected by viral infection are the pathways involved in cell death and in cellular responses to environmental stress. These mechanisms include apoptosis (programmed cell death I), autophagy (programmed cell death II) and the UPR. They play essential roles in regulating cell fate and are important for normal cellular function. In addition, these mechanisms are tightly regulated and can affect each other; they are usually interconnected and 'cross-talk' with one another. We will briefly review the general concepts of apoptosis, autophagy and the UPR and explain their cross-talk and regulatory mechanisms. We will then focus on the role of apoptosis, autophagy and the UPR in arbovirus replication and infection, and describe possible therapeutic approaches for arboviruses by discussing how the involvement of these pathways may determine therapeutic strategies.

An overview of autophagy, apoptosis and UPR

[…] (Refs 110, 111). However, macroautophagy is the major regulated catabolic mechanism by which the bulk of damaged cytoplasmic proteins and organelles are sequestered within an autophagosome (Refs 112, 113, 114). In this review, we will focus on macroautophagy (referred to herein as 'autophagy'). The first step in autophagy (see Fig. 3) involves the formation and expansion of a double-membrane structure called the 'isolation membrane' or 'phagophore'. The edges of this membrane eventually fuse to form a new double membrane-bound vacuole, known as the autophagosome, that sequesters the cytoplasmic cargo. The autophagolysosome is formed by fusion of the autophagosome with a lysosome, and the lysosomal contents are degraded by hydrolytic enzymes (Refs 115, 116, 117, 118).
As a result of degradation, nucleotides, amino acids and free fatty acids (FFAs) are generated and then reused for energy metabolism, macromolecular production and biosynthesis (Refs 119, 120). The different steps in macroautophagy are thought to be mediated by autophagy-related genes (ATG), which encode the proteins involved in autophagy (Refs 121, 122). These proteins have been classified into five functional categories: (i) a protein serine/threonine kinase complex that responds to upstream events such as target of rapamycin (TOR) kinase (Atg1/ULK1, Atg13 and Atg17); (ii) a lipid kinase group that controls vesicle nucleation (Atg6/Beclin1, Atg14, Vps34/PI3KC3 and Vps15); (iii) two ubiquitin-like conjugation pathways that stimulate vesicle expansion (the Atg8 and Atg12 conjugation systems); (iv) a recycling pathway that is required for disassembly of Atg proteins (Atg2, Atg9, Atg18); and (v) vacuolar permeases that permit the efflux of amino acids from the degradative compartment (Atg22) (Refs 93, 119). The mammalian TOR (mTOR) kinase acts as a negative regulator of autophagy and is a central controller of cell growth, aging and proliferation (Refs 123, 124). Under starvation conditions, inhibition of mTOR induces autophagy through phosphorylation of the Ulk1-Atg13-FIP200-Atg101 complex (Refs 125, 126), leading to localization of Ulk1/2 and Atg13 to the autophagic isolation membrane (Refs 127, 128). During the initiation step of autophagy, Beclin 1 interacts with Vps34, which contributes to Atg protein recruitment and autophagosome nucleation (Refs 129, 130). Interaction with various Beclin-1-interacting proteins facilitates the coordination of these events (Ref. 131). LC3, the mammalian ortholog of Atg8, is cleaved by Atg4 and then conjugated to the polar head of phosphatidylethanolamine (PE) to generate LC3-II, which is necessary during the elongation step of autophagy (Refs 132, 133). Finally, the autophagosome is regulated in response to the Beclin-1/Vps34/UVRAG complex, in what is known as the maturation step (Refs 134, 135, 136). An overview of autophagy is summarised in Figure 3.

FIGURE 3. Graphic representation of autophagy. Autophagy is a process for the degradation and recycling of damaged or unnecessary cellular components, with several tightly regulated steps including induction, nucleation, expansion and completion, fusion, and degradation. mTOR is the key regulator of autophagy induction; its suppression activates ULK1, triggering the VPS34-Beclin 1 class III PI3-kinase complex. Several different membrane pools contribute to the formation of the phagophore. The Atg proteins (Atg2, Atg9, Atg18) are essential for phagophore formation, and the ATG and LC3 conjugation systems also contribute to autophagosome membrane formation and elongation. The autophagolysosome is then formed by fusion of the autophagosome with a lysosome, allowing the contents to be degraded and reused. ATG, autophagy-related genes; mTOR, mammalian target of rapamycin.

Apoptosis

There are two main functionally distinct pathways for apoptosis induction (Fig. 4): the extrinsic and the intrinsic mitochondrial pathways (Refs 137, 138, 139). Caspases are involved in most apoptotic processes and are activated by ligation of death receptors [tumour necrosis factor receptor (TNFR), Fas, TNF-related apoptosis-inducing ligand (TRAIL)] or by release of specific proteins from the mitochondria (Refs 140, 141).
However, accumulating evidence suggests that the two pathways are intimately intertwined (Refs 138, 142), as described in the next sections. The extrinsic apoptosis cascade is stimulated by the binding of cell surface receptors to their ligands, resulting in Fas-associated protein with death domain (FADD)-dependent activation of the initiator caspase, caspase-8, and subsequently of caspase-3 and -7 (Refs 143, 144). As a consequence, the effector caspases (caspase-3 and caspase-7) are dimerized and activated and, once active, they can execute apoptosis (Refs 141, 145). The mitochondrial apoptotic death mechanism integrates various extracellular stimuli, including drugs, nutrients and radiation, as well as different intracellular stimuli such as oxidative stress, oncogene expression, endoplasmic reticulum (ER) stress and DNA damage (Refs 146, 147). The apoptotic signals in this pathway converge on the mitochondria to release apoptogenic proteins such as cytochrome c, apoptosis-inducing factor (AIF), Smac/DIABLO, Omi/HtrA2 and mitochondrial endonuclease G (Refs 148, 149, 150, 151). The Bcl-2 family of proteins serve as important regulators of the release of these mitochondrial proteins and can be divided into two classes: (i) antiapoptotic members (e.g. Bcl-2 and Bcl-xL); and (ii) proapoptotic members (e.g. Bax, Bak, Bid, Bad, Noxa, Puma and others) (Refs 152, 153). Up-regulation of proapoptotic proteins or down-regulation of antiapoptotic proteins can increase the permeability of the mitochondrial membrane, which promotes the release of cytochrome c and other proteins into the cytosol (Refs 151, 154, 155, 156). In the presence of deoxyadenosine triphosphate (dATP), the released cytochrome c interacts with Apaf-1 and caspase-9 to form a ternary complex, leading to activation of caspase-3 and then apoptosis (Refs 142, 157, 158). In addition, p53 plays a stimulating role in intrinsic apoptosis induction (Refs 159, 160, 161): the two direct p53 transcriptional targets Noxa and Puma can mediate the pro-apoptotic activity of Bax and Bak, and thereby promote apoptosis (Refs 162, 163). It is widely accepted that there is cross-talk between the extrinsic and intrinsic pathways, such that activity in one pathway interferes with signalling steps in the other (Ref. 141). The pro-apoptotic, cytochrome c-releasing factor Bid is positioned to serve as a link between the extrinsic death receptor pathway and the intrinsic pathway (Ref. 154). Full-length Bid resides in the cytosol; upon apoptosis induction through death receptors, caspase-8 cleaves Bid, and truncated Bid translocates to the mitochondria, activating the mitochondrial pathway and amplifying the apoptotic signal (Ref. 164). Although Bid is a downstream target of caspase-8 in the extrinsic apoptotic pathway, it also acts as a ligand for Bax and Bak, causing caspase-9 activation (Refs 154, 165). Caspase-9 proteolytically activates downstream caspases (e.g. caspases-3, -6, -7), which, in turn, can result in apoptosis (Refs 166, 167).

UPR

The ER contains an extensive network of tubules, sacs and cisternae, which extends from the cell plasma membrane through the cytoplasm to the nuclear envelope as a continuous connected network (Refs 168, 169). The ER is the main sub-cellular compartment involved in the proper folding of proteins and their maturation; approximately one-third of all cellular proteins are synthesised in the ER.
Many different perturbations can alter the function of the ER, leading to unfolding or misfolding of proteins in the ER; this condition is referred to as ER stress (Refs 169, 170). The ER mounts a series of adaptive mechanisms to prevent cell death, and these together are referred to as the UPR (Refs 170, 171). The UPR can act through the secretory pathway to restore protein folding homeostasis. However, if there is too much stress on the ER and the ER cannot cope, the outcome is eventually cell death (Ref. 172). The UPR also plays an important role in maintaining cellular homeostasis of specialised secretory cells such as pancreatic beta cells, salivary glands and plasma B cells (Ref. 170). It is becoming increasingly evident from animal models that the UPR has several functions that are not directly linked to protein folding, including inflammation, energy control, and lipid and cholesterol metabolism (Ref. 170). The existence of the UPR was first reported by Kozutsumi et al. more than 25 years ago (Ref. 173). They showed that glucose regulated proteins (GRPs) associated with the ER are upregulated upon sensing the presence of unfolded or misfolded proteins in the ER (Ref. 173). While the mechanisms and signalling events behind this were not known at the time, today we have a much better understanding of the UPR and how these events are regulated in the ER at the molecular level. ER stress response signals are constantly monitored by three main classes of sensors: inositol requiring enzyme 1 alpha (IRE-1α) and IRE-1β, protein kinase RNA like ER kinase (PERK), and activating transcription factor 6 (ATF6; both α and β isoforms) (Fig. 5). In normal healthy cells these sensors are in an inactive state.

IRE1. This is a type I transmembrane protein receptor with an N-terminal ER luminal-sensing domain; the cytoplasmic C-terminal region contains both an endoribonuclease domain and a Ser/Thr kinase domain (Ref. 169). There are two homologues of IRE1: IRE1α and IRE1β. Activation of IRE1 involves dissociation from Grp78, followed by dimerization, oligomerization and trans-autophosphorylation, which leads to conformational changes and activation of its endoribonuclease domain.

FIGURE 5. Graphic representation of ER stress and virus replication. ER stress is enhanced in virus-infected cells and activates UPR proteins (e.g. PERK, ATF6 and IRE1). Activated PERK induces ATF4 via phosphorylation of eIF2α, attenuating translation and inducing genes encoding CHOP. Upon IRE1 activation, TRAF2 recruitment and XBP1 mRNA splicing are initiated in the cytoplasm, subsequently leading to activation of UPR target genes. Recruitment of the UPR sensor ATF6 increases its turnover: ATF6 translocates to the Golgi and is cleaved to a nucleus-targeting form that promotes expression of UPR-responsive genes. The consequences of UPR activation are necessary for viral replication and pathogenesis. ATF, activating transcription factor; CHOP, C/EBP homologous protein; ER, endoplasmic reticulum; IRE1, inositol-requiring enzyme; PERK, protein kinase RNA like ER kinase; UPR, unfolded protein response.

[…] (Refs 169, 183). The role of Nrf2 as a pro-survival factor is further shown by the fact that cells devoid of Nrf2 display increased sensitivity to cell death via apoptosis after ER stress (Refs 169, 183). The overall UPR signalling pathway is shown in Figure 5.
The role of autophagy in arbovirus replication

Although autophagy was initially proposed as a physiological cellular response to environmental stress followed by virus amplification, increasing evidence now indicates that several viruses may use autophagy as a survival strategy to support their life cycle, which is known as 'pro-viral autophagy' (Refs 131, 138, 184, 185) (Fig. 6). Virus-induced autophagy seems to be associated with the replication/translation of many arboviruses, such as DENV, JEV, CHIKV, rotavirus, and epizootic haemorrhagic disease virus (EHDV, an orbivirus) (Refs 186, 187, 188, 189, 190, 191). Results obtained by monitoring LC3 lipidation in JEV-infected NT-2 cells (a pluripotent human testicular embryonal carcinoma cell line) treated with rapamycin and 3-methyladenine revealed a direct relationship between autophagy and viral replication. The results were confirmed using an Atg5 […]. Although the function or functions of autophagy in promoting virus replication are not completely understood, experimental evidence suggests that there are multiple pro-viral mechanisms of autophagy, including serving as a scaffold for viral replication, contributing to viral entry, regulating lipid metabolism, suppressing innate immune responses and preventing cell death (Ref. 196). A group of arboviruses including DENV and JEV may need to invoke autophagy components such as the autophagosome, amphisome and autolysosome to: (i) serve as a scaffold for viral replication; and (ii) escape from the immune system (Refs 187, 197, 198, 199). The amphisomes play major roles in DENV entry and in localisation of viral translation/replication constituents (Ref. 199). DENV-2 needs pre-lysosomal fusion vacuoles (amphisomes), while DENV-3 interacts with both amphisomes and autophagolysosomes as the sites for its viral translation/replication complexes (composed of viral RNA and proteins) (Ref. 199). Poliovirus and CHIKV also stimulate autophagosome formation as a site for aggregation of viral translation/replication complexes (Refs 188, 189, 200). After DENV and JEV induce autophagy, the presence of viral replication/translation complexes in both the autophagosome and the endosome suggests an auxiliary role for autophagosome-endosome fusion in viral entry (Refs 187, 201). Autophagy can regulate lipid metabolism (lipophagy) by modulating the degradation of triglycerides that have accumulated in cytosolic lipid droplets (Ref. 202). Use of lipid droplets as an energy source is another autophagy-mediated pro-viral mechanism exploited for DENV replication (Ref. 203): lipid droplets are sequestered in autophagosomes and delivered to lysosomes for degradation, generating FFAs from triglycerides (Ref. 203). The released FFAs are imported into mitochondria, where they undergo β-oxidation to produce ATP for viral replication (Ref. 203).

The innate antiviral immune response

The innate antiviral response is initiated by binding of the pattern recognition receptors (PRRs) retinoic acid-inducible gene I (RIG-I) and melanoma differentiation-associated protein 5 (MDA5) to intracellular viral pathogen-associated molecular patterns (PAMPs) […]

FIGURE 6. Pro-viral functions of autophagy. There are five possible mechanisms for modulating viral replication by autophagy. Amphisome formation is thought to be beneficial for viral cellular entry and replication. Induction of autophagosome formation is also important for the replication of some viruses.
Furthermore, viruses initiate autophagy to benefit from lipid droplets as an energy source during viral replication; free fatty acids are liberated from lipid droplets during autophagy to produce ATP. Viruses also stimulate autophagy to subvert immune responses by selectively degrading key regulatory molecules. Another mechanism is that viruses promote their replication by prolonging cell survival and suppressing cell death. The mechanistic details related to the pro-viral functions of autophagy are discussed in the text.

Regarding the cross-talk between autophagy and apoptosis, it is becoming apparent that autophagy postpones apoptosis and promotes CHIKV propagation by inducing the IRE1α-XBP-1 pathway in conjunction with ROS-mediated mTOR inhibition (Ref. 211). A schematic representation of autophagy and arbovirus replication is summarised in Figure 6.

The role of apoptosis in arbovirus replication

To date, several investigations have been carried out on the importance of apoptosis in different virus infections, pathogenesis and replication, but many issues remain unclear and under debate (Refs 212, 213, 214). As summarised in Figure 7, a number of arboviruses, such as Sindbis virus, WNV and JEV, appear to use apoptosis as a virulence factor to promote their own pathogenesis (Refs 215, 216, 217). Each of these viruses has specific targets and biochemically induced mechanisms during virus-induced programmed cell death. The observations suggest that Sindbis virus-induced apoptosis plays an important role in Sindbis virus pathogenesis and mortality (Ref. 215). After entry of Sindbis virus into the host cell and subsequent formation of Sindbis virus double-stranded RNA intermediates, dsRNA-dependent protein kinase (PKR) recognises these particles (Refs 218, 219, 220). PKR blocks cellular translation through eIF2α phosphorylation, which can then inhibit biosynthesis of Mcl-1 (an anti-apoptotic Bcl2 family protein) (Ref. 221). PKR also controls c-Jun N-terminal kinases (JNK) through IRS1 phosphorylation and later activates 14-3-3 (Ref. 222). 14-3-3 affects the accessibility of substrates (e.g. Bad) to kinases and serves to localise kinases to their substrates, thereby leading to release of Bad and disruption of the complex between the antiapoptotic Bcl2 family proteins Bcl-xL and Bak. Both Bad and Bik can displace Bak from Mcl-1, which results in Bak oligomerization, cytochrome c release, and subsequent induction of apoptosis (Ref. 222). CHIKV triggers the apoptosis machinery and uses apoptotic blebs to evade immune responses and facilitate its dissemination by infecting neighboring cells (Ref. 223). CHIKV infection can induce apoptotic cell death via at least two apoptotic pathways: the intrinsic pathway, which has been reported to be involved in virus replication and results in activation of caspase-9, and the extrinsic pathway, which depends on the induction of cell surface or soluble death effector ligands that activate caspase-8. Both pathways activate caspase-3 and finally induce cell death, which facilitates virus release and spread (Ref. 211). The replication of Crimean-Congo haemorrhagic fever virus (CCHFV), an arbovirus from the family Bunyaviridae, is associated with the death receptor pathway of apoptosis. Up-regulation of proapoptotic proteins (i.e. BAX and HRK) and of novel components of the ER stress-induced apoptotic pathways (i.e.
PUMA and Noxa) has also been shown in a CCHFV-infected hepatocyte cell line, suggesting a link between CCHFV replication, ER stress and apoptotic pathways. Notably, high levels of ER stress-activated transcription factors, such as CHOP, are present in hepatocytes following CCHFV replication (Ref. 224). In this study, it was shown that the over-expression of IL-8, an apoptosis inhibitor, during CCHFV infection was independent of apoptotic pathways. However, in other studies, a positive correlation was detected between IL-8 induction and DENV infection (Refs 224, 225, 226). In contrast to Sindbis virus, CHIKV and CCHFV replication in infected cells has been proposed to be necessary for apoptosis induction, as demonstrated by the use of UV-inactivated viral particles (Refs 227, 228, 229). The replication of flaviviruses (e.g. WNV, JEV and DENV) can be limited by virus-induced programmed cell death at the early stage of virus infection. These viruses might block or delay apoptosis by activating several cell survival pathways, such as PI3K/Akt signalling, to improve their replication rate (Refs 227, 230). Blocking PI3K (using LY294002 and wortmannin) showed that the induction of apoptosis might be a result of p38 MAPK activation and did not affect JEV and DENV viral particle production (Ref. […]). WNV can initiate apoptosis through caspases-3 and -12 and p53 after several rounds of replication, and it is noteworthy that the initial viral dose influences the kinetics of WNV-induced cell death (Refs 228, 233, 234, 235). After some RNA virus infections, the expression of multiple miRNAs in host cells might have either a positive or a negative effect on virus replication. One such cellular miRNA, Hs_154, limits WNV replication by inducing apoptosis through inhibition of two anti-apoptotic proteins, CCCTC-binding factor (CTCF) and EGFR-coamplified and overexpressed protein (ECOP) (Refs 227, 236). JEV, an RNA virus, may induce ROS-mediated ASK1-ERK/p38 MAPK activation and thus lead to initiation of apoptosis (Ref. 237). In mouse neuroblastoma cells (line N18) infected with ultraviolet-inactivated JEV (UV-JEV), replication-incompetent JEV virions induced cell death through a ROS-dependent and NF-κB-mediated pathway (Ref. 238). Initial suppression of UV-JEV-induced cell death, followed by co-infection with active or inactive JEV, showed that JEV may trigger cell survival signalling to modify the cell environment for timely virus production (Ref. 238). The NS1′ protein, a neuroinvasiveness factor that is only produced by the JEV serogroup of flaviviruses during replication, was identified as a caspase substrate in virus-induced apoptosis; however, use of a caspase inhibitor had no effect on virus replication (Ref. 239). Empirical evidence showed that JEV can increase Bcl-2 expression, enhancing the anti-apoptotic response rather than an anti-viral effect, to promote virus persistence and reach an equilibrium between replication and cell death (Ref. 240). Numerous in vitro studies have confirmed that DENV can induce apoptosis in a wide variety of mammalian cells, including endothelial cells, hepatocytes, mast cells, monocytes, dendritic cells and neuroblastoma cells, but the mechanisms are not completely understood. Dendritic cells are believed to be the primary DENV targets, playing central roles in supporting active replication during virus pathogenesis.
However, a recent study reported that DENV replication in monocyte-derived dendritic cells (mdDCs) was positively correlated with the secretion of pro-inflammatory cytokines such as TNF-α, and with apoptosis (Ref. 241). To achieve high replication in macrophages, hepatoma cells and dendritic cells, DENV may subvert apoptosis by inhibiting NF-κB in response to TNF-α stimulation (Refs 242, 243). Interaction between the DENV capsid protein and the hepatoma cell line (Huh7) […]. Apoptosis has also been extensively linked to reovirus replication. Bluetongue virus (BTV) induces apoptosis in the three mammalian cell lines tested, but not in the insect cell lines. BTV-mediated apoptosis involved activation of NF-κB and required virus uncoating and exposure to both outer capsid proteins, VP2 and VP5 (Ref. 249). Apoptosis was mediated by both the intrinsic and the caspase-dependent extrinsic pathways (Ref. 250). African horse sickness virus (AHSV), another orbivirus, also induced apoptosis in mammalian BHK-21 cells, but not in insect KC cells, through activation of caspase-3 (Ref. 251). Because apoptotic programmed cell death can act as a barrier against viral replication, previous research has revealed that some arboviruses can delay or block apoptosis to elevate their replication and dissemination. Moreover, replication of some arboviruses proceeds in the presence of virus-induced apoptosis. However, the exact mechanisms whereby viruses modulate apoptosis in different mammalian cells need to be studied more extensively.

Arboviruses and UPR

The scientific literature related to the role of the UPR in arbovirus pathogenesis is limited. Here, we review some of the arboviruses and the UPR pathways they elicit to aid replication. WNV is a neurotropic arbovirus that emerged as a pathogen of serious concern in the North American population. People infected with WNV are affected by severe neurological diseases such as meningitis, encephalitis and poliomyelitis (Ref. 233). WNV activates multiple UPR pathways, leading to transcriptional and translational activation of several UPR target genes (Ref. 233). Of the three UPR pathways, the XBP1 pathway was shown to be non-essential for WNV replication, being replaced by other pathways: ATF6 was degraded by the proteasome, and PERK transiently phosphorylated eIF2α and induced the pro-apoptotic protein CHOP (Ref. 233). WNV-infected cells showed signs of apoptotic cell death, including induction of growth arrest, activation of caspase-3, and activation of poly(ADP-ribose) polymerase (PARP). WNV titer levels were also significantly increased when the virus was grown in a CHOP-deficient (CHOP−/−) mouse embryo fibroblast (MEF) cell line, but not in wild-type MEF cells (Ref. 233). This evidence showed that WNV activates the UPR, and that a host mechanism to counteract WNV infection involves activation of CHOP-dependent cell death (Ref. 233). In another study, the WNV Kunjin strain activated UPR signalling upon infection in mammalian cells (Ref. 252). The UPR ATF6 and IRE1 pathways were activated by this strain; however, there was no significant phosphorylation of eIF2α, indicating that the UPR PERK pathway was not activated (Ref. 252). The Kunjin strain nonstructural proteins NS4A and NS4B were potent inducers of the UPR. Moreover, sequential removal of NS4A hydrophobic domains decreased UPR activation but increased interferon gamma-mediated signalling (Ref. 252). These results show that the WNV Kunjin strain activates UPR signalling and that hydrophobic residues of WNV nonstructural proteins regulate the UPR signalling cascade.
The role of ATF6 signalling in WNV replication is poorly understood. Results from the same group showed that ATF6 signalling is required for WNV replication, promoting cell survival and inhibition of the innate immune response (Ref. 253). ATF6-deficient cells showed a decrease in protein and virion production when infected with the WNV Kunjin strain. These cells also demonstrated increased eIF2α phosphorylation and CHOP transcription, events that were absent in infected control cells (Ref. 253). In contrast, IRE1-deficient cells did not show any discernible differences compared with IRE1-positive cells upon infection (Ref. 253). These results also demonstrate that, in the absence of ATF6, other UPR signalling cascades such as the PERK and IRE1 pathways cannot activate or enhance virus production, indicating that ATF6 is required for viral replication. It has also been shown that both ATF6 and IRE1 are required for signal transducer and activator of transcription 1 (STAT1) phosphorylation, showing that ATF6 is required for inhibition of the innate immune response (Ref. 253). The arboviruses CHIKV and Sindbis also cause frequent epidemics of febrile illness and long-term arthralgic sequelae that affect the lives of millions of people each year (Ref. 254). These viruses replicate in infected patients and in mammalian cells, indicating that they exert a certain control over the UPR of the host system. Analysis of these viral infections in mammalian cells shows that CHIKV specifically activates the ATF6 and IRE1 branches of the UPR pathway and suppresses the PERK pathway (Ref. 254). CHIKV nonstructural protein 4 (nsp4) expression in mammalian cells suppresses the eIF2α phosphorylation that regulates the PERK pathway (Ref. 254). These results provide insight into the replication of CHIKV in mammalian cells via regulation of the host UPR mechanism. In contrast, experimental findings with Sindbis virus show that it induced an uncontrolled UPR, reflected in a failure to induce synthesis of ER chaperones, followed by increased phosphorylation of eIF2α and activation of CHOP, leading to premature cell death (Ref. 254). In another study, it was reported that the UPR XBP1 pathway was activated when neuroblastoma N18 cells were infected with the arboviruses JEV and DENV (Ref. 255). This was evidenced by splicing of XBP1 mRNA and activation of the downstream genes ERDJ4, EDEM1 and p58. Reduction of XBP1 by small interfering RNA had no effect on cellular susceptibility to the two viruses but enhanced cellular apoptosis (Ref. 255). Overall, these results suggest that both JEV and DENV trigger the XBP1 signalling pathway and take advantage of this cellular response to alleviate virus-induced cytotoxicity (Ref. 255). According to another group, DENV infection of A547 ovarian cancer cells elicited the UPR signalling response, as demonstrated by phosphorylation of eIF2α (Ref. 256). It was also shown that different serotypes of DENV activate other UPR pathways, such as ATF6 and IRE1, demonstrating that different DENV serotypes have the capacity to modulate different UPR pathways. The group also demonstrated that treatment with salubrinal, a drug that blocks the de-phosphorylation of eIF2α, reduced virus infection. This unique report showed that the same virus could activate all three UPR pathways (Refs 256, 257). Initiation of UPR signalling is critical for cell survival and also for viral replication. All the above results show that arboviruses induce UPR signalling upon infection in mammalian cells.
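Several of the studies above use XBP1 mRNA splicing as the readout of IRE1 activation. A minimal sketch of that readout, computed as the spliced fraction from band or transcript intensities, is shown below; the intensity values are hypothetical, for illustration only, and real assays apply additional normalization.

```python
def xbp1_splicing_ratio(spliced: float, unspliced: float) -> float:
    """Spliced fraction of XBP1 mRNA, a common proxy for IRE1
    endoribonuclease activity; inputs are band/transcript intensities."""
    return spliced / (spliced + unspliced)

# Hypothetical intensities for a mock-infected vs. infected sample.
print(f"mock:     {xbp1_splicing_ratio(120.0, 480.0):.0%} spliced")
print(f"infected: {xbp1_splicing_ratio(430.0, 170.0):.0%} spliced")
```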
However, the UPR pathways activated upon infection with various arboviruses are not the same; even different strains of the same virus activate different UPR pathways. These results suggest that the specific virus-induced UPR pathway usage depends on the viral strain. In vitro studies using ectopically expressed arbovirus nonstructural proteins alone in mammalian cells showed that these proteins can by themselves elicit the UPR response, and mutation of certain hydrophobic residues in nonstructural proteins reduced the UPR signalling response. These results indicate that the composition of viral nonstructural proteins can determine which UPR pathway is elicited and the extent of the UPR response. Viral nonstructural proteins often undergo mutation; thus, more studies are needed to understand the role of arbovirus nonstructural proteins in inducing the UPR. The role these viruses may play in the UPR of invertebrate insect cells is even less well defined. Induction of UPR signalling by viruses is thus one important facet; equally important is how these viruses respond to anti-viral therapy. Do these viruses use the UPR pathways to decrease the effectiveness of anti-viral therapies? This is also one of the main questions to be answered. In conclusion, a significant amount of research is needed to investigate the pathogenesis of arboviruses and their relationship with UPR signalling. Such studies can provide us with better antiviral therapeutics to control arbovirus replication by addressing the various mechanisms of virus propagation.

Conclusion

Arbovirus infections lead to serious health issues in many parts of the world. To date, there is no treatment for most arbovirus infections, and vaccines have been developed for only a few of these arboviruses. Therefore, finding ways to increase the efficiency of current therapeutic approaches to arbovirus infections will improve health conditions in many areas of the world. As has been discussed in this review, arbovirus infection can stimulate apoptosis, autophagy, or the UPR in infected cells or organs, and activation of these pathways usually interferes with arbovirus replication and infection processes. Therefore, modulating these pathways may be part of future strategies to combat arbovirus infections. Apoptosis, autophagy and the UPR have been widely investigated in many diseases, including cancer, cardiovascular diseases and pulmonary diseases, and many inhibitors and inducers of these pathways have been developed to improve treatment protocols. Since apoptosis, autophagy and the UPR are tightly interconnected and usually affect each other, it is critical to determine which pathway is dominant in the arbovirus infection process and how it regulates viral infection and replication in infected cells. It is also very important to identify the extent of apoptosis, autophagy, and UPR alterations in virus-infected cells. After identifying these changes, it will be important to address how induction or inhibition of these pathways modulates virus replication and the production of active viral particles in infected cells. As an example, we can modulate the UPR using inducers (thapsigargin) or inhibitors (PERK GSK inhibitor, IRE1 inhibitor) and determine how these treatments affect arbovirus replication. These findings would provide better opportunities to use the modulation of these pathways for designing therapeutic strategies and controlling viral infection.
If this question can be clearly answered, induction or inhibition of these pathways may represent a novel enhanced treatment or prevention strategy against arbovirus infections.
A Variational Characterization of Fluid Sloshing with Surface Tension

We consider the sloshing problem for an incompressible, inviscid, irrotational fluid in an open container, including effects due to surface tension on the free surface. We restrict ourselves to a constant contact angle and seek time-harmonic solutions of the linearized problem, which describes the time-evolution of the fluid due to a small initial disturbance of the surface at rest. As opposed to the zero surface tension case, where the problem reduces to a partial differential equation for the velocity potential, we obtain a coupled system for the velocity potential and the free surface displacement. We derive a new variational formulation of the coupled problem and establish the existence of solutions using the direct method from the calculus of variations. We prove a domain monotonicity result for the fundamental sloshing eigenvalue. In the limit of zero surface tension, we recover the variational formulation of the mixed Steklov-Neumann eigenvalue problem and give the first-order perturbation formula for a simple eigenvalue.

Introduction

In fluid dynamics, sloshing refers to the motion of the free surface of a liquid inside a container. Spilling and splashing of a fluid is possible if the sloshing amplitude is large enough. Indeed, sloshing of a cup of coffee can devastate a perfectly good day [41]. Examples of more significant consequences due to sloshing include the free surface effect in ships and trucks transporting oil and liquified natural gas (LNG) [2,19] and sloshing of liquid propellant in spacecraft tanks and rockets [3,27]. LNG carriers usually operate either fully loaded or nearly empty, but there has been a growing demand for membrane-type LNG carriers that can operate with cargo loaded to any filling level. Experimental and numerical studies show that the coupling effect between sloshing dynamics inside tanks and ship motions can be significant at certain frequencies for partially filled tanks, where violent sloshing generates high impact pressure on the tank surfaces and compromises structural safety. As such, predicting and understanding the natural sloshing frequencies, modes, and impact loads at partially filled levels are of great concern to the safety and operability of LNG carriers close to an LNG terminal and remain one of the most crucial design aspects of the LNG cargo containment system. Since Robert Goddard's first launch of a liquid propellant rocket in 1926, scientists and engineers have worked to better understand the sloshing behavior of propellants in their tanks. This is important not only in terms of reducing costs and increasing the efficiency of future spacecraft designs, but also in minimizing potential impacts, especially on flight safety, since violently sloshing fuels can, for example, produce highly localized impact loads and pressure on tank walls or affect the spacecraft's guidance system. There are many instances where space missions were either deemed a failure or could not be completed due to sloshing [36,48,28,59,55]. For instance, in March 2007, the SpaceX Falcon 1 vehicle tumbled out of control when an oscillation appeared in the upper stage control system approximately 90 seconds into the burn and an instability grew in pitch. It was verified by third party industry experts that cryogenic liquid oxygen (LOX) sloshing was the primary contributor to this instability [1].
Recent advances in computational fluid dynamics (CFD) tools have made accurate numerical modeling of sloshing dynamics and extraction of mechanical parameters such as sloshing frequency and sloshing mass center possible [51]. However, this requires extensive experimental validation and verification in microgravity or zero-gravity environments, since fluid behaves in an unpredictable manner in the absence of gravity. To benchmark and expand CFD tools for characterizing sloshing dynamics, engineers at NASA, together with researchers from the Florida Institute of Technology and the Massachusetts Institute of Technology, designed the SPHERES-Slosh experiment (SSE), carried aboard the International Space Station. This investigation is planned to collect valuable data and information on how liquids move around inside of a container in the presence of an external force. A description of design details of the SSE can be found in [11,38].

In this paper, we study the linearized sloshing problem of an incompressible, inviscid, irrotational fluid in containers, including surface tension effects on the free surface.

1.1. Surface tension effects.

Surface tension is present at all fluid interfaces, and it manifests itself in nature, most commonly in capillary phenomena such as the rise of water up a capillary tube. Surface tension, defined as force per unit length, can be explained in terms of surface force or surface energy [39,10]. Roughly speaking, it is the intermolecular force required to contract the liquid surface to its minimal surface area. Geometrically, including surface tension forces is equivalent to considering the curvature of the interface. If we denote by $\rho, g, T, l$ the density of the fluid, gravitational acceleration, surface tension, and some characteristic length scale of the system respectively, and assume that $\rho, T$ are constant, then the dimensionless parameter $\mathrm{Bo} = \rho g l^2 / T$, known as the Bond-Eötvös number [27], measures the importance of the surface tension force relative to the gravitational force. For $\mathrm{Bo} \gg 1$, surface tension is assumed to be negligible, and this is often the case for fluids in large containers under a regular gravity field. However, if $\mathrm{Bo} \ll 1$, then surface tension is not negligible anymore; this occurs when one is examining sloshing behavior in a microgravity environment or if the characteristic length of the interface is much smaller than the capillary length, $l_c^2 = T/(\rho g)$ (a short numerical illustration follows below).

Closely related to the concept of surface tension is that of the contact angle, in other words the angle of contact between the solid and the liquid-air interface along the line of intersection between the container's wall and the fluid free surface, known as the contact line [17]. On one hand, the contact angle is a geometrical quantity uniquely defined as a dot product, while on the other hand it is a physical quantity which quantifies the wettability of a solid surface. In the static case, the resulting condition is known as Young's equation and can be derived from an energy minimization argument on the contact line [47,20]. In the dynamic case, accurately describing the contact angle remains poorly understood, mainly due to contact angle hysteresis. We will further discuss the contact angle in Section 2.

1.2. Sloshing problem with surface tension.

Consider an irrotational flow of an incompressible, inviscid fluid occupying a bounded region $D_T \subset \mathbb{R}^3$ in a simply connected container.
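To give the Bond-number criterion of Subsection 1.1 some concrete numbers before the geometric setup continues, here is a minimal Python sketch. The property values are rough textbook figures for water at room temperature; they are an assumption made for illustration and are not taken from this paper.

```python
# Bond number Bo = rho*g*l^2/T: surface tension matters when Bo <~ 1.
# Property values are rough textbook figures for water at ~20 C
# (an assumption for illustration; nothing here comes from the paper).
rho, g, T = 1000.0, 9.81, 0.072        # kg/m^3, m/s^2, N/m

def bond_number(l, g=g):
    """Dimensionless Bond-Eotvos number for a length scale l (meters)."""
    return rho * g * l**2 / T

print(bond_number(0.05))               # 5 cm tank, normal gravity: Bo ~ 340
print(bond_number(0.002))              # 2 mm tube: Bo ~ 0.5, capillarity matters
print(bond_number(0.05, g=9.81e-4))    # same tank, microgravity: Bo << 1
print((T / (rho * g)) ** 0.5)          # capillary length l_c ~ 2.7 mm
```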
The Cartesian coordinates $\tilde{\mathbf x} = (\tilde x, \tilde y, \tilde z)$ are chosen in such a way that the static free surface (or static meniscus), denoted by $F$, lies in the $\tilde x$-$\tilde y$ plane and the $\tilde z$-axis is directed upward. Here, $D_T$ is a bounded simply connected Lipschitz domain; in particular its boundary $\partial D_T$ has no cusps. $\partial D_T$ consists of two parts: the (evolving) free surface $F_T$, defined by
$$F_T = \{(\tilde x, \tilde y, \tilde z) \in \mathbb{R}^3 : \tilde z = \tilde\eta(\tilde x, \tilde y, t)\},$$
where $\tilde\eta$ is the free surface displacement, together with the wetted boundary $B = \partial D_T \setminus F_T$. Moreover, the container's wall over which the contact line moves is vertical. The subscripts on $D_T$, $F_T$ are used to denote time-dependence. The static meniscus $F$ is assumed to intersect the vertical container wall orthogonally; this corresponds to a $90^\circ$ (static) contact angle and, together with the assumption that the wall is vertical near the free surface, implies $\hat n_B(x) = \hat n_{\partial F}(x)$ for all $x \in \partial F$; see Figure 1. Another consequence is that $F$ is a flat interface on the plane $\{\tilde z = 0\}$; this will be proved in Section 2. One can think of $F_T$ as a small perturbation of $F$.

We give a brief description of the water waves equations describing fluid motion in $D_T$; details of the derivation can be found in Appendix A. We denote by $\tilde{\mathbf u}(\tilde{\mathbf x}, t)$ the velocity field of the fluid. Incompressibility and irrotationality imply the existence of a velocity potential, denoted $\tilde\varphi = \tilde\varphi(\tilde{\mathbf x}, t)$, satisfying Laplace's equation in $D_T$. The Neumann boundary condition is imposed on $B$, while the classical kinematic and dynamic boundary conditions are imposed on $F_T$, the latter of which can be expressed in terms of $\tilde\varphi$ using Bernoulli's principle for an ideal fluid with unsteady irrotational flow. Nondimensionalizing the system with dimensionless variables, where $a > 0$ is some characteristic length scale of the system, we obtain a system of dimensionless nonlinear partial differential equations, (2a)-(2e); the full system is derived in Appendix A. Here $\Delta = \partial_x^2 + \partial_y^2 + \partial_z^2$, and $\hat n_B$, $\hat n_{F_T}$ are the outward unit normals to the boundary $B$ and the free surface $F_T$, respectively. By $\partial_n$ we mean the normal derivative of a function. We discuss in detail the contact line boundary condition (2e) in Section 2.

We further simplify (2) as follows. Consider an equilibrium solution $(\varphi_0, \eta_0) = (c, 0)$ of (2), where $c$ is any constant (which gives a zero velocity field). Assuming the free surface displacement $\eta$ is a small perturbation of $\{z = 0\}$, we look for solutions of the form
$$\varphi(x, y, z, t) = c + \varepsilon\,\varphi_1(x, y, z, t), \qquad \eta(x, y, t) = \varepsilon\,\eta_1(x, y, t),$$
where $\varepsilon > 0$ is some small parameter, and collect the $O(\varepsilon)$ terms. Next, we Taylor expand $\varphi_1$ and its derivatives around $z = 0$. This transforms the boundary conditions, (2c) and (2d), from $F_T$ to $F$. Seeking time-harmonic solutions, where $\Phi(x, y, z)$ and $\xi(x, y)$ are the sloshing velocity potential and height respectively, we obtain the linearized eigenvalue problem for $(\omega, \Phi, \xi)$, which we refer to as the sloshing problem with surface tension:
$$\Delta\Phi = 0 \ \text{ in } D, \tag{3a}$$
$$\partial_n\Phi = 0 \ \text{ on } B, \tag{3b}$$
$$\Phi_z = \omega\,\xi \ \text{ on } F, \tag{3c}$$
$$\omega\,\Phi = \xi - \mathrm{Bo}^{-1}\,\Delta_F\,\xi \ \text{ on } F, \tag{3d}$$
$$\partial_n\xi = 0 \ \text{ on } \partial F. \tag{3e}$$
Here, $\nabla_F := (\partial_x, \partial_y)$, $\Delta_F := \nabla_F \cdot \nabla_F = \partial_{xx} + \partial_{yy}$, $\Phi_z = \partial_z\Phi$, and $D$ is the fixed reference domain, with boundary $\partial D = F \cup B$; see Figure 1. This problem must also be complemented with the condition $\int_F \xi\, dA = 0$, which amounts to mass conservation of the fluid. Since we are only interested in nontrivial solutions of (3), we exclude the trivial solution $(\omega_0, \Phi_0, \xi_0) = (0, 1, 0)$ by imposing the orthogonality condition $\int_F \Phi\, dA = 0$. Interestingly, the spectral parameter, $\omega$, appears in the boundary conditions on the free surface, (3c) and (3d).
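For intuition, the coupled system (3) admits an explicit separable solution in the special case of a two-dimensional rectangular tank with vertical walls and a flat meniscus, recovering the classical capillary-gravity dispersion relation. The sketch below evaluates it; it relies on the dimensionless form of (3c)-(3e) as reconstructed above (with unit gravity), so it should be read as an illustration of the coupled eigenvalue problem, not as part of the paper's analysis.

```python
import numpy as np

# 2D rectangular tank of dimensionless width L and depth h. With
# k_n = n*pi/L, the separable modes
#   Phi_n = cos(k_n x) cosh(k_n (z + h)),   xi_n ~ cos(k_n x)
# satisfy (3a)-(3b) and the contact-line condition (3e), while (3c)-(3d)
# reduce to the classical capillary-gravity dispersion relation
#   omega_n^2 = (k_n + k_n^3 / Bo) * tanh(k_n h).
def sloshing_frequencies(L=1.0, h=1.0, Bo=100.0, n_modes=4):
    n = np.arange(1, n_modes + 1)
    k = n * np.pi / L
    return np.sqrt((k + k**3 / Bo) * np.tanh(k * h))

for Bo in (np.inf, 100.0, 1.0):   # Bo = inf is the zero surface tension case
    print(f"Bo = {Bo}: omega_n =", sloshing_frequencies(Bo=Bo).round(3))
```

The $k_n^3/\mathrm{Bo}$ term stiffens short waves: for small Bo the higher sloshing frequencies rise noticeably above their pure-gravity values, while $\mathrm{Bo} = \infty$ reproduces the classical sloshing spectrum.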
When $\mathrm{Bo} = \infty$, we see from (3d) that the free surface height $\xi$ is proportional to the sloshing mode $\Phi$ restricted to the free surface $F$ and can be eliminated from (3). This yields the greatly simplified eigenvalue problem for $(\omega, \Phi)$,
$$\Delta\Phi = 0 \ \text{ in } D, \qquad \partial_n\Phi = 0 \ \text{ on } B, \qquad \Phi_z = \omega^2\,\Phi \ \text{ on } F, \tag{4}$$
which is commonly referred to as the mixed Steklov-Neumann eigenvalue problem or the sloshing problem. It is known [46,29] that, if $D$ and $F$ are Lipschitz domains, then (4) has a discrete sequence of eigenvalues $0 = \omega_0^2 < \omega_1^2 \le \omega_2^2 \le \cdots$ with $\omega_n^2 \to \infty$ as $n \to \infty$. The corresponding eigenfunctions $\{\Phi_n\}_{n=0}^{\infty}$ belong to the Sobolev space $H^1(D)$ and, when restricted to the free surface $F$, form a complete orthogonal set in $L^2(F)$. The eigenvalues $\omega_n^2$ can be characterized by means of a variational principle [46,61]:
$$\omega_n^2 = \min_{\Phi \in H_n,\ \Phi \neq 0} \frac{\int_D |\nabla\Phi|^2\, dV}{\int_F \Phi^2\, dA}, \tag{5}$$
where $H_n$ is defined by
$$H_n = \Big\{\Phi \in H^1(D) : \int_F \Phi\,\Phi_j\, dA = 0,\ j = 0, \dots, n-1\Big\},$$
where $\Phi_j$ is the $j$-th eigenfunction of (4). Here $\Phi_0$ is the constant solution corresponding to $\omega_0 = 0$. It is worth mentioning that the fundamental eigenfunction $\Phi_1$, corresponding to the fundamental (first nontrivial) eigenvalue $\omega_1^2$, can be used to determine the "high spot", the maximal elevation of the free surface of the sloshing fluid. See, for example, [37] for such a relation. Several results about the location of high spots for different container geometries in two and three dimensions were obtained in [32,33,34]. Moreover, it was shown in [32] that for vertical-walled containers with constant depth, the question about high spots is equivalent to the hot spots conjecture formulated by Rauch. See [9] for a recent review.

1.4. Main results.

Modeling irrotational water waves using variational principles has been investigated recently in [12]. There are mainly two variational principles: the Hamiltonian of Petrov-Zakharov [50,69] and the Lagrangian of Luke [66,40,67]. In this paper, we derive a variational principle similar to Luke's, in the sense that it is of free boundary type [15, p. 208]. Let $\mathcal H$ be the direct sum of Sobolev spaces defined by
$$\mathcal H = \Big\{\Phi \in H^1(D) : \int_F \Phi\, dA = 0\Big\} \oplus \Big\{\xi \in H^1(F) : \int_F \xi\, dA = 0\Big\}.$$
Define the Dirichlet energy of $\Phi \in H^1(D)$ and the free surface energy of $\xi \in H^1(F)$ by
$$D[\Phi] = \frac{1}{2}\int_D |\nabla\Phi|^2\, dV, \qquad S[\xi] = \frac{1}{2}\int_F \Big(\xi^2 + \mathrm{Bo}^{-1}\,|\nabla_F\,\xi|^2\Big)\, dA,$$
respectively. Our main result is the following theorem giving a variational characterization of the fundamental eigenvalue of (3).

Theorem 1.1. There exists a minimizer $(\Phi_1, \xi_1)$ to the following minimization problem:
$$\min\Big\{ D[\Phi] + S[\xi] : (\Phi, \xi) \in \mathcal H,\ \langle \Phi, \xi \rangle_{L^2(F)} = 1 \Big\}. \tag{6}$$
Moreover, $(\Phi_1, \xi_1)$ is an eigenfunction of (3) in the weak sense with corresponding eigenvalue $\omega_1 = D[\Phi_1] + S[\xi_1]$.

We also prove a Rayleigh-Ritz generalization of Theorem 1.1 for higher eigenvalues; see Theorem 4.3. An interesting feature of both variational characterizations is the constraints involving the $L^2$ inner product on the free surface $F$, requiring the sloshing mode and the free surface height to have unit inner product and be orthogonal to lower modes; see Lemma 3.1.

Remark 1. It is not difficult to show that if $(\varphi(x, y, z, t), \xi(x, y, t))$ satisfies the time-dependent linear sloshing problem (43), then the quantity $D[\varphi(\cdot, t)] + S[\xi(\cdot, t)]$ is conserved in time.

In Theorem 4.2, we prove a domain monotonicity result, analogous to a result in [46], for the fundamental eigenvalue of (3). In Section 4.1, we describe the variational formulation for the sloshing problem (3) of Kopachevsky and Krein [29] and compare it to the present work. In Corollary 5.1, we establish that in the limit of zero surface tension ($\mathrm{Bo} = \infty$), the variational principle in Theorem 1.1 reduces to the mixed Steklov-Neumann variational principle (5). In Theorem 5.2, we give the first-order perturbation formula for a simple eigenvalue satisfying (3) in the limit where the Bond number is large.
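Before turning to the cylindrical illustration, here is a hedged numerical check of the variational principle (5) in the zero surface tension case, on the rectangle $D = (0, L) \times (-h, 0)$ with free surface $F = (0, L) \times \{0\}$. The example is ours, not the paper's: the quotient evaluated at the exact separable eigenfunction reproduces $k\tanh(kh)$ with $k = \pi/L$, while any other mean-zero trial function gives only an upper bound.

```python
import numpy as np
from scipy.integrate import dblquad, quad

L, h = 1.0, 1.0
k = np.pi / L                        # first admissible wavenumber

# Trial 1: the exact eigenfunction Phi = cos(kx) cosh(k(z+h)) (mean zero on F).
grad_sq = lambda z, x: ((k*np.sin(k*x)*np.cosh(k*(z+h)))**2
                        + (k*np.cos(k*x)*np.sinh(k*(z+h)))**2)
num = dblquad(grad_sq, 0.0, L, lambda x: -h, lambda x: 0.0)[0]
den = quad(lambda x: (np.cos(k*x)*np.cosh(k*h))**2, 0.0, L)[0]
print("Rayleigh quotient:", num / den)           # ~ 3.130
print("exact k*tanh(k*h):", k * np.tanh(k * h))  # ~ 3.130

# Trial 2: crude mean-zero trial Phi = x - L/2. Here int_D |grad Phi|^2 = L*h
# and int_F Phi^2 = L^3/12, so (5) yields only the upper bound 12*h/L^2 = 12.
print("upper bound from polynomial trial:", (L * h) / (L**3 / 12))
```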
Finally, we illustrate Theorem 5.2 with a cylindrical container, where the exact solution is known.

1.5. Outline.

This paper is structured as follows. We begin by discussing the contact angle and its role in the contact line boundary condition (2e) in Section 2. In Section 3, we prove preparatory results for Theorem 1.1. We prove Theorem 1.1 in Section 4 and provide a Rayleigh-Ritz generalization of Theorem 1.1 for higher eigenvalues. Section 5 describes the asymptotic behavior of the eigenvalue $\omega$ in the limit where Bo is large. We conclude in Section 6 with a discussion. In Appendix A, we give a physical derivation of the sloshing problem with surface tension, (3).

2. Contact angle and its relation with the contact line boundary condition

It can be seen in Appendix A that including surface tension effects on the free surface $F_T$ introduces additional terms involving second derivatives of $\eta$ into the dynamic boundary condition (2d) on $F_T$. It is thus deemed necessary to impose a boundary condition on $\partial F_T$ so that the sloshing problem (3) is well-posed. Such a boundary condition, commonly referred to as the contact-line boundary condition, controls the free surface height at the contact point, i.e. the point at which the contact line intersects the container's wall [25]. The contact angle, defined in Subsection 1.1, plays an important role in describing the contact line behavior. As first described by Young in his celebrated essay [68], the static contact angle $\theta_s$ (also called Young's angle) is characterized by the equation
$$T_{LG}\cos\theta_s = T_{SG} - T_{SL},$$
where $T_{LG}$, $T_{SG}$, $T_{SL}$ represent the liquid-gas, solid-gas, and solid-liquid surface tensions, respectively. Once the contact line is in motion, one should expect the contact angle to be different from $\theta_s$; such a contact angle is then called the dynamic contact angle $\theta_d$. Accordingly, the static contact angle should remain unchanged in static conditions; however, experimental evidence [17,14,13] demonstrates that this is false in general. In fact, the static contact angle lies in a range $\theta_R \le \theta_s \le \theta_A$, where $\theta_R$ and $\theta_A$ are the so-called receding and advancing contact angles, respectively. Such behavior is known as contact angle hysteresis, and surface roughness and/or heterogeneity of the container wall seem to be the reason behind it. It is therefore extremely difficult to derive boundary conditions that take into account both the contact angle hysteresis and the dynamic behavior of the contact line. We list three contact-line boundary conditions proposed in the study of capillary-gravity waves, each of which works under different assumptions. See [52] for a recent review.

(1) Free-end edge constraint (Neumann-type), which has the form $\partial_{\hat n}\eta = 0$ on $\partial F_T$, where $\hat n$ is the normal to the solid boundary drawn into the fluid. This is a standard approach in studying capillary-gravity waves. This occurs if one assumes that the contact line can freely slip across the container's wall and $\theta_d \approx \theta_s$. Reynolds and Satterlee consider such a special case in [53].

(2) Pinned-end edge constraint (Dirichlet-type), which has the form $\eta_t = 0$ on $\partial F_T$. This corresponds to fixing the contact line at the contact point (hence the word pinned) and assuming the dynamic contact angle $\theta_d$ lies within the interval $(\theta_R, \theta_A)$. This was first suggested by Benjamin and Scott [8] and investigated in [22,23,7,24]; however, these are all restricted to a flat static interface, i.e. $\theta_s = \pi/2$. The case of a curved static interface, i.e. $\theta_s \neq \pi/2$, was recently investigated in [56].
It is worth mentioning that while this boundary condition makes the theoretical analysis much more difficult but still possible, it is not compatible with the kinematic condition at the container's wall [57].

(3) Wetting boundary condition (Robin-type), which has the form $\eta_t = \lambda\,\partial_{\hat n}\eta$ on $\partial F_T$, where $\lambda$ is some constant measuring the ratio of the contact line velocity to the change in contact angle. Observe that this model includes, as limiting cases, both the free-end ($\lambda = \infty$) and the pinned-end ($\lambda = 0$) edge conditions. This was first proposed by Hocking [25,26] and investigated by Miles [42,43,44,45] and Shen and Yeh [58]. The assumptions needed here are that the contact angle hysteresis $\theta_A - \theta_R$ is negligibly small, $\theta_s = \pi/2$, and $\theta_d$ is a linear function of the contact line velocity.

In this paper, we assume that the static contact angle is $\theta_s = \pi/2$ and the contact angle hysteresis is negligibly small; this is physically achieved by a container with smooth walls and a fluid that is free of contamination. It can then be shown [57] that the contact angle remains unchanged, i.e. $\theta_d = \pi/2$. Assuming that the contact line slips freely, we can write down the boundary condition (2e) as the free-end condition $\partial_{\hat n}\eta = 0$ on $\partial F_T$.

Another consequence of this assumption is that the static meniscus $F$ is flat everywhere. Assuming constant surface tension $T_{LG} = T$, its shape, which we denote by $S(\tilde x, \tilde y)$, is governed by the Young-Laplace equation [20,10]. Since $\hat n_B = \hat n_{\partial F}$ on $\partial F$ and $\theta_s = \pi/2$, the contact line boundary condition becomes $\partial_{\hat n} S = 0$ on $\partial F$. Next, assuming $S_{\tilde x}^2 + S_{\tilde y}^2 \ll 1$ (small slope approximation), we can linearize the Young-Laplace equation; upon nondimensionalizing the system, we obtain the dimensionless linearized Young-Laplace equation
$$\Delta_F\, s = \mathrm{Bo}\; s \ \text{ in } F, \qquad \partial_{\hat n} s = 0 \ \text{ on } \partial F. \tag{8}$$
The trivial solution $s(x, y) \equiv 0$ exists for problem (8), and since Bo is assumed to be positive, an energy argument shows that there is no nontrivial solution.

3. Preliminary results

In this section we collect a range of auxiliary results that are required in the proofs of Theorems 1.1 and 4.3.

3.1. Properties of solutions to (3). If $|\omega_j| \neq |\omega_k|$, the following orthogonality condition holds:
$$\langle \Phi_j, \xi_k \rangle_{L^2(F)} = \langle \Phi_k, \xi_j \rangle_{L^2(F)} = 0.$$

Proof. Part (a) is obtained by simply integrating (3) over the respective domains and applying the divergence theorem. Part (b) is an easy consequence of the divergence theorem. We now prove part (c) using part (b). First, substituting (3c) for both $\Phi_j, \Phi_k$ into (9a) yields $\omega_k\,\langle \Phi_j, \xi_k\rangle_{L^2(F)} = \omega_j\,\langle \Phi_k, \xi_j\rangle_{L^2(F)}$. Similarly, substituting (3d) for both $\xi_j, \xi_k$ into (9b) yields $\omega_j\,\langle \Phi_j, \xi_k\rangle_{L^2(F)} = \omega_k\,\langle \Phi_k, \xi_j\rangle_{L^2(F)}$. Rearranging these equations gives a linear system $A(a, b)^\top = 0$ for $a = \langle \Phi_j, \xi_k\rangle_{L^2(F)}$ and $b = \langle \Phi_k, \xi_j\rangle_{L^2(F)}$. A nontrivial solution exists for the linear system if and only if $\det(A) = \omega_j^2 - \omega_k^2 = 0$; since $|\omega_j| \neq |\omega_k|$ by assumption, it follows that $a = b = 0$.

Next, integrating (3a) against $\Phi$ over $D$ and applying the divergence theorem gives $\int_D |\nabla\Phi|^2\, dV = \int_F \Phi\,\Phi_z\, dA$. Integrating (3c) against $\Phi$ over $F$ and using the equation above gives (14): $\int_D |\nabla\Phi|^2\, dV = \omega\,\langle \Phi, \xi\rangle_{L^2(F)}$. Next, integrating (3d) against $\xi$ over $F$ and applying the divergence theorem gives (15): $\omega\,\langle \Phi, \xi\rangle_{L^2(F)} = \int_F \big(\xi^2 + \mathrm{Bo}^{-1}|\nabla_F\,\xi|^2\big)\, dA$. The result follows from summing (14), (15) and rearranging terms.

3.2. Direct method from the calculus of variations. This subsection establishes results for the functional in (6) so that we may apply the direct method from the calculus of variations [16,18] to prove Theorem 1.1. We begin by reminding the reader that $D \subset \mathbb{R}^3$ is assumed to be a bounded Lipschitz domain, and the Sobolev space $H^1(D)$ admits a natural inner product, given by
$$\langle v, w\rangle_{H^1(D)} = \langle v, w\rangle_{L^2(D)} + \langle \nabla v, \nabla w\rangle_{L^2(D)}$$
for any $v, w \in H^1(D)$, with induced norm $\|v\|_{H^1(D)}^2 = \|v\|_{L^2(D)}^2 + \|\nabla v\|_{L^2(D)}^2$. For any $\Phi \in H^1(D)$ and $\xi \in H^1(F)$, we denote by $[\Phi]_F$, $[\xi]_F$ the average value (mean) of $\Phi$, $\xi$ over $F$, respectively.
That is, $[\Phi]_F = \frac{1}{|F|}\int_F \Phi\, dA$ and $[\xi]_F = \frac{1}{|F|}\int_F \xi\, dA$, where $|F|$ denotes the two-dimensional Lebesgue measure of $F$; here $[\Phi]_F$ is understood in the sense of trace [18, Chapter 5.5]. The first result shows that the space of functions in Theorem 1.1 is a Hilbert space.

Proof. Define the following function spaces:
$$X_D = \{\Phi \in H^1(D) : [\Phi]_F = 0\}, \qquad X_F = \{\xi \in H^1(F) : [\xi]_F = 0\}.$$
We first show that $X_D$, $X_F$ are closed subspaces of $H^1(D)$, $H^1(F)$, respectively. It is clear that both $X_D$, $X_F$ are subspaces. Consider any $\Phi \in \overline{X_D}$, the closure of $X_D$. There exists a sequence $(\Phi_j) \subset X_D$ such that $\Phi_j \to \Phi$ in $H^1(D)$. Using the continuity of the trace operator $\Gamma_D : H^1(D) \to L^2(\partial D)$ [18], we find $[\Phi]_F = \lim_{j\to\infty}[\Phi_j]_F = 0$. This shows that $X_D$ is closed in $H^1(D)$. A similar argument using only the Cauchy-Schwarz inequality shows that $X_F$ is closed in $H^1(F)$. Finally, since a closed subspace of a Hilbert space is also a Hilbert space, the direct sum of $X_D$ and $X_F$, which is $\mathcal H$, is a Hilbert space, with its inner product defined componentwise.

To apply the direct method, one needs a lower bound of trace-Poincaré type (Theorem 3.4), stated on a domain $\Omega$ with boundary portion $\Sigma$, where $|\Omega|$ and $|\Sigma|$ are the $n$- and $(n-1)$-dimensional Lebesgue measures of $\Omega$ and $\Sigma$, respectively, and $C_{\Gamma_\Omega}$, $C_p$ are positive constants that depend only on $\Omega$.

Lemma 3.5. The integral functional $\mathcal F(v) = D[\Phi] + S[\xi]$ is weakly lower semicontinuous in $\mathcal H$ and satisfies a coercivity condition. The weak lower semicontinuity follows since $\mathcal H$ is a subspace of $H^1(D) \times H^1(F)$. Since $[\Phi]_F = 0$, Theorem 3.4 yields a lower bound on $D[\Phi]$ in terms of $\|\Phi\|_{H^1(D)}$; on the other hand, $S[\xi]$ controls $\|\xi\|_{H^1(F)}$ for fixed $\mathrm{Bo} > 0$. Coercivity follows. Having established coercivity and sequential weak lower semicontinuity, we prove the final ingredient, which essentially says that the minimizing sequence "preserves" the integral constraint in the variational problem (6).

Proof. Consider any (not renumbered) subsequence of a weakly convergent sequence $(\Phi_j, \xi_j) \rightharpoonup (\Phi, \xi)$ in $\mathcal H$. First, the Rellich-Kondrachov theorem implies that there exists a subsubsequence $(\xi_{j_k}) \subset H^1(F)$ such that $\xi_{j_k} \to \xi$ strongly in $L^2(F)$. Recall that, since $\Gamma_D$ is a compact linear operator, it maps weakly convergent sequences into strongly convergent sequences. Thus, $\Gamma_D(\Phi_j) \to \Gamma_D(\Phi)$ strongly in $L^2(\partial D)$. For this subsubsequence $(\Phi_{j_k}, \xi_{j_k})$, the Cauchy-Schwarz inequality gives
$$\big|\langle \Phi_{j_k}, \xi_{j_k}\rangle_{L^2(F)} - \langle \Phi, \xi\rangle_{L^2(F)}\big| \le \|\Gamma_D(\Phi_{j_k}) - \Gamma_D(\Phi)\|_{L^2(F)}\,\|\xi_{j_k}\|_{L^2(F)} + \|\Gamma_D(\Phi)\|_{L^2(F)}\,\|\xi_{j_k} - \xi\|_{L^2(F)} \to 0,$$
where we used the fact that $\|\xi_{j_k}\|_{L^2(F)}$ is bounded since $(\xi_{j_k})$ is a convergent sequence in $L^2(F)$. This shows that $\langle \Phi_{j_k}, \xi_{j_k}\rangle_{L^2(F)} \to \langle \Phi, \xi\rangle_{L^2(F)}$. Since this is true for any subsequence of $(\Phi_j, \xi_j)$, the result follows.

4. Proof of Theorem 1.1, a Rayleigh-Ritz generalization, and other results

We are now ready to prove Theorem 1.1. An immediate consequence is the domain monotonicity property for the fundamental eigenvalue of (3). We also prove a variational characterization of the higher eigenvalues of (3). In the following theorem, we prove a domain monotonicity result about the fundamental eigenvalue of (3), stating that if two containers have an identical free surface and both container walls are vertical at the free surface, then the larger container has a higher fundamental sloshing frequency. A similar result for the mixed Steklov-Neumann problem is given in [46].

4.1. Comparison to the variational formulation of the sloshing problem with surface tension of Kopachevsky and Krein. In [29, p. 207], a variational formulation for the eigenvalues (sloshing frequencies) of the sloshing problem with surface tension is given. It is worth noting that the authors work in a more general setting.

(1) The static contact angle satisfies $\theta_s \neq \pi/2$, which means that $F$ is a curved surface. Upon linearization, this introduces additional coupled terms in the kinematic boundary condition on $F$. To compensate for this, a curvilinear coordinate system is introduced.
(2) The dynamic contact angle $\theta_d$ is shown to remain unchanged, and the contact-line boundary condition on $\partial F$ is of Robin type, having the form $\partial_{\hat n}\eta = -\chi\eta$, where $\chi$ is a dimensionless constant depending on $\theta_s$ and the curvature of $\partial F$.

In the present work, for simplicity, we have assumed a contact angle of $\theta_s = \pi/2$ and used Cartesian coordinates. Moreover, we assume $\chi = 0$ so that the Neumann boundary condition $\partial_{\hat n}\eta = 0$ on $\partial F$ is recovered. Below, we discuss the results in [29] in this setting. While seeking time-harmonic solutions for the sloshing problem with surface tension, the authors use the same ansatz as ours for the free surface height $\tilde\eta$ but a slightly different one for the velocity potential $\tilde\varphi$. They choose $\tilde\varphi(x, y, z, t) = \omega\,\varphi(x, y, z)\cos(\omega t)$. The sloshing problem with surface tension then takes the form (22). One can show, by integrating (22a) against $\varphi$ and using the divergence theorem, an identity relating the Dirichlet energy of $\varphi$ to its Neumann data on $F$. The system (22) is studied as follows. Define the Neumann-to-Dirichlet operator $C$, restricted to $L^2_F(F)$. Physically, the operators $C$ and $B$ correspond to the kinetic energy and potential energy operators, respectively. It is proved that the fundamental eigenvalue $\omega_1^2$ in (24) has the variational characterization (25).

Comparing the variational formulations (25) and (6), we make the following observations. Both variational formulations share the constraint that the velocity potential and surface height have zero mean over $F$. In (6), $(\Phi, \xi)$ need only satisfy the single integral constraint $\langle \Phi, \xi\rangle_{L^2(F)} = 1$. However, in (25) each $\varphi$ must satisfy the constraint $C\xi = \varphi|_F$, i.e. they are solutions of the Laplace problem with Neumann data on $F$ equal to $\xi$.

5. Asymptotics

In this section, we consider the asymptotic limit where the Bond number, Bo, is large for the sloshing problem with surface tension (3). We first show that in the limit $\mathrm{Bo} \to \infty$, i.e. zero surface tension, we recover the variational characterization for the mixed Steklov-Neumann problem or sloshing problem (4), as derived by Troesch [61].

Proof. From Theorem 1.1, we know that $(\Phi_1, \xi_1)$ satisfies the constraint $\langle \Phi_1, \xi_1\rangle_{L^2(F)} = 1$ and the associated Euler-Lagrange equation (26) in the weak sense. Integrating (26) against $\xi_1$ over $F$, together with the constraint, yields $S[\xi_1] = \omega_1/2$; this also implies $D[\Phi_1] = \omega_1/2$. Defining $\tilde\Phi = \sqrt{\omega_1}\,\Phi_1$, integrating (26) against $\Phi_1$ over $F$, and using the constraint again yields the desired variational characterization (5).

We now investigate the asymptotic behavior of the eigenvalues of (3) in the limit where the Bond number is large. Let $\varepsilon = \mathrm{Bo}^{-1}$ and let $\omega(\varepsilon)$ be any eigenvalue satisfying (3) for a fixed $\varepsilon$. The previous result shows that $\omega(0)$ is an eigenvalue of the mixed Steklov-Neumann problem (4). The following result gives the first-order perturbation for a simple eigenvalue $\omega(\varepsilon)$ for $\varepsilon \ll 1$, i.e. $\mathrm{Bo} \gg 1$. For $n = 0$, it is not difficult to verify that the first-order term in the expansion of (32b) agrees with Theorem 5.2.

6. Discussion

We have considered the small-amplitude fluid sloshing problem for an incompressible, inviscid, irrotational fluid in a container, including effects due to surface tension on the free surface. As opposed to the zero surface tension case, where the problem reduces to a partial differential equation for the velocity potential, we obtain a coupled system for the velocity potential and the free surface displacement (3). In Section 4, we derived a new variational formulation of the coupled problem and established the existence of solutions using the direct method from the calculus of variations.
In the limit of zero surface tension, we recover the variational formulation of the classical mixed Steklov-Neumann eigenvalue problem (4), as derived by Troesch, and obtain the first-order perturbation formula for a simple eigenvalue.

As mentioned in Subsection 1.3, the location of high spots for the sloshing problem (4) has been investigated in two and three dimensions. Some results for specific container geometries are summarized as follows.

(1) If the wetted boundary, $B$, is the graph of a negative $C^2$-function given on $F$ and $B$ intersects $F$ at a nonzero angle, then the trace $\Phi_1(x, y, 0)$ attains its extrema only on the boundary of the rectangular free surface of the trough, $\partial F$ [33]. A similar result for the two-dimensional cross section is given in [32].

(2) Consider a bounded Lipschitz domain $D$ which is axisymmetric and convex, such that $D \subset F \times \{z \in (-\infty, 0)\}$. The boundary $\partial D$ consists of the free surface $F$, which is a disc of radius $a > 0$, and the wetted boundary $B$. If $\Phi_1(x, y, z)$ is odd in the $x$-variable, then the free surface height attains its extrema at $(\pm a, 0, 0)$ [34].

(3) Consider the ice fishing problem, where $D = \mathbb{R}^3_- = \{(x, y) \in \mathbb{R}^2,\ z \in (-\infty, 0)\}$ with free surface $F = \{x^2 + y^2 < b^2,\ z = 0\}$ and wetted boundary $B = \partial\mathbb{R}^3_- \setminus F$. It was shown in [32] that $\Phi_1$ attains its extrema in the interior of $F$.

Motivated by the ice fishing problem, axisymmetric, bulbous ($D \not\subset F \times \{z \in (-\infty, 0)\}$) containers are studied in [35] using finite element methods. It is observed that such domains have fundamental eigenfunctions with high spots in the interior of $F$. However, for this container geometry, $\hat n_{\partial F} \neq \hat n_B$ on $\partial F$, contrary to what is assumed in the physical derivation of the contact line boundary condition; see Section 2. Because $\mathrm{Bo} \to \infty$ is a singular limit, including the physical effects due to surface tension could result in qualitative changes in the sloshing modes near $\partial F$, including the location of high spots. These questions will be addressed in forthcoming work using computational methods, by investigating eigenfunctions near $\partial F$ for large but finite Bo.

In [61], the variational formulation (5) is used to find the shape of the axisymmetric container with fixed volume that maximizes the fundamental eigenvalue. In that work, it is assumed that (i) the container is very shallow and (ii) effects due to surface tension are neglected. It would be of interest to extend this work by addressing these two assumptions. In [62] it is shown that there exist vessel geometries, referred to as isochronous containers, with the remarkable property that the fundamental sloshing frequency of a fluid is independent of the level to which the container is filled. Such geometries are shown to exist not only for the fundamental mode but for higher modes as well. In this work, and in recent papers which significantly extend it [64,63,65], axisymmetric isochronous containers are found by using the inverse method of solution. It would be interesting to include the effect of surface tension in this work.

Appendix A. Next, we Taylor expand $\tilde\varphi$ and its derivatives around $z = 0$. This transforms the boundary conditions, (2c) and (2d), from $F_T$ to $F$. Consequently, the time-dependent linearized problem for (2) has the form
$$\Delta\varphi = 0 \ \text{ in } D, \tag{43a}$$
together with the linearized kinematic, dynamic, and contact-line boundary conditions, where $\varphi_t$, $\eta_t$ denote the partial derivatives of $\varphi$, $\eta$ with respect to time $t$.
Circular RNA–MicroRNA–mRNA interaction predictions in SARS-CoV-2 infection

Abstract. Different types of noncoding RNAs, like microRNAs (miRNAs) and circular RNAs (circRNAs), have been shown to take part in various cellular processes, including post-transcriptional gene regulation during infection. MiRNAs are expressed by more than 200 organisms, ranging from viruses to higher eukaryotes. Since miRNAs seem to be involved in host–pathogen interactions, many studies have attempted to identify whether human miRNAs could target severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) mRNAs as an antiviral defence mechanism. In this work, a machine learning based miRNA analysis workflow was developed to predict differential expression patterns of human miRNAs during SARS-CoV-2 infection. In order to obtain the graphical representation of miRNA hairpins, 36 features were defined based on the secondary structures. Moreover, potential targeting interactions between human circRNAs and miRNAs, as well as human miRNAs and viral mRNAs, were investigated.

Introduction

MicroRNAs (miRNAs) are noncoding RNAs involved in post-transcriptional gene regulation. The precursor miRNAs (pre-miRNAs) fold into characteristic hairpin structures that are used as the primary feature source in many bioinformatics approaches [1]. Another class of noncoding and endogenous RNAs is circular RNAs (circRNAs), which are generated by a unique splicing reaction known as back-splicing [2]. CircRNAs seem to be expressed in a widespread manner, and they have important regulatory functions, especially as sponges providing binding sites for miRNAs and RNA binding proteins [3] and as players in the regulation of alternative splicing [4]. According to the competitive endogenous RNA (ceRNA) hypothesis, RNA transcripts such as circRNAs, messenger RNAs (mRNAs), and long non-coding RNAs include miRNA response elements, and these transcripts compete among themselves for miRNA binding, thereby regulating the expression of each other [5]. Previous studies showed that not only miRNA but also circRNA expression changes during infections with both DNA and RNA viruses [6]. Although there is not much information about circRNAs' roles during infection with the emerging severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), infection with another coronavirus, Middle East respiratory syndrome coronavirus (MERS-CoV), resulted in expression changes of host circRNAs [3].

In this study, we used available differentially expressed miRNA information from SARS-CoV-2 infected cells to build a machine learning based model for prediction. In addition, a comprehensive circRNA-miRNA-mRNA targeting network analysis was performed to identify biologically significant processes in SARS-CoV-2 infection. Our results show that various cellular processes, including apoptosis, might be affected by the competition of cellular and viral RNAs. These findings could deepen our understanding of infection through RNA-mediated host-virus interactions and lead to the development of new strategies for antiviral agents.

Related works

Various studies have attempted to identify human miRNAs that could target viruses [7][8][9][10]. Although there are not many experimentally validated examples of miRNAs encoded by RNA viruses, computational predictions show that the SARS-CoV-2 genome could produce miRNAs that could target human mRNAs [11]. Currently there is not much information about the differences in expression levels of miRNAs during SARS-CoV-2 infection.
It has been shown that highly pathogenic MERS-CoV infection causes substantial changes in the expression of many host cell circRNAs, miRNAs, and mRNAs [3].

Architecture/implementation/workflow

All data analysis, machine learning, and prediction workflows were generated using the Konstanz Information Miner (KNIME) platform [12]. miRNA–target predictions were performed using the psRNATarget tool [13].

Graphical representation of RNA secondary structures

An RNA sequence can include four bases (A, G, C, and U) that can form base pairs such as A-U, G-C, and G-U. The RNAfold software from the Vienna package was used with default settings to create secondary structures [14]. For better representation, the nucleotides involved in base pairs are shown as A, G, C, and U in Figure 1, while non-base-paired ones are shown as A′, G′, C′, and U′, respectively. The workflow generated in KNIME uses the RNA sequence and the dot-bracket representation of the secondary structure to modify bases of the sequence into uppercase and lowercase characters [15]. Following Zhang et al., we used the same base grouping scheme and defined three maps 1, 2 and 3 (Figure 1), where n is the length of the hairpin sequence and i is the index of a base in the sequence. In order to represent the miRNA hairpin secondary structure as a vector, a 36-dimensional vector was calculated based on the definitions from Figure 1, as shown in Figure 2.

Data sets

Human miRNA sequences were obtained from miRBase (Release 22.1) [17], the human circRNA data set was downloaded from circAtlas 2.0 [18], and SARS-CoV-2 CDS were based on RefSeq_NC_045512.2 from NCBI. The differentially expressed miRNA list was based on the results of Chow and Salmena [19] with some changes: since their list is composed of mature miRNAs, we used the hairpin sequences of those available (Table 1: the list of miRNAs used for training of the differential expression prediction).

Results

The differential expression prediction workflow was created using 70% learning and 30% testing ratios, and three different classifiers, random forest (RF), support vector machine (SVM), and multilayer perceptron (MLP), were trained with 100-fold MCCV [20] (Figure 3). Among 2654 mature human miRNAs available in miRBase, 2498 were involved in 272,822 total targeting events with 18,950 human genes; 2498 were involved in 393,877 total targeting events with 208,642 circRNAs; and 484 miRNAs targeted 11 SARS-CoV-2 genes. Some of the miRNAs reported as differentially expressed in Calu3 cells infected with SARS-CoV-2 or mock from GSE148729 did not have any predicted targets (Table 2). The upregulated human miRNA hsa-miR-6891-5p might target not only human genes and circRNAs but also the ORF3a gene of SARS-CoV-2 (Table 2). PANTHER Gene Ontology analysis [21] of human gene targets showed that various biological processes could potentially be affected by the actions of this miRNA (Figure 4).

Discussion

Inter-kingdom communication mechanisms mediated by RNAs have been investigated for several organisms, including a variety of viruses, Toxoplasma gondii (a protozoan eukaryotic parasite) [22], and Histoplasma capsulatum (an infectious fungus) [23]. Viruses are parasites that depend on their host for many of their processes. Usually, viral infections result in alterations of cellular pathways to modulate viral gene expression and/or accommodate the virus in a favourable environment. In some cases, e.g. SARS-CoV-2 infection, host post-transcriptional gene regulation elements like miRNAs might also show differential expression levels during infection [19].
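The discussion returns to the classification workflow next; for concreteness, here is a minimal sketch of the training protocol described in the Results (three classifiers, 70/30 learning/testing splits, 100-fold Monte Carlo cross-validation). The random data stands in for the hairpin feature vectors and expression labels, which are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import ShuffleSplit, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 36))            # placeholder 36-dim feature vectors
y = rng.integers(0, 2, size=60)          # placeholder up/down labels

# 100-fold Monte Carlo cross-validation with 70% learning / 30% testing.
mccv = ShuffleSplit(n_splits=100, train_size=0.7, test_size=0.3, random_state=0)
models = {
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=mccv, scoring="accuracy")
    print(f"{name}: accuracy {scores.mean():.3f} +/- {scores.std():.3f}")
```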
In this study, we analysed such human miRNAs (Table 1) to build a machine learning based workflow that might be used for prediction of expression changes of miRNAs during SARS-CoV-2 infection. Among the 300 models generated, the highest accuracy value was observed with the RF classifier (Figure 3). When applying machine learning approaches to miRNA datasets, there are various elements that affect the overall performance [24]. Among them, the feature sets [25,26] and the quality of the data [27] might be the most important. As more datasets become available, the workflow can easily be updated to include them, and it is also possible to use this workflow for any kind of differentially expressed miRNAs.

Not much is known about the individual functions of circRNAs, but they are acknowledged as sponges providing binding sites for miRNAs and some RNA-binding proteins [28]. The activities of host circRNAs have been investigated in Hepatitis C virus-infected cells [6] and in MERS-CoV infection [3]. We performed a comprehensive target prediction analysis for human miRNAs to measure their capacity to bind human mRNAs, human circRNAs, and SARS-CoV-2 genes. Based on the results presented in Table 2, SARS-CoV-2 ORF3a is the only viral target of upregulated human miRNAs. Since the ORF3a protein is associated with apoptosis, which is an essential mechanism of host antiviral defence to control viral infection [29], upregulation of hsa-miR-6891-5p might be crucial to decrease ORF3a expression during certain stages of infection. Out of the 2498 miRNAs that have predicted targets, 2448 had more targets among circRNAs, 27 had more among mRNAs, and 23 miRNAs had an equal number of targets in both groups. If the mRNA and circRNA targets of specific miRNAs are co-expressed, there might be competition for miRNA binding; considering the wide range of biological processes covered by a single miRNA's targets (Figure 4), the circRNA-miRNA-mRNA network could play important roles in overall gene expression, especially when a new set of candidate target genes appears during viral infections.
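Finally, to illustrate the secondary-structure featurization described in the methods: RNAfold emits a dot-bracket string alongside the sequence, from which paired and unpaired bases can be distinguished. The exact 36 features follow Figure 2, which is not reproduced above, so the feature set below (index-map-weighted sums and frequencies of the eight base states, plus two global features) is only an illustrative stand-in with a different dimension; the three index maps are likewise assumed.

```python
import numpy as np

STATES = list("AGCU") + list("agcu")   # paired uppercase, unpaired lowercase

def annotate(seq, dotbracket):
    """Mark paired bases (matching '(' or ')') uppercase, unpaired lowercase."""
    return "".join(b if s in "()" else b.lower()
                   for b, s in zip(seq.upper(), dotbracket))

def feature_vector(seq, dotbracket):
    ann = annotate(seq, dotbracket)
    n = len(ann)
    idx = np.arange(1, n + 1) / n
    maps = [idx, idx**2, np.sqrt(idx)]               # assumed maps 1-3
    feats = []
    for m in maps:                                   # 3 maps x 8 states = 24
        for state in STATES:
            mask = np.array([c == state for c in ann])
            feats.append(m[mask].sum() / n)
    feats += [ann.count(s) / n for s in STATES]      # + 8 state frequencies
    feats += [n / 200.0, dotbracket.count("(") * 2 / n]  # + 2 global features
    return np.array(feats)               # 34-dim stand-in, not the exact 36

# Toy hairpin (made-up sequence and structure, for shape only).
print(feature_vector("GGGAAAUCCC", "(((....)))").round(3))
```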
Status and predictors of planning ability in adult long-term survivors of CNS tumors and other types of childhood cancer

Abstract. Long-term childhood cancer survivors' (CCS) quality of life can be impacted by late effects such as cognitive difficulties. Especially survivors of CNS tumors are assumed to be at risk, but reports of cognitive tests in CCS with survival times >25 years are scarce. We assessed planning ability, a capacity closely related to fluid intelligence, using the Tower of London. We compared 122 CNS tumor survivors, 829 survivors of other cancers (drawn from a register-based sample of adult long-term CCS), and 215 healthy controls (using sex-specific one-way ANOVAs and t-tests). Associations of CCS' planning ability with medical and psychosocial factors were investigated with a hierarchical linear regression analysis. Mean planning ability did not differ between CCS and controls. However, female CNS tumor survivors performed worse than female survivors of other cancers and female controls. CNS tumor survivors of both sexes had a lower socioeconomic status, and fewer of them had achieved high education than other survivors. In the regression analysis, lower status and anxiety symptoms were associated with poor planning, suggesting possible mediators of effects of disease and treatment. The results indicate the necessity to contextualize test results, and to include cognitive and psychological assessments in aftercare.

As medical advances have greatly increased survival rates, more than 80% of children diagnosed with cancer will become long-term survivors. However, long-term childhood cancer survivors (CCS) run the risk of suffering from late effects related to disease and treatment, such as cardiovascular disease, cognitive impairments, and emotion regulation difficulties [1][2][3]. Thus, important aims of medical and psychological research are the identification of vulnerable subgroups of long-term survivors and the characterization of their challenges 4,5 . Previous research suggests that survivors of tumors of the central nervous system (CNS) are an especially jeopardized group of CCS 6,7 . CNS tumors are the most common solid malignancies in childhood. Tumor growth and multimodal treatment can pose risks for debilitating decreases in cognitive functioning, since children's neuraxis and CNS structure are still developing 8 . Neurocognitive abilities warrant investigation as they play an important role in individuals' developmental trajectories, e.g. academic achievement and quality of life 9 . As the first large cohorts of long-term survivors now reach middle adulthood, studying their situation can inform long-term survivorship care programs which aim to target those who are especially in need. However, research has yielded inconclusive results regarding the cognitive capacities of extremely long-term CCS in general and CNS tumor survivors in particular. Estimates of CCS' cognitive impairments vary between and within studies: based on a questionnaire 10 , more than 20% of 1426 adult childhood cancer survivors reported difficulties pertaining to task efficiency, emotional regulation, organization, and memory 11 . According to treatment-exposure based medical assessments within a large American cohort, up to 60% of 1713 long-term survivors of all cancer types were deemed at risk for neurocognitive shortfalls 12 .
A recent investigation of 224 long-term survivors of CNS cancer used a standardized set of neurocognitive tests 13 and reported wide-ranging rates for severe impairment (8-57%), depending on the task and participants' treatment exposure. In addition to diverging methods of assessment, comparisons between different reports of cognitive late effects are complicated by the fact that childhood and adolescence are periods of fundamental cognitive development. Thus, varying ages at diagnosis and treatment and varying follow-up times contribute to the disparity of outcomes. Furthermore, previous research has often omitted testing potential mediators or moderators of the effects of disease and treatment which shape individual development and adaptation over the lifespan 1,14 . Among these are pretreatment aspects (cancer severity, tumor localization, age at diagnosis, sex 8 ), psychosocial factors (socioeconomic status (SES), age, education, social support 1 ), and other currently relevant circumstances (time elapsed since treatment, physical health status, and health behavior 15 ). There is evidence from other samples that these aspects impact cognitive performance. For example, in a representative sample drawn from the general population, anxious individuals showed worse cognitive performance 16 . Thus, an investigation of cognitive abilities benefits from testing their contributions, especially as CNS tumor survivors have been shown to be at risk for adverse health conditions 17 , risky health behaviors 1 , and unfavorable psychosocial outcomes 7,8 . The relative contribution of risk and protective factors may also change over time. While disease-related biological and treatment factors may have a more substantial impact following diagnosis and acute treatment, the psychosocial context might subsequently shape adjustment over the lifespan 18 and affect long-term psychological functioning. Additionally, cognitive dysfunction can progress over time, accelerate aging, and limit restorative capacities 1 . Behavioral assessments of CCS with survival times >25 years, however, are scarce 1,19,20 .

Aims of the present study

We measured planning ability in a large, register-based sample of adult long-term CCS with an objective neurocognitive test. Planning denotes a form of problem solving, i.e. the mental conception and evaluation of behavioral sequences and their outcomes prior to execution 16 . It is a complex executive function depending on prefrontal cerebral areas closely linked to fluid intelligence 21 . Planning is essential for goal-directed behavioral control, e.g. in the realm of academic attainment. On this basis, the main aims of the present study were twofold. First, we assessed the status of CCS' planning ability: by comparing their performance to healthy controls, and also by comparing the performance of CNS tumor survivors, survivors of other cancers, and healthy controls. Second, we tested predictors of planning ability in all childhood cancer survivors.

Participants. The nationwide German Childhood Cancer Registry (GCCR) has systematically documented patients with childhood cancer residing in Germany and treated at one of 34 centers since 1980 22 .
A total of 2,894 survivors diagnosed with neoplasia according to the International Classification of Childhood Cancer (ICCC-3) between 1980 and 1990 were invited to take part in the studies CVSS (Cardiac and Vascular late Sequelae in long-term Survivors of childhood cancer, clinicaltrials.gov-nr.: NCT02181049) and PSYNA (Psychosocial long-term effects, health behavior and prevention among long-term survivors of cancer in childhood and adolescence). Survivors of Hodgkin lymphoma and a small group of former nephroblastoma patients could not be enrolled as they had taken part in other trials. Between 2013/09 and 2016/02, 1,002 CCS were examined. We excluded 51 individuals due to subsequent malignant neoplasms. The sample covered the largest diagnosis groups, following the ICCC-3's classification 23 , as previously reported 2 . For comparisons with healthy controls (excluding neurologic and psychiatric disease), we used previously reported population-based results 24 from 830 participants (16-80 years) recruited as part of norm and validation studies. To achieve a comparable age range, we excluded subjects below 20 and above 49 years of age, yielding a remaining sample of N = 215.

Materials

Cognitive test: Tower of London. We used the Tower of London (ToL) 25 to assess planning ability. The setup consists of differently colored balls placed on rods of different lengths. The task requires participants to plan ahead in order to transform a given start state into a defined goal state in an efficient way by performing the minimum number of moves. The present study used a touchscreen version 16 . Participants were asked to solve 24 problems in 20 minutes. The test set included eight problems each requiring four, five, and six moves, with a monotonic increase in problem difficulty, and possesses good reliability 24,26 . The main performance measure is solution accuracy, the percentage of problems solved correctly (0-100).

Disease and treatment data. CCS' cancer- and treatment-related information was abstracted from primary health records of the former treating medical centers and/or centrally documented individual therapy data available at the Society for Pediatric Oncology and Hematology's (GPOH) study centers. It was validated by trained medical staff.

Present somatic illnesses. All CCS completed a standardized 5.5-hour examination, including cardiovascular and clinical phenotyping and a computer-assisted personal interview, and filled out questionnaires 2,27 . We assessed chronic obstructive pulmonary disease (COPD), kidney disease, cardiovascular disease (CVD; summarizing myocardial infarction (MI), heart failure (HF), stroke, deep vein thrombosis (DVT), pulmonary embolism (PE), and peripheral arterial disease (PAD)), and diabetes (definite diagnosis of diabetes by a physician, or a blood glucose level of ≥126 mg/dl in the baseline examination after an overnight fast of ≥8 hours, or a blood glucose level of >200 mg/dl after a fasting period of <8 hours).

Sociodemographic and psychological measures. Socioeconomic status was defined according to Lampert and Kroll 28 . The aggregated index ranges from 3 (lowest) to 21 (highest), based on the level of education, profession, and income. We used the Patient Health Questionnaire (PHQ-9) to measure depression symptoms.
Participants are asked to state the frequency of being bothered by each of the 9 diagnostic criteria of major depression over the past 2 weeks (0 = not at all, to 3 = nearly every day), yielding a sum score between 0 and 27. Caseness is defined as a sum score of ≥10, which has achieved a sensitivity of 88% and a specificity of 88% for detecting major depression 29 . Generalized anxiety symptoms were assessed with the two screening items of the short form of the Generalized Anxiety Disorder Scale (GAD-7). The occurrence of "Feeling nervous, anxious or on edge" and "Not being able to stop or control worrying" over the last two weeks was rated on a Likert scale (0 = not at all, to 3 = nearly every day) and assessed generalized anxiety with good sensitivity (86%) and specificity (83%) 30 . Physical activity was inquired about with the Short Questionnaire to Assess Health-Enhancing Physical Activity (SQUASH), assessing the contexts of commuting, leisure time, household, work, and school activities. Sleeping, lying, sitting, and standing were classified as inactivity 31 .

Statistical analyses. With respect to planning ability, we used sex-specific independent t-tests to explore differences between the CCS and controls, and sex-specific one-way ANOVAs to compare CNS tumor survivors, survivors of other cancers, and controls. We chose this strategy as previous research has attested to the association of sex and performance in the Tower of London 16,24,32-34 . Further, sex-dependent vulnerabilities to late effects of childhood cancer are another argument for sex-specific or sex-sensitive analysis methods 7,35 . To ascertain the links of disease-related and disease-unrelated aspects with planning ability, we tested hierarchical multiple linear regression models with ToL solution accuracy as the criterion. Hierarchical linear regression allows for testing whether the introduction of new predictors (such as current mental distress symptoms) adds to the explanation of the dependent variable's variance (here: planning ability) in a statistically significant way after accounting for all other variables. As we are not aware of previous work investigating this set of predictors alongside each other in conjunction with CCS' planning ability, we chose this exploratory method. Each block added new variables to the model. The final model (after the addition of block 6) contained the following variables: CNS tumor diagnosis, chemotherapy, radiotherapy, age at diagnosis, sex, age, socioeconomic status, depression symptoms, anxiety symptoms, somatic illnesses, active sports, and smoking. Regression models were checked for multicollinearity using the variance inflation factor (VIF). All VIF scores were below 4 (10 being the critical threshold 36 ), indicating no concerning level of multicollinearity. A sensitivity analysis of the sample size was performed using the calculator provided by Soper 37 . A sample size of 94 was required to observe an anticipated mean effect size of f² = 0.21 of CNS tumor survival on cognitive performance 20 , taking all 12 predictors of the final model into account. P-values correspond to two-tailed tests and, in the case of univariate comparisons, are supplemented by effect sizes (Cohen's d). Statistical analyses were performed using SPSS 23 for Windows.

Results

Characteristics of the CCS sample. The mean age of all CCS was 34.05 years (SD = 5.56), with a mean age at diagnosis of 6.14 years (SD = 4.28). Table 1 shows the sample characteristics stratified by diagnosis and sex.
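Before the group comparisons, here is a brief code sketch of the blockwise hierarchical regression and VIF screening described under Statistical analyses. The block composition, variable names, and simulated data are placeholders chosen to mirror the final model's twelve predictors; none of it is the study's data.

```python
import numpy as np, pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(1)
n = 724
df = pd.DataFrame({
    "cns_tumor": rng.integers(0, 2, n), "chemo": rng.integers(0, 2, n),
    "radio": rng.integers(0, 2, n),     "age_dx": rng.uniform(0, 15, n),
    "sex": rng.integers(0, 2, n),       "age": rng.uniform(24, 48, n),
    "ses": rng.uniform(3, 21, n),       "phq9": rng.poisson(4, n),
    "gad2": rng.poisson(1, n),          "n_illness": rng.poisson(1, n),
    "sports": rng.integers(0, 2, n),    "smoking": rng.integers(0, 2, n),
})
# Toy solution-accuracy outcome driven by SES and anxiety, as in the findings.
y = 70 + 0.5 * df["ses"] - 0.8 * df["gad2"] + rng.normal(0, 10, n)

blocks = [["cns_tumor", "chemo", "radio", "age_dx"], ["sex", "age"], ["ses"],
          ["phq9", "gad2"], ["n_illness"], ["sports", "smoking"]]  # assumed split
prev, cols = None, []
for i, block in enumerate(blocks, 1):
    cols += block
    fit = sm.OLS(y, sm.add_constant(df[cols])).fit()
    if prev is None:
        print(f"block {i}: R2 = {fit.rsquared:.3f}")
    else:  # nested-model F-test: does the new block add explained variance?
        f_val, p_val, _ = fit.compare_f_test(prev)
        print(f"block {i}: R2 = {fit.rsquared:.3f}, delta-R2 p = {p_val:.3f}")
    prev = fit

X = sm.add_constant(df[cols])
vifs = [variance_inflation_factor(X.values, j) for j in range(1, X.shape[1])]
print("max VIF:", round(max(vifs), 2))   # study criterion: all VIF < 4
```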
Differences between CNS tumor survivors and survivors of other cancers pertained to SES and level of education, which were lower in CNS tumor survivors. A larger percentage of CNS tumor survivors had received neither chemo- nor radiotherapy, or only radiotherapy. Between CNS tumor survivors and other cancer survivors, there were no differences with respect to current age at examination, employment status, the sum scores of depression or anxiety symptoms, or health-related aspects such as smoking, physical activity, or the number of somatic illnesses.

Planning ability. Status of planning ability in CCS vs. controls. With respect to performance in the ToL, there were no overall differences between male CCS and male controls, or between female CCS and female controls (Table 2).

Status of planning ability in CNS tumor survivors vs. survivors of other cancers vs. controls. Analyses of variance comparing the two CCS groups' and healthy controls' solution accuracy scores (Table 3) showed that within men, there was no significant group effect (F(2,738) = 2.329, p = 0.098). Within women, there was an effect (F(2,660) = 3.911, p = 0.021), and post-hoc tests revealed that, apart from the significant differences between CNS tumor survivors and female controls, female CNS tumor survivors also performed worse than other CCS. (No differences pertained to thinking times and movement execution times in CNS survivors and other CCS.) Mean solution accuracy scores of all groups of participants are shown in Fig. 1.

Regression analysis. The hierarchical linear regression analysis of solution accuracy is reported in Table 4. In step 4, the first model explaining a statistically significant proportion of variance of the criterion, the predictive power of radiotherapy lost statistical relevance. Significant predictors in the final model (R² = 0.024, F(12,711) = 2.485, p = 0.003, f² = 0.025) were SES, which was positively related to performance, and anxiety symptoms, with a negative association. Variables directly related to the illness or its treatment were no longer statistically relevant predictors in the final model.

Discussion

The study adds to the scarce body of research which has investigated cognitive performance in adult long-term CCS using a validated cognitive test. It characterizes the relative disadvantages of CNS tumor survivors in a sex-specific way and juxtaposes cognitive test results with measures of societal attainment. Furthermore, the present study is the first to investigate the interrelatedness of CCS' cognitive ability with disease- and treatment-related factors, current mental distress, somatic illnesses, health behavior, and socioeconomic status. It therefore allows an evaluation of how closely different kinds of late effects are associated with cognitive abilities. Our first research question was whether CNS tumor survivors as a whole were a particularly disadvantaged group. With respect to cognitive performance, our results contrast with previous reports of general, overarching cognitive difficulties of adult long-term CCS (who were compared with their siblings based on a self-report measure) 38 . Sex-specific analyses showed that female CNS survivors were at a disadvantage. Hence, they might have been affected more severely by disease and treatment than male CNS tumor survivors. This finding is in line with previous reports of more pronounced academic deficits in female CNS survivors 39 .
It has been speculated that girls might be more radiosensitive than boys, but the biological basis is uncertain 40 . However, the observed differences were small. Larger differences between CNS tumor survivors and other CCS pertained to the achievement of high education and high socioeconomic status, and applied to both sexes. These results highlight the tangible long-term consequences of childhood cancer for societal attainment 8,15,41 . They are compatible with other large-scale investigations, e.g. carried out in the United States 42 , and thus show that the association of cancer treatment and survival with diminished personal wealth is not limited to the American health care system. Our second main result was that poor cognitive performance was associated with mental distress symptoms and lower SES. This mirrors previous results from other samples, including the general population 16,32,43,44 . Among CCS, this finding is of particular importance, as psychological morbidities have been implicated as late effects 7,45 . The hierarchical regression analysis allowed for a comparison of the strength of the associations between planning ability and different variables directly related to disease and treatment, and aspects implicated as late effects. Our results attest to a close relationship of sex, age, and socioeconomic status with cognitive capacities. The relatively low relevance of the cancer diagnosis and its treatment is in line with the notion that direct effects of disease and treatment may cease over time and be mediated or moderated by other factors which are more relevant to an individual's well-being decades later. Thus, groups of CCS for whom adjustment to life after cancer is a challenge might not be defined by their age at diagnosis or treatment exposure. Instead, among long-term survivors, psychosocial factors (such as poverty) could better identify those who are in need of the most support. Another clinical implication of our results is that results of cognitive tests (e.g. carried out in the context of long-term aftercare) should be put into context: assessments of CCS' cognitive functioning should be interpreted with additional information in mind, for example with respect to common late effects such as psychological distress 46 . A previous study in leukemia survivors has also highlighted the connection of weak cognitive capacities and poor physical and psychological quality of life 9 . Correspondingly, there is evidence that cognitive screening and training can positively influence childhood cancer survivors' developmental trajectories 1 . Importantly, improvements in cognitive functioning co-occurred with favorable changes in participants' quality of life 47 . Thus, there is a need for the advancement of evidence-based treatments, and efforts should be undertaken to make them available to all CCS irrespective of diagnosis and treatment exposure. Further, longitudinal studies testing the effects of psycho-oncological interventions should supplement measures of symptom burden with cognitive tests and other functional outcomes. Beyond the extension of knowledge about interventions' modes of action, the paramount questions should be whether observed changes are sustainable, and whether they make a difference in CCS' lived outcomes.

Study limitations. The reported findings need to be considered in the light of several limitations.
First, there is a lack of more detailed information regarding CCS' radiation dosage and chemotherapeutic agents. Further, the investigated CNS tumor survivor group was rather small, and there were no data concerning comorbidities (e.g. blindness, deafness, hydrocephalus). These individual differences might also have affected cognitive performance in the present task, mental distress, and societal attainment. While comparisons with controls considered the variables age and sex, there were no data on controls' education. Another limitation is that we could not consult age-adjusted norms for the employed test. Thus, other CCS must be deemed the most valid comparison group to CNS tumor survivors, as they are part of the same cohort and underwent identical assessments. The long follow-up time introduces a survival bias, especially as CNS tumor survivors are at high risk for late mortality. Incidence rates show that CNS tumors constituted 20.7% of childhood cancer diagnoses from 1987 to 2004 in Germany48. As they made up only 12.8% of our sample, the most heavily burdened individuals might have died at an earlier point or not have been able to participate, cautioning against generalization of our results. Although planning is a higher-order executive function with relations to numerous relevant outcomes, broader cognitive assessments could yield more information about specific strengths and weaknesses. Lastly, we did not know participants' SES before cancer diagnosis and treatment during childhood/adolescence. In general, the present investigation does not allow for inferences with respect to cause and effect within this sample. Longitudinal projects (e.g. starting in childhood and as close to the time of diagnosis of childhood cancer as possible) are needed to clarify the direction of the relationship between emotional well-being, cognitive abilities, and societal attainment in this sample.

Conclusions
The present study reports results of a neurocognitive test in a large sample of long-term CCS with survival times surpassing 20 years. We did not confirm cognitive impairment with respect to the whole group of CCS. Female CNS tumor survivors might be a vulnerable group, but the observed deficits in planning performance were small. Much larger effect sizes pertained to male and female CNS tumor survivors' lower societal attainment. Our results also suggest that over the course of development, the direct effects of disease and treatment might be altered or superimposed by other variables. Furthermore, the interrelatedness of societal attainment, mental distress, and cognitive abilities highlights the need to assess CCS' life situation in a comprehensive way. At the same time, it underscores the potential offered by systematic screening and intervention efforts, for instance as part of long-term follow-up care: adequate support can alleviate suffering and also enable survivors to lead an independent and successful life.

Data Availability
The written informed consent given by the study participants does not allow public access to the data, and such access was not approved by the local data protection officer and ethics committee. Access to the data in the local database, in accordance with the ethics vote, is offered upon request at any time. Interested researchers may direct their requests to the Principal Investigators of the CVSS/PSYNA study (Philipp.Wild@unimedizin-mainz.de).
2019-05-14T14:37:38.022Z
2019-05-13T00:00:00.000
{ "year": 2019, "sha1": "1e1ec6459b762599c4d4b2303f8758d00c6fa3c3", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-019-43874-4.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1e1ec6459b762599c4d4b2303f8758d00c6fa3c3", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
236930294
pes2o/s2orc
v3-fos-license
A Newly Identified lncBCAS1-4_1 Associated With Vitamin D Signaling and EMT in Ovarian Cancer Cells

Long noncoding RNAs (lncRNAs) have been identified at a rapid pace owing to their important roles in many biological processes and human diseases, including cancer. 1α,25-dihydroxyvitamin D3 [1α,25(OH)2D3] and its analogues are widely applied as preventative and therapeutic anticancer agents. However, the expression profile of lncRNAs regulated by 1α,25(OH)2D3 in ovarian cancer remains to be clarified. In the present study, we found 606 lncRNAs and 102 mRNAs that showed differential expression (DE) based on microarray data. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis indicated that the DE genes were mainly enriched in the TGF-β, MAPK, Ras, PI3K-Akt, and Hippo signaling pathways, as well as the vitamin D-related pathway. We further searched for lncRNAs that might link vitamin D signaling with EMT, and lncBCAS1-4_1 was identified for the first time. Moreover, we found that the most upregulated lncRNA, lncBCAS1-4_1, shares 75% of its transcript sequence with CYP24A1 (the metabolic enzyme of 1α,25(OH)2D3). Finally, lncBCAS1-4_1 loss- and gain-of-function cell models were established, which demonstrated that knockdown of lncBCAS1-4_1 inhibited the proliferation and migration of ovarian cancer cells. Furthermore, lncBCAS1-4_1 could resist the antitumor effect of 1α,25(OH)2D3, which was associated with upregulated ZEB1. These data provide new evidence that lncRNAs can serve as targets for the antitumor effect of 1α,25(OH)2D3.

INTRODUCTION
Ovarian cancer is the leading cause of death among gynecologic malignancies (1). Despite significant medical advances during the past decades, the 5-year survival rate of ovarian cancer remains lower than 50% (2). Long noncoding RNAs (lncRNAs), transcripts more than 200 nucleotides in length, have been suggested to play fundamental roles in tumor development (3), because lncRNAs may exhibit tumor-suppressive or tumor-promoting functions through the regulation of transcription, translation, and protein modification, and through the formation of RNA-protein or protein-protein complexes (4). Therefore, lncRNAs are possible candidates for cancer biomarkers and/or therapeutic targets (5). Recently, 13 published papers investigated the expression of lncRNAs in normal ovaries, ovarian cysts, and benign and malignant ovarian tumors (6)(7)(8)(9)(10)(11)(12)(13)(14)(15)(16)(17)(18), suggesting an important role of lncRNAs in ovarian cancer development and in the chemotherapeutic survival outcomes of patients. Thus, it is important to explore the potential of lncRNAs as therapeutic targets in ovarian cancer.

In the present study, lncRNA and mRNA networks were constructed using microarray data and used to comprehensively explore the lncRNA profile of 1α,25(OH)2D3-treated human ovarian cancer SKOV3 cells. Moreover, candidate lncRNAs linking vitamin D signaling with EMT were analyzed, and lncBCAS1-4_1 was identified. In addition, the effect of lncBCAS1-4_1 on proliferation and migration in 1α,25(OH)2D3-treated ovarian cancer cells was investigated.

Microarray Expression Profiling
SKOV3 cells were treated with 1α,25(OH)2D3 (100 nmol/L) or vehicle (the same concentration of ethanol) for 72 h. Total RNA was extracted with TRIzol reagent (Thermo Fisher Scientific, Scotts Valley, CA, USA) and quantified using a NanoDrop ND-2000 (Thermo Fisher Scientific).
After RNA integrity was assessed using an Agilent Bioanalyzer 2100 (Agilent Technologies, CA, USA), sample labeling, microarray hybridization, and washing were performed following the manufacturer's standard protocols (OE Biotech Company, Shanghai; Design ID: 076500). Briefly, total RNA was transcribed to double-stranded cDNA and then synthesized into cRNA, which was labeled with Cyanine-3-CTP. The labeled cRNAs were hybridized onto the microarray. After washing, the arrays were scanned with the Agilent Scanner G2505C (Agilent Technologies).

Differentially Expressed Gene Analysis
The limma package (version 3.8) in R was used to identify the differentially expressed mRNAs (DE-mRNAs) and lncRNAs (DE-lncRNAs) with a threshold of |log2(fold change [FC])| > 2.0 and a false discovery rate [FDR (adjusted p-value)] < 0.05. The heatmap and volcano plot were constructed with the gplots package in R.

Functional Enrichment Analysis
To reveal the functions of the DE genes, the Enrichr database was used to conduct Gene Ontology (GO) annotation and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses (27). The GO terms comprise three divisions: biological process (BP), cellular component (CC), and molecular function (MF). A significance level of p < 0.05 was set as the cutoff criterion, and the plots were constructed with the gplots package in R. A PPI network of the DE mRNAs was constructed using STRING 11.0 (http://string-db.org), with a combined score > 0.9 as the cutoff value. Significant modules in the PPI network were identified using MCODE 1.5.1 (a Cytoscape plug-in).

Construction of the lncRNA-mRNA Co-Expression Modules
The lncRNA and mRNA co-expression modules were selected using Pearson correlation analysis. The lncRNA-mRNA pairs with a correlation coefficient > 0.9 and p < 0.05 were used for bidirectional clustering.

Quantitative Real-Time PCR
Reverse transcription reactions consisted of 1 μg RNA and 2 μL of 5× PrimeScript RT Master Mix (TaKaRa, Japan) in a total volume of 10 μL. The primer sequences are shown in Table 1. Reactions were performed in a C1000 PCR System (Bio-Rad, Hercules, CA, USA) for 15 min at 37°C. GAPDH was used as the internal control. qPCR was performed using the SYBR Green (Roche, Basel, Switzerland) dye detection method on an ABI 7500 PCR instrument (Applied Biosystems, Foster City, CA, USA) under default conditions: 95°C for 10 s, then 40 cycles of 95°C for 5 s and 55°C for 30 s. Relative gene expression levels were analyzed by the 2^(-ΔΔCt) method, where ΔCt = Ct(target) - Ct(GAPDH).

Construction of the lncBCAS1-4_1 Loss/Gain Cell Model
Overexpression adenoviruses (OE) as well as control adenoviruses (empty vector, EV) for lncBCAS1-4_1 were purchased from GeneChem Corporation (Shanghai, China). Knockdown of lncBCAS1-4_1 was produced by siRNA interference. The scramble control and si-lncBCAS1-4_1 were purchased from RiboBio Co., Ltd (Guangzhou, China) and transfected using riboFECT CP (Guangzhou, China) according to the manufacturer's instructions. All oligonucleotide sequences are listed in Table 2.

Cell Proliferation Assay
Cell colony formation and CCK-8 counting were used to assay cell proliferation. Briefly, 3×10^4 OVCAR8 or 2.5×10^4 SKOV3 cells were seeded onto 60-mm culture plates and transfected with adenoviruses or siRNAs for 72 h.
After the cells were treated with 1α,25(OH)2D3 (100 nmol/L) or ethanol for 48 h, they were fixed with 75% alcohol and stained with 0.3% methyl violet for 20 min at room temperature. The colonies were then dissolved in glacial acetic acid, and the absorbance value (AU) was detected at 585 nm with a microplate reader (FilterMax F5, Molecular Devices, CA, USA). The cell proliferation ratio was calculated as (AU_treatment group - AU_blank group)/(AU_control group - AU_blank group). All experiments were performed in triplicate.

According to the manufacturer's instructions, approximately 3×10^3 OVCAR8 or 2.5×10^3 SKOV3 cells per well were plated in triplicate into 96-well plates and treated with 1α,25(OH)2D3 for 48 h. The control group was treated with ethanol. At each of the desired time points, 10 μL of the CCK-8 solution was added and incubated for 1 h at 37°C, followed by measurement of absorbance at 450 and 630 nm with a microplate reader to quantify the relative cell density. Cell viability was calculated as (AU_450-treatment group - AU_630-treatment group)/(AU_450-control group - AU_630-control group). All experiments were performed in triplicate.

Cell Migration Assay
Cell migration was assessed using a wound healing assay. Cells were plated into a 6-well plate with FBS-free medium for 12 h. Afterwards, the cell layer at the bottom of the well was scratched with a pipette tip to create a wound area. After 24 and 48 h, the wounds (three images per well) were imaged under a microscope (40×, CKX41F, Olympus, Tokyo, Japan) to measure the width of the gaps. Wound healing data are displayed as the migration index (%), calculated as [(initial width) - (final width)]/(initial width). Values were normalized to the control group. Data points in the figure represent three independent experiments.

Statistical Analysis
All microarray statistical data were analyzed in the R environment (R version 3.6.3). The Wilcoxon/Mann-Whitney test was used to analyze continuous variables, and Fisher's exact test or the chi-square test was used to analyze categorical data. Experimental data were analyzed using GraphPad Prism 8 (GraphPad Software Inc., La Jolla, CA, USA). Quantitative data are presented as the mean ± standard deviation (SD). Statistical comparisons were made using an unpaired Student's t-test. For all statistical analyses, a p-value less than 0.05 was regarded as statistically significant.

To further validate the findings of the microarray analysis, five dysregulated lncRNAs were confirmed using quantitative RT-PCR. lnc-BCAS1-4_1 and lnc-RWDD4-5_1 were selected as the target lncRNAs with the most upregulated/downregulated expression. Lnc-ZNF599-3_6 was selected for its potential trans-regulating function, and the other two (lnc-MBOAT1-4_2 and lnc-KRT7-2_2) were randomly selected. Consistently, their expression levels measured by quantitative RT-PCR were similar to those from the microarray analysis (Figure 1E). Similarly, the transcriptional levels of lncBCAS1-4_1 and CYP24A1 were indeed dramatically increased after 1α,25(OH)2D3 treatment.

Vitamin D-Regulated lncRNA-mRNA Network in Ovarian Cancer Cells
To explore the potential mechanism by which cancer cells respond to 1α,25(OH)2D3, KEGG pathway analysis was performed on the DE genes. The results indicated that the DE mRNAs were mainly enriched in the TGF-β, regulation of pluripotency of stem cells, and Hippo signaling pathways (Figure 2A).
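The quantitative recipes in the Methods above are compact enough to transcribe directly into code. The sketch below is an illustration only of those thresholds and formulas in Python: the file name, column names, and Ct values are hypothetical placeholders, and the viability denominator follows the corrected reading given above, not the authors' pipeline.

```python
import pandas as pd
from scipy.stats import pearsonr

# Differential-expression filter: |log2 FC| > 2.0 and FDR < 0.05 (limma output).
expr = pd.read_csv("microarray_results.csv")   # placeholder file/columns
de = expr[(expr["log2_fc"].abs() > 2.0) & (expr["fdr"] < 0.05)]

# Co-expression pairs: Pearson correlation coefficient > 0.9 with p < 0.05.
def coexpressed(lnc_profile, mrna_profile):
    r, p = pearsonr(lnc_profile, mrna_profile)
    return r > 0.9 and p < 0.05

# Relative expression by the 2^-ddCt method, normalized to GAPDH.
def rel_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    ddct = (ct_target - ct_gapdh) - (ct_target_ctrl - ct_gapdh_ctrl)
    return 2.0 ** (-ddct)

# Plate-reader formulas for the colony-formation and CCK-8 assays.
def proliferation_ratio(au_treat, au_ctrl, au_blank):
    return (au_treat - au_blank) / (au_ctrl - au_blank)

def cck8_viability(au450_t, au630_t, au450_c, au630_c):
    return (au450_t - au630_t) / (au450_c - au630_c)

# Wound-healing migration index, in percent.
def migration_index(initial_width, final_width):
    return 100.0 * (initial_width - final_width) / initial_width

print(rel_expression(22.1, 18.0, 24.3, 18.1))  # toy Ct values -> ~4.3
print(migration_index(800.0, 520.0))           # toy gap widths -> 35.0
```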
The hub genes with a degree connectivity in PPI network were enriched in insulin-like growth factor 1 (IGF1), which is known to induce cell proliferation (28), TGF-b2 (29), insulin-like growth factor-binding protein 3 (IGFBP3) (28), and COL1A1 (30), which are closely associated with the vitamin D endocrine system ( Figure 2B). Then, we identified a top 5 lncRNAs-mRNAs networks including 5 lncRNAs and 140 mapped mRNAs ( Figure 2C and Supplementary Data Table S1). GO enrichment analysis and subpathway analysis showed that "phagocytosis", "cytoplastic side of plasma membrane", and "growth factor activity" were significantly related to this module ( Figure 2D). KEGG analysis for 140 mRNA from top 5 lncRNA-mRNAs networks revealed that cancer-related pathways were enriched in this network, e.g., Ras, MAPK, TGF-b, Rap1, and PI3K-Akt signaling pathway ( Figure 2E). Construction of the lncBCAS-1_4-1 as a Core of EMT Signal Pathway in 1a,25(OH) 2 D 3 Treated Ovarian Cancer Cells Next, we selected the most dysregulated lncRNA, lncBCAS1-4_1, to construct the lncRNA-mRNA network, and 83 mapped mRNAs were involved to explore the function of this module ( Figure 3A and Supplementary Data Table S2). GO analysis showed that "epithelial cell proliferation", "collagen-containing extracellular matrix", and "growth factor activity" were highly enriched in this network ( Figure 3B). KEGG analysis also revealed that these genes mainly enriched in TGF-b, regulating pluripotency stem cells, MAPK, Ras, and Hippo signaling pathways ( Figure 3C). Because the TGF-b signaling pathway repeatedly occurred, the EMT-related genes were applied to identify the significant pathway associated with 1a,25(OH) 2 D 3 ; as shown in Figure 3D. The EMT pathway was significantly activated in this network ( Figure 3D). The Role lncBCAS-1_4-1 on Proliferation and Migration of Ovarian Cancer Cells To validate the function of lncBCAS-1_4-1, SKOV3 cells were used to build up lncBCAS1-4_1 gain-of-function cell models ( Figure 4A), while OVCAR8 cells were used to build up lncBCAS1-4_1 loss-of-function cell models ( Figure 4B). The results of CCK8 ( Figure 4C) and platting efficiency ( Figure 4D) assay showed that overexpressed lncBCAS1-4_1 promoted proliferation, while knockdown of lncBCAS1-4_1 inhibited proliferation. Similarly, we found that the gain of lncBCAS1-4_1 increased migration, and the loss of lncBCAS1-4_1 decreased cell migration ( Figure 4E). We then detected the expression of mRNAs associated with the EMT signaling pathway. The result demonstrated that overexpression of lncBCAS1-4_1 significantly upregulated the EMT mesenchymal marker including N-cadherin and Vimentin, as well as the EMT-related transcriptional factor (ZEB1) ( Figure 4F). The Inhibition of 1a,25(OH) 2 D 3 on Proliferation and Migration of Ovarian Cancer Cells Is Disrupted by lncBCAS-1_4-1 To ascertain the impact of lncBCAS1-4_1 on the antitumor action of vitamin D, ovarian cancer cells treated with 1a,25(OH) 2 D 3 were interfered or overexpressed by the siRNA or adenovirus vector of lncBCAS1-4_1, respectively. As expected, the knockdown of lncBCAS1-4_1 significantly enhanced the 1a,25 (OH) 2 D 3 mediated antitumor effect, while overexpressed lncBCAS1-4_1 resisted the antitumor effect of 1a,25(OH) 2 D 3 in vitro (Figures 5A-C). The results from Figure 5D showed that the expressions of Vimentin, ZEB1, and Twist1 were significantly reduced by 1a,25(OH) 2 D 3 as compared to mock-vehicle negative control. 
However, the reduced ZEB1 levels in overexpressed lncBCAS1-4_1 SKOV3 cells were increased after treatment with 1a,25(OH) 2 D 3 . Taken together, these data indicated that the overexpression of lncBCAS1-4_1 significantly resisted the antitumor effect of 1a,25(OH) 2 D 3 , which was associated with upregulating ZEB1. The co-expression/regulatory networks of lncRNAs-mRNAs indicated that "TGF-b signaling pathway", "epithelial cell proliferation", and "Hippo signaling pathway" were significantly involved in 1a,25(OH) 2 D 3 treated cancer cells. Up to date, there are lots of reports about how coding genes or non-coding genes to regulate EMT progress or EMT associated genes and transcription factors (48)(49)(50)(51). Moreover, 1a,25(OH) 2 D 3 was reported to have the effect on inhibiting the progression of EMT (31). Our results also showed that EMT signaling pathway was significantly activated in 1a,25(OH) 2 D 3 treated ovarian cancer cells. It is plausible that these lncRNAs could mediate the EMT process by vitamin D signaling pathway, which supports our hypothesis that 1a,25(OH) 2 D 3 has inhibitory effects on ovarian cancer cells by regulating lncRNA expression patterns. In the present study, the most upregulated lncRNA was lncBCAS1-4_1, which has the closest relationship with the mRNA transcript of CYP24A1, because their 75% transcripts are the same. CYP24A1 is the gene coding the metabolic enzyme of 1a,25(OH) 2 D 3 , resulting in the loss of physiological activity by 1a,25(OH) 2 D 3 (34). In vitro and in vivo studies also showed that CYP24A1 has been deemed as a candidate oncogene in many cancers, such as ovarian cancer (35), colorectal cancer (36,37), prostate cancer (38), lung cancer (39), breast cancer (40), thyroid cancer (41), and so on. Moreover, a recent study showed that the upregulation of CYP24A1 and PFDN4 as well as nearby lncRNAs may be used as the potential diagnostic biomarker in colorectal cancer (52). Interestingly, it has been reported that mice with CYP24A1 knockout exhibited a fourfold reduction in thyroid tumor growth compared with wild-type CYP24A1 mice. They found that this phenotype was associated with the repression of the MAPK, PI3K/Akt, and TGFb signaling pathways, and a loss of EMT in CYP24A1 knockout cells was also associated with the downregulation of genes involved in EMT, tumor invasion, and metastasis (53). Furthermore, functional analysis revealed that the TGF-b pathway was associated with lncBCAS1-4_1. Based on the 75% similarity with CYP24A1 and the relationship between CYP24A1 and EMT, as well as the key role of TGF-b in the EMT process, we focused on the link of lncBCAS1-4_1 and EMT. For the lncBCAS1-4_1 loss/gain cell model, the oncogenic role of lncBCAS1-4_1 was validated in vitro, and the overexpression of lncBCAS1-4_1 significantly resisted the antitumor effect of 1a,25(OH) 2 D 3 , which was associated with upregulated ZEB1. Thus, it was worthy to reveal the molecular mechanism of EMTrelated lncRNAs in cancer and to demonstrate that lncBCAS1-4_1 can be a potential therapeutic target for patients. Additionally, we also found that the most downregulated lncRNA (lnc-RWDD4-5_1) and IGFBP3 mRNAs were negatively correlated. After treatment with 1a,25(OH) 2 D 3 , the expression of lnc-RWDD4-5_1 was dramatically decreased, while that of IGFBP3 was increased (2.2-fold change). The most hub gene in the PPI network was IGF1, which can bind to IGFBP3. IGF1 and its binding proteins can promote cellular proliferation and inhibit apoptosis. 
In vitro studies showed that IGF1 increased ovarian cell growth and invasive potential (42). It is well documented that high IGF1 levels are significantly associated with early-stage cancer, nonserous histology, and optimal cytoreduction in epithelial ovarian cancers (43)(44)(45). Considerably, it is noteworthy that the most downregulated lncRNA (lnc-RWDD4-5_1) has a potential relationship with the hub gene (IGF1). However, the potential molecular mechanisms needed to be further verified. There are also a couple of limitations to this study. Firstly, although the SKOV3 cell line is a useful model of ovarian cancer cells, it could not be used to predict the performance of 1a,25 (OH) 2 D 3 in actual tumors. And the action of 1a,25(OH) 2 D 3 refers to different sets of genes in different cell lines (54)(55)(56). Secondly, the expressions of lncRNAs and mRNAs were analyzed by ovarian cancer cells, and further testing of these in the tumor tissues of patients is needed. Thirdly, the relationships among noncoding RNAs, mRNAs, and proteins need to be further investigated using bioinformatic prediction to understand their full function. Nevertheless, this is the first study of lncRNA expression patterns regulated by 1a,25(OH) CONCLUSIONS In summary, we identified the 606 DE lncRNAs and 102 DE mRNAs in 1a,25(OH) 2 D 3 -treated ovarian cancer cells, which were mainly enriched in the cancer-related and vitamin Drelated pathway. Moreover, by the lncBCAS1-4_1-mRNA core network, the EMT signal was identified, indicating the linkage of lncBCAS1-4_1 between EMT and vitamin D signaling. Furthermore, we established the lncBCAS1-4_1 loss/gain cell model and found that lncBCAS1-4_1 could abolish the antitumor effect of 1a,25(OH) 2 D 3 , which was associated with upregulating ZEB1. These data provide new evidence that lncRNAs can serve as targets for the antitumor effect of 1a,25 (OH) 2 D 3 . DATA AVAILABILITY STATEMENT The datasets presented in this study can be found in online repositories. The names of the repository/repositories and accession number(s) can be found below: GEO Database and accession number GSE17363 https://www.ncbi.nlm.nih.gov/geo/ query/acc.cgi?acc=GSE173633.
2021-08-06T13:27:56.762Z
2021-08-05T00:00:00.000
{ "year": 2021, "sha1": "d5dc93c4d400741a6ad7f6702103d56b430c94fd", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fonc.2021.691500/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d5dc93c4d400741a6ad7f6702103d56b430c94fd", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
51845795
pes2o/s2orc
v3-fos-license
Statistical model calculations for evaporation residue and fission cross-sections in the 210Po compound nucleus

Statistical model calculations for evaporation residue and fission cross-sections are performed for the 210Po nucleus populated by the 18O + 192Os system in the excitation energy range of 52.43–83.51 MeV. Experimental fusion cross-sections are fitted using the CCFULL code. Evaporation residue and fission cross-sections are then fitted using the Bohr-Wheeler formalism, including shell effects in the level density and the fission barrier, by using a scaling factor (K_f) in the range of 1.0 to 0.75. The results of the calculations are in good agreement with the experimental data.

Introduction
Heavy-ion induced fusion-fission reactions are important for studying the dynamics and decay of hot nuclear matter. These reactions are sensitive to the entrance-channel mass asymmetry between the target and projectile, the spin and deformation of the target, the mass of the projectile, and the bombarding energy with respect to the fusion barriers and the coupling of various degrees of freedom [1,2]. At low energies, the compound nucleus decays predominantly by emission of particles and by fission. Experimental observations clearly show that the fusion cross-section is significantly reduced in the medium mass region, even for very asymmetric systems, due to the onset of non-compound nuclear processes such as quasi-fission [3]. Evaporation residues are a signature of compound nucleus formation and a useful probe for studying the statistical as well as dynamical aspects of fusion-fission reactions. To gain a better insight into heavy-ion reactions, a detailed study of the decay products of the compound nucleus, such as evaporation residues and compound-nucleus fission fragments, is necessary. In this regard, evaporation residue and fission cross-section measurements are useful probes to understand fusion-fission dynamics.

Sagaidak et al. [4] have found that the LDM fission barrier has to be reduced in order to fit the evaporation residue and fission excitation functions leading to polonium compound nuclei in the framework of the standard statistical model. With the same motivation, statistical model calculations for evaporation residue and fission cross-sections have been performed for 210Po populated by the 18O + 192Os system in the excitation energy range 52.43–83.51 MeV. The experimental data for evaporation residue and fission cross-sections have been extracted from [2]. Coupled-channels calculations (CCFULL) reproduce the experimental fusion cross-sections satisfactorily [5]. Then, in order to fit the experimental data for evaporation residue and fission cross-sections, final theoretical calculations were performed using the Bohr-Wheeler formalism including shell corrections in the level density and the fission barrier. Different scaling factors (K_f) for the finite-range liquid drop model fission barrier, in the range 1.0 to 0.75, are used to fit the experimental data.
Statistical model analysis
In the framework of the statistical model, emission of neutrons, protons, alpha particles and giant dipole resonance (GDR) gamma rays is considered along with fission as the possible decay channels of a compound nucleus [6]. Statistical model calculations for evaporation residue and fission cross-sections have been performed assuming that the system forms a fully equilibrated compound nucleus after capture of the projectile, and that contributions from non-compound nuclear processes such as quasi-fission and fast fission are negligible. The Bohr-Wheeler fission width used in the present calculations is given by [7]:

\Gamma_{BW} = \frac{1}{2\pi\rho(E^*)} \int_0^{E^*-V_B} \rho_s(E^* - V_B - \epsilon)\, d\epsilon,

where V_B is the fission barrier, \rho and \rho_s are the level densities of the compound nucleus and of the saddle configuration, and the nuclear potential is obtained from the finite range liquid drop model (FRLDM). The level density parameter used in the present work is taken from the work of Ignatyuk et al. [8], which takes into account the nuclear shell structure at low excitation energies and goes over to its asymptotic form at high excitation energies:

a(E^*) = \tilde{a}\left(1 + \frac{\delta M}{E^*} f(E^*)\right), \quad f(E^*) = 1 - \exp(-E^*/E_D).

Here, \tilde{a} is the asymptotic level density parameter, E_D determines the rate at which the shell effects disappear at high excitation energy, and \delta M is the shell correction in the LDM masses, i.e.

\delta M = M_{exp} - M_{LDM}.

A value of 18.5 MeV was used for E_D, which was obtained from an analysis of s-wave neutron resonances [9]. The shell-corrected, temperature-dependent fission barrier is given by:

V_B(E^*) = K_f V_{LDM} - \delta M \exp(-E^*/E_D),

where K_f is the scaling factor [4], V_LDM is the fission barrier from the finite-range rotating LDM potential, and E^* is the compound nucleus excitation energy. In our analysis, evaporation residue and fission cross-sections are fitted by adjusting the scaling factor K_f in the fission barrier. In this work, the shell correction is applied only to the ground state mass, and it is assumed that the shell correction at the saddle deformation can be neglected [10][11][12]. This assumption of neglecting the shell correction at the saddle deformation follows from the work of Myers and Swiatecki [10]. A particular decay channel is selected by performing Monte-Carlo sampling among all the particle and gamma emission widths. The spin distribution of the fused system is obtained by fitting the experimental fusion cross-section using a suitable model. So, to reproduce the experimental fusion cross-sections, coupled-channels calculations have been performed using CCFULL [5].

CCFULL
Fusion reactions at energies near and below the Coulomb barrier are strongly influenced by couplings of the relative motion of the colliding nuclei to several nuclear intrinsic motions. The program CCFULL solves the coupled-channels equations to compute the fusion cross-sections and mean angular momenta of the compound nucleus, taking the couplings into account to all orders. It uses the incoming-wave boundary condition inside the Coulomb barrier to account for fusion, along with the isocentrifugal approximation, which works well for heavy ions. The nuclear potential in the entrance channel is defined by the parameters V_0, r_0 and a_0, where V_0 is the depth parameter of the Woods-Saxon potential, r_0 is the radius parameter, and a_0 is the surface diffuseness parameter.
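To make the prescriptions above concrete, here is a small numerical sketch of the damped shell correction entering both the level density and the barrier. The functional forms follow the equations as reconstructed above, and all numerical inputs (the asymptotic parameter taken as roughly A/9, a shell correction of -8 MeV) are illustrative placeholders rather than values from this paper.

```python
import numpy as np

E_D = 18.5  # MeV, shell-damping energy quoted in the text

def f_damp(e_star, e_d=E_D):
    """Shell-damping factor f(E*) = 1 - exp(-E*/E_D)."""
    return 1.0 - np.exp(-e_star / e_d)

def level_density_a(e_star, a_tilde, delta_m, e_d=E_D):
    """Ignatyuk prescription: a(E*) = a_tilde * (1 + f(E*) * dM / E*)."""
    return a_tilde * (1.0 + f_damp(e_star, e_d) * delta_m / e_star)

def fission_barrier(e_star, v_ldm, delta_m, k_f=1.0, e_d=E_D):
    """Scaled, shell-corrected barrier: V_B = K_f * V_LDM - dM * exp(-E*/E_D)."""
    return k_f * v_ldm - delta_m * np.exp(-e_star / e_d)

# Toy numbers for A = 210: a_tilde ~ A/9, hypothetical dM = -8 MeV.
print(level_density_a(60.0, 210 / 9.0, -8.0))
print(fission_barrier(60.0, 10.0, -8.0, k_f=0.85))
```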
Results and discussions
The spin distribution of the compound nucleus is an important ingredient of the statistical model, and it can be obtained either from the Fröbrich systematics or by fitting the experimental fusion cross-section with an appropriate model. In this work, the spin distribution of the fused system has been generated using the CCFULL code. Depending upon the value of E(4+)/E(2+), nuclei can be classified as vibrators or rotors: if this ratio is close to 3.3 the nucleus is treated as a rotor, and as a vibrator if it is close to 2. In the case of the 18O + 192Os system, the projectile 18O is treated as a vibrator and the target 192Os as a rotor. The deformation parameters along with the values of E(4+)/E(2+) are given in Table 1. The potential parameters used in the present coupled-channels calculations were chosen by fitting the experimental capture cross-section, as shown in Figure 1. The fitted values of V_0, r_0 and a_0 are given in Table 2.

Table 2. Fitting parameters from the CCFULL code.
Nucleus | V_0    | r_0     | a_0
210Po   | 70 MeV | 1.17 fm | 0.66 fm

From Figure 1, it is clear that the energy points above the Coulomb barrier are reproduced without including the coupling effects, whereas the energy points well below the Coulomb barrier are reproduced only when the coupling effects are taken into consideration. This is because coupling among the intrinsic degrees of freedom becomes more dominant at energies near and below the Coulomb barrier.

After fitting, CCFULL gives the spin distribution (for capture cross-sections), which is then used as the spin distribution of the compound nucleus in the statistical model code to fit the experimental evaporation residue and fission cross-sections. In order to fit the experimental data for evaporation residue and fission cross-sections, the calculated cross-sections are given in Table 3.

Table 3. Evaporation residue and fission cross-sections (in mb) calculated using the statistical model for 18O + 192Os as a function of E_Lab (MeV). σ_ER and σ_fission are the evaporation residue and fission cross-sections, respectively, calculated using the Bohr-Wheeler fission width. (Table values omitted.)

The fitted fission and evaporation residue cross-sections are shown in Figure 2. From Figure 2, it becomes clear that the scaling factor has to be reduced from K_f = 1.0 to 0.75 to describe the excitation functions over the whole range of compound nucleus excitation energy. As the scaling factor is directly related to the fission barrier, decreasing the scaling factor implies a reduced fission barrier. It was observed that the statistical model results using the Bohr-Wheeler approach overpredict the evaporation residue cross-sections, especially at high excitation energies, and underpredict the fission cross-sections throughout the entire energy range under study. Also, at lower energies, the fission cross-section is a very small fraction of the total fusion cross-section. Hence, to fit the evaporation residue cross-section in the desired range, we have to increase the fission cross-section, and this is done by reducing the fission barrier; in other words, to fit the evaporation residue and fission cross-section data we have to reduce the scaling factor K_f. For the present system we have found that the scaling factor increases with increasing lab energy, as shown in Figure 3.
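For readers who want to visualize the fitted entrance-channel potential of Table 2, a minimal sketch of the Woods-Saxon form is given below. The radius convention R = r0 (A_P^(1/3) + A_T^(1/3)) is an assumption on our part (conventions differ between codes), so this should be read as illustrative rather than as a reproduction of the CCFULL input.

```python
import numpy as np

def woods_saxon(r, v0=70.0, r0=1.17, a0=0.66, a_proj=18, a_targ=192):
    """Woods-Saxon potential V(r) = -V0 / (1 + exp((r - R)/a0)),
    with the Table 2 parameters as defaults and the (assumed) radius
    convention R = r0 * (A_P^(1/3) + A_T^(1/3))."""
    radius = r0 * (a_proj ** (1 / 3) + a_targ ** (1 / 3))
    return -v0 / (1.0 + np.exp((r - radius) / a0))

r = np.linspace(6.0, 14.0, 5)  # fm
print(woods_saxon(r))          # MeV, attractive well falling off near R
```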
This may be due to quasi-fission events, which recent studies have shown to be important even for asymmetric systems. It was earlier believed that quasi-fission becomes dominant only when the charge product Z_P Z_T > 1600. In an experiment, quasi-fission events are detected as fission events, and since quasi-fission does not proceed through compound nucleus formation, there will be a reduction in the evaporation residue cross-section. Hence, the reduction does not reflect the true fission barrier and is certainly not due to any shell effects in the fission barrier. Such a strong reduction of the fission barrier in the statistical model calculations, however, points to quasi-fission in the system.

Conclusion
The present study indicates the need to reduce the FRLDM fission barrier in order to fit the experimental evaporation residue and fission cross-sections. More systematic data are required in order to understand the reason for the lowering of the fission barrier. Such experiments are planned to be carried out at IUAC, New Delhi in the near future.

Figure 1. Experimental capture cross-section (full dots) for 18O + 192Os as a function of E_CM (center-of-mass energy). The dashed line shows the calculation with coupling and the solid line the calculation without coupling.
Figure 2. Solid circles are the experimental data and the different lines are the theoretical calculations for different scaling factors K_f (as given inside the diagram): (a) fission cross-section; (b) evaporation residue cross-section.
Figure 3. Variation of the scaling factor K_f with E_Lab (MeV).
2018-07-17T17:49:45.992Z
2015-01-01T00:00:00.000
{ "year": 2015, "sha1": "4916dc2532e22d23a3d689dd4e865a65c31507fa", "oa_license": "CCBY", "oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2015/05/epjconf_fusion2015_00025.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "4916dc2532e22d23a3d689dd4e865a65c31507fa", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Chemistry" ] }
239768289
pes2o/s2orc
v3-fos-license
Enumeration of non-oriented maps via integrability In this note, we examine how the BKP structure of the generating series of several models of maps on non-oriented surfaces can be used to obtain explicit and/or efficient recurrence formulas for their enumeration according to the genus and size parameters. Using techniques already known in the orientable case (elimination of variables via Virasoro constraints or Tutte equations), we naturally obtain recurrence formulas with non-polynomial coefficients. This non-polynomiality reflects the presence of shifts of the charge parameter in the BKP equation. Nevertheless, we show that it is possible to obtain non-shifted versions, meaning pure ODEs for the associated generating functions, from which recurrence relations with polynomial coefficients can be extracted. We treat the cases of triangulations, general maps, and bipartite maps. These recurrences with polynomial coefficients are conceptually interesting but bigger to write than those with non-polynomial coefficients. However they are relatively nice-looking in the case of one-face maps. In particular we show that Ledoux's recurrence for non-oriented one-face maps can be recovered in this way, and we obtain the analogous statement for the (bivariate) bipartite case. INTRODUCTION AND MAIN RESULTS In this note, we are interested in obtaining simple, or at least efficient, recurrence formulas to count maps on surfaces according to their genus and size parameters. For us, a map is the 2-cell embedding of a connected multigraph in a compact connected surface, considered up to homeomorphism. Our surfaces are not necessarily orientable, and we call genus of a surface the number g ∈ 1 2 N such that its Euler characteric is 2 − 2g. The sphere has genus 0, the projective plane has genus 1 2 , the torus and Klein bottle have genus 1, etc. Perhaps one of the nicest-looking formulas in the field of map enumeration is the Goulden-Jackson recurrence formula for orientable triangulations, i.e. maps in which all faces are incident to three edge-sides. The Goulden-Jackson recurrence [GJ08], in fact also discovered in an equivalent form in [KKN99,Eq. (B.6)], asserts that the number t n,g of rooted 1 triangulations with n faces on an orientable surface of genus g is solution of the equation (n + 1)t n,g = 4n(3n − 2)(3n − 4)t n−1,g−1 + 4 i+j=n−2 h+k=g (3i + 2)(3j + 2)t i,h t j,k . (1) This formula was immediately recognized as a breakthrough in the field, because it gives a much better access to these numbers (computational or theoretical) than the classical techniques. Indeed, in the classical approach, one introduces generating functions of maps of genus g with a certain number of additional boundaries, and one shows that a combinatorial operation of root-deletion on the maps (the "Tutte decomposition") implies a functional equation for these functions. This approach has been very successful in the planar case since the work of Tutte, see e.g. [Tut62a,Tut62b,Tut63,BC94,BBM17,BBM11]. In higher genus, it was pioneered by Lehman and Walsh [WL72] and later Bender and Canfield, who showed that the generating functions of maps of fixed genus and number of boundaries can be computed inductively on the Euler characteristic, thus revealing their particularly nice algebraic structure as well as their singular behaviour [BC86,BC91]. 
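To appreciate how directly a recurrence like (1) translates into fast computation, here is a Python transcription. Since this note does not restate the seed values, the base cases are taken as an explicit input to the function; only the recursion scheme itself is asserted here, and exact rational arithmetic is used so that the intermediate divisions by (n + 1) stay exact.

```python
from fractions import Fraction
from functools import lru_cache

def goulden_jackson(base):
    """Build t(n, g) from the Goulden-Jackson recurrence (1):
      (n+1) t(n,g) = 4n(3n-2)(3n-4) t(n-1,g-1)
                   + 4 * sum_{i+j=n-2, h+k=g} (3i+2)(3j+2) t(i,h) t(j,k).
    `base` is a dict {(n, g): value} of seed values, passed in explicitly
    because seed conventions for the smallest cases vary in the literature;
    every value not forced by `base` or by the recurrence is taken to be 0.
    `base` must not be modified after this call (results are memoized)."""

    @lru_cache(maxsize=None)
    def t(n, g):
        if (n, g) in base:
            return Fraction(base[(n, g)])
        if n <= 0 or g < 0:
            return Fraction(0)
        total = 4 * n * (3 * n - 2) * (3 * n - 4) * t(n - 1, g - 1)
        for i in range(-1, n):        # i + j = n - 2; i = -1 is allowed so
            j = n - 2 - i             # that seeds at negative indices count
            for h in range(g + 1):    # h + k = g
                k = g - h
                total += 4 * (3 * i + 2) * (3 * j + 2) * t(i, h) * t(j, k)
        return total / (n + 1)

    return t
```

Each value costs only quadratically many arithmetic operations given the earlier ones, which is precisely why such recurrences give computational access to t(n, g) for large n and g where the catalytic-variable approach becomes intractable.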
Bender and Canfield's inductive technique can be seen as a predecessor of the Chekhov-Eynard-Orantin topological recursion [CEO06,EO07], a powerful theory invented in the context of matrix integrals [Eyn16] which has now been applied to study the structure of fixed-genus generating functions of many models of maps or in enumerative geometry [Eyn14,ACEH20,BDBKS20,BCEGF21]. The need to introduce additional boundaries (and "catalytic" variables to mark their sizes) makes these approaches ineffective for large values of g. There seems to be no hope to obtain control on the bivariate numbers t n,g for non-fixed g in this way, a striking contrast with the recurrence (1). For example (1) also gives access to the so-called double-scaling limit of the numbers t n,g [KKN99,BGR08], and it is also crucially used in the recent Budzinski-Louf breakthrough on large genus asymptotics [BL20]. Perhaps we should insist on the fact that we mean no harm to the "classical approach". The study of the rational parametrization it gives rise to is a fascinating subject, including purely bijective combinatorics [CMS09, CD17, Lep19, AL19, DL20], with probable link to the study of random geometries [BG14,BGM21]. On the other hand, as of today, the bijective interpretation of the Goulden-Jackson recurrence (1) is wide open. One reason why the recurrence (1) gives access to different results is because it comes from a completely different technique. It is based on the fact that the generating function of maps on orientable surfaces, with an infinite number of variables p i , i ≥ 1 (p i marking faces of degree i), is a solution of the KP hierarchy -an infinite sequence of PDEs originating from the theory of integrable systems, with deep connections to infinite dimensional Lie algebras and algebraic combinatorics [KRR13,MJD00]. The first equation of the hierarchy (the KP equation) reads −F 3,1 + F 2 2 + 1 2 F 2 1 2 + 1 12 where each i-index indicates a partial derivative with respect to p i . In order to go from the KP equation (2) to the recurrence (1), Goulden and Jackson use the fact that the generating function F (p 1 , p 2 , p 3 , 0, . . . ) of maps having only faces of sizes 1, 2, 3 can in fact be expressed in terms of the series F (0, 0, t, 0, . . . ) of triangulations only. This enables one so set p i = tδ i,3 in (2) and obtain an ODE for the generating function of triangulations. The fact that the variables p 1 and p 2 can be eliminated in this way relies on local surgery operations that can, in fact, be interpreted as first cases of the classical Tutte decomposition. A similar elimination technique has since been used to obtain similar results for other models of maps. In [CC15], Carrell and the second author use the fact that the generating function of bipartite maps solves the KP hierarchy, and local operations related to the first Tutte equations for bipartite quadrangulations, to obtain a recurrence formula similar to (1) to count maps by vertices and faces. In [KZ15], Kazarian and Zograf use a slightly different elimination procedure, using the so-called Virasoro constraints (which are also related to Tutte decompositions) to recover the recurrence of [CC15] and to obtain an analogue for bipartite maps. These three works only use the first KP equation. 
Finally, Louf [Lou19] uses a different integrable hierarchy (the Toda hierarchy) to obtain a remarkable recurrence counting bipartite maps of arbitrary genus with control on all face degrees, using a different elimination technique inspired by Okounkov's work on Hurwitz numbers [Oko00]. In this paper, we are interested in obtaining variants of these results for non-oriented surfaces. Our starting point is the fact that generating functions of maps (or bipartite maps) are solutions of the BKP hierarchy of Kac and Van De Leur [KvdL98] (see also the appendix of [BCD21a]). An important difference between the KP and BKP hierarchy is that the function F which is a solution of this hierarchy also involves a so-called charge parameter N, which in our context will always be a variable marking faces or vertices of a certain kind. The first BKP equation reads; and where S 2 (N) is a model-dependant normalizing factor that will always be an explicit rational function in our case. In [Car14], Carrell used the fact that the generating function of non-oriented maps satisfies this equation, together with the elimination techniques developed by Goulden and Jackson in the orientable case, to obtain a functional equation for the case of triangulations (this technique leads to an explicit recurrence, see Theorem 4.9 below). The first task we perform in this paper, somehow unsurprisingly, is to apply the elimination of variables from the papers [CC15,KZ15] to the BKP equation, to obtain recurrences of the same kind to count maps (by vertices and edges) and bipartite maps (by edges, and vertices of each colour) on non-oriented surfaces. The Virasoro constraints for these models are known (e.g. [BCD21a]) and our main task here is to make sure that the elimination procedure indeed works, i.e. that these equations indeed enable to reduce all derivatives appearing in (3) to differential polynomials in a single variable. For completeness, we also treat Carrell's case of triangulations explicitly. All these recurrences are larger than (1), but incredibly short compared to any alternative, and it is not unreasonable to believe that they could have a combinatorial interpretation. For example, we obtain in Section 5 the following recurrence formula. Everywhere in the paper, the symbol denotes a sum over elements of 1 2 N. The crucial fact that the BKP equation (3) involves not only the function F (N) but also its shifts F (N + 2) and F (N − 2) has an important effect on the recurrence formulas we obtain. The functional equations corresponding to these recurrences, which involve derivatives but also shifts of variables, are not ODE in their main variable. In return, the recurrences obtained do not have polynomial coefficients (for example (4) contains binomial coefficients, which are not polynomials in the summation variables). This is a deep structural difference between the recurrences (1) and recurrences such as (4). It is natural to ask if one could instead obtain formulas in which the shifts are not involved, i.e. true polynomial recurrence formulas, corresponding to nonlinear ODEs with polynomial coefficients for the associated generating functions. This would be much more satisfying, at least at the conceptual level. Maybe surprisingly, we will see that the answer to this question is yes. To see this, we will have to use several (in fact, three) equations of the BKP hierarchy. 
Using additional derivations and manipulations, we will be able to eliminate the shifts from equations, and obtain equations at fixed N, at the price of having to consider higher derivatives. It is not obvious, but it will be true, that a finite number of Virasoro constraints will still be sufficient to perform the elimination of variables in this context. Due to the use of higher BKP equations and additional manipulations involved, the equations thus obtained are bigger than the previous ones 2 . We will only state them here in a non-explicit form. The reader eager to see them at work may access these equations, and use them to compute numbers of maps, in the accompanying Maple worksheet [BCD21b]. A typical statement we obtain from these methods is the following. Theorem 1.2 (Counting maps by edges and genus -unshifted recurrence). The number h g n of rooted maps of genus g with n edges, orientable or not, is solution of an explicit recurrence relation of the form where the P a,b,k are rational functions with P 0,0,1 = 0, and K 1 , K 2 , K 3 < ∞. We will obtain similar theorems for other models, in particular one for bipartite maps (Theorem 4.5), and for triangulations (Theorem 4.10). Moreover, we will in fact prove a version of Theorem 1.2 with control on the number of faces, from which we obtain a closed recurrence formula enumerating one-face maps, small enough to be explicitly written. Theorem 1.3 (Ledoux's recursion for non-oriented one-face maps). The number u g n of rooted non-oriented maps of genus g with n edges and only one face (or equivalently with n edges 2 They are however not gigantic, see the accompanying worksheet [BCD21b]. The main ODE for maps fits in slightly more than a page in \tiny LaTex print, we have however chosen not to reproduce it here. and only one vertex) is given by the recursion (6) (n + 1)u g n = (8n − 2)u g n−1 − (4n − 1)u with the convention that u g n = 0 for g < 0 and g > n 2 and with the initial condition u The recurrence (6) was first obtained by Ledoux [Led09] using matrix integral techniques unrelated (as far as we know) to the BKP equation. It is remarkable to see that it is, in fact, the shadow of bigger nonlinear recurrence giving access to an arbitrary number of faces. The Ledoux recurrence can be viewed as an non-oriented version of the infamous Harer-Zagier recurrence, a similar (yet smaller) formula which covers the case of orientable one-face maps (and which is itself a special case of the recurrence of [CC15]). The Harer-Zagier recurrence has a nice analogue in the bipartite case due to Adrianov [Adr97], and it is natural to ask if our non-shifted recursion in the bipartite case implies an non-oriented version of Adrianov's result. The answer is yes. Theorem 1.4 (A recurrence for non-oriented bipartite one-face maps). The number b i,j n of rooted one-face maps with n edges, i white and j black vertices, orientable or not, is given by the recursion: with the convention that b i,j n = 0 for i + j > n + 1, and b i,0 n = b 0,j n = 0 and the initial conditions b 1,1 1 = b 2,1 2 = b 1,2 2 = b 1,1 2 = 1, b 3,1 3 = b 1,3 3 = 1, b 2,2 3 = b 2,1 3 = b 1,2 3 = 3, b 1,1 3 = 4. To conclude this introduction, it is natural to ask if our techniques of shift elimination are specific to the case of maps or apply to general solutions of the BKP hierarchy. 
The latter is in fact true, and any function F (N) which solves the BKP hierarchy is in fact solution of an explicit (yet big) PDE involving only the function F (N) and its derivatives, with no shifts (Theorem B.1 in the appendix). We are not aware of any in-depth study of such "fixed charge" BKP equations, which might be worth considering in the future. Structure of the paper. In Section 2, we will recall what we need about the first BKP equations, directing the reader to other sources for the depth of the BKP theory. In Section 3, we will address the case of maps, taking the time to explain the main ideas and techniques. We will write the Virasoro constraints, and show how to use them to express some derivatives of a specialization of the main BKP tau function as univariate differential polynomials. This will give us "shifted" equations. We will also show how to eliminate the shifts appearing in the BKP equation using instead the first three BKP equations to obtain non-shifted ODEs. In Section 4.1 and Section 4.2, we will address the cases of bipartite maps and triangulations. The main steps are similar to the case of maps and we will give fewer details than in the previous section. In Section 5, we apply the technique of elimination of variables of the paper [CC15] to obtain slightly different recurrence formulas than in Section 3 to count maps. Appendix A contains tables of the numbers of rooted maps and bipartite maps of genus g with n edges and of rooted triangulations of genus g with 2n faces, generated with our recurrences. Appendix B derives the fixed charge equation for BKP solutions (Theorem B.1), which we do not use directly in this paper. Throughout the paper, the notation R[·], R(·), R[[·]] denote respectively polynomials, rational functions, and formal power series with coefficients in the ring R. Accompanying Maple worksheet. A Maple worksheet containing an implementation of the recurrences of this paper, together with automated calculations of the bigger ODEs for the different cases (as well as certain proofs regarding their top coefficients) is available in both Maple and html form in [BCD21b]. The worksheet also contains recursive programs obtained from these ODEs, as well tables for small genus and consistency checks against existing formulas of the literature. A FEW WORDS ON THE BKP HIERARCHY In this paper, we will use the BKP hierarchy as a black box, and only recall the statements and equations needed for our purposes. We refer the reader to [KvdL98,VdL01] for the general theory, and to the appendix of our previous paper [BCD21a] for details about the applications to maps and bipartite maps. The BKP hierarchy is an infinite set of partial differential equations (PDEs) for a sequence of functions τ (N) N ∈Z depending on "time parameters" (formal variables) p 1 , p 2 , . . . . For our combinatorial purposes, it will be convenient to think of the symbol N as a formal variable rather than an integer, and this turns out to be possible under technical conditions, formalized in the notion of "formal N" BKP tau function in [BCD21a]. A formal N BKP tau-function is in fact a pair, consisting of a formal power series τ (N) ∈ Q(N)[[p 1 , p 2 , . . . ]], together with a normalizing sequence (β N ) N ∈Z which is such that for N ≥ 0, and for respectively every odd positive integer k and every positive integer k, for some rational functions R k (N), S k (N) ∈ Q(N). 
These conditions may seem technical but they are crucial to stating the equations of the BKP hierarchy in a formal way as we will do here. In the context of enumeration, the field Q will often be promoted to a field of rational functions or formal Laurent series involving additional variables, for example Q(t) so that The typical definition of a (formal or not) BKP tau function makes use of the infinite wedge formalism. It is the image of the orbit of the exponential of an infinite-dimensional Lie algebra, often denoted b(∞), via the boson-fermion correspondence. We refer the reader to the references mentioned above. For the purposes of this paper, we will admit the PDEs of the BKP hierarchy as a definition: Here, h j denotes the complete homogeneous symmetric function of degree j, and we define U(q) = e r≥1 qrDr andĎ = (kD k ) k≥1 , where D r is the Hirota derivative with respect to p r , By extracting coefficients in the variables q 1 , q 2 , . . . in (9), one obtains explicit PDEs for the function τ (N), which altogether form the BKP hierarchy. For example, by setting k = 2 and extracting the coefficient of q 3 , we obtain the BKP equation (3) stated in the introduction, where we recall the notation F (N) = log τ (N) and where indices indicate partial derivatives, We will only need two other equations of the hierarchy, namely the following bilinear identi- ) 2 , obtained respectively by extracting the coefficient of q 4 and q 5 , again with k = 2. We now proceed with map enumeration. THE CASE OF MAPS 3.1. Generating functions of maps. For us a surface is a non-oriented two-dimensional real manifold without boundary. A surface of Euler characteristic 2 − 2g has genus g. A map is a graph (with loops and multiple edges allowed) embedded in a surface such that the complement of the embedding is a disjoint collection of contractible components, called faces. The genus of the map is the one of the underlying surface. A corner of a map is a small angular sector around a vertex delimited by two consecutive edge-sides; the degree of a face/vertex is the number of corners belonging to it/adjacent to it, respectively. In this paper orientable surfaces do not play a particular role, however in some places we will explicitly use the terminology non-oriented maps to emphasize that our surfaces can be orientable or not. We will be interested in enumeration of rooted maps, i.e. maps with a distinguished and oriented corner called the root corner. Rooted maps are considered up to homeomorphisms preserving the root corner. Define the generating function The following specialization operator plays a crucial role throughout the paper. Definition 3.1 (Specialization θ). We let θ be the operator that specializes all variables p i to the variable z, namely θ(p i ) = z for every i ≥ 1. We define the formal power series which is the bivariate generating function of rooted non-oriented maps M with variables t, u, z marking respectively twice the number of edges, the numbers of vertices, and faces, i.e. H i,j n denotes the number of rooted non-oriented maps with n edges, i vertices and j faces. 
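Since the Hirota derivative D_r may be less familiar than an ordinary partial derivative, the following SymPy sketch implements its standard bilinear definition, D_x^n f.g = (d/ds)^n [f(x+s) g(x-s)] evaluated at s = 0, and checks two textbook properties. It illustrates the general definition only, not the specific operators U(q) and Ď above.

```python
import sympy as sp

x, s = sp.symbols('x s')

def hirota(f, g, var, n=1):
    """Hirota derivative D_var^n f.g = d^n/ds^n [f(var+s) g(var-s)] at s=0."""
    expr = f.subs(var, var + s) * g.subs(var, var - s)
    return sp.diff(expr, s, n).subs(s, 0)

f, g = x**2, sp.exp(x)
print(sp.simplify(hirota(f, g, x)))  # first order: f'g - fg' = (2x - x**2)*exp(x)
print(sp.simplify(hirota(f, f, x)))  # D f.f vanishes for odd order: 0
```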
It is important to note that a map with n edges, i vertices, and j faces, has Euler characteristic i − n + j = 2 − 2g, so the genus is implicitely controlled in this generating function and Θ(t, z, u) can be rewritten as We additionally set h g n := H g n (1, 1) for the number of rooted, non-oriented maps of genus g with n edges and u g n := H n+1−2g,1 n for the number of rooted, non-oriented maps of genus g with n edges and only one face. The main goal of this section is to obtain functional equations on the function Θ(t, z, u), allowing us to compute its coefficients. For this, we start from the fact that the "bigger" function F has a deep structure inherited from the BKP hierarchy, which was proved by [VdL01] using a connection with matrix integrals (see also [BCD21a,Appendix] for details on the connection with maps). Here we use the notation 2p = (2p 1 , 2p 2 , 2p 3 , . . . ). Proposition 3.2 implies that F (t, 2p, N) satisfies the BKP equation (3). It is tempting to apply the operator θ to this equation in order to get information on the function Θ(t, z, u), however because partial derivatives with respect to the p i do not commute with θ, it is not obvious that such an approach will succeed. For a sequence of non-negative integers λ = (i 1 , . . . , i k ), we introduce the quantity The F θ λ are the quantities naturally appearing when applying θ to the BKP equation (3). In order to obtain information on the F θ λ , we use the fact that τ (t, p, u) satisfies the following Virasoro constraints. It itself comes from the homogeneity relation k≥1 p k p * k F = t ∂F ∂t by acting with ∂ n 1 +n 2 +n 3 and applying θ. This produces (16). Note that the size of all vectors indexing F θ s appearing in the RHS of (16) are strictly smaller than the size i + 2 + n 1 + 2n 2 + 3n 3 in the LHS. In order to be able to iterate this equation on all these terms, we need all vectors appearing in the RHS to have at most one part larger than 3. The only term on the RHS that could have two parts larger than 3 is of the form F θ a,b,3 n 3 ,2 n 2 ,1 n 1 . Since a + b = i, this does not happen unless i + 2 > 9. We thus obtain (17). The last statement is a direct check. Remark 1. The Virasoro constraints in fact imply a more general result: F θ λ for any λ is a differential polynomial of Θ. This is proved by applying k j=1 ∂ ∂p i j to (18) and performing an induction on |λ|. We will refrain from writing it since we will not need it in full generality. Instead, the previous proposition is enough to cover the cases we need with explicit formulas, involving vectors of size |λ| ≤ 6. We insist on the fact that the recurrence given in Proposition 3.4 can be fully automated to compute the polynomials P λ , and this is done in the accompanying Maple worksheet [BCD21b]. It is now immediate to see that applying the operator θ to the BKP equation (3) produces a functional equation on Θ(t, z, u). Theorem 3.5. The generating function Θ(t, z, u) satisfies a functional equation of the following form: where P and Q are quadratic polynomials with coefficients in Q[t, u, z]. Proof. Consider (3) for τ (t, 2p, u). Applying θ to both sides, taking the derivative with respect to t and substituting the initial equation (3) back into it to eliminate exponentials, we thanks to the identity ∂ ∂t S 2 (u) = 4 t S 2 (u) implied by S 2 (u) = t 4 u(u − 1). Proposition 3.4 immediately concludes the proof. 
Here we have extracted, respectively, in (19) (after the substitution $u \to ur$, $z \to zr$), the coefficient of $[t^{2n+4} r^{n+2-2g}]$, of $[t^{2n_1-1} r^{n_1-2g_1}]$, and of $[t^{2n_2+5} r^{n_2+2-2g_2}]$, in the RHS, in the first factor of the LHS, and in the second factor of the LHS. Moreover, we have used an auxiliary summation identity in which, in our case, $m = n_1 - 2g_1$, and we parametrized $k$ as $k = n_1 + 2 - 2g_0$ (the condition $k \geq m + 2$ translates into $g_0 \leq g_1$, and the summand vanishes when $g_0 < 0$). It now only remains to group the terms of the form $H^{i,j}_n$ with $i + j = n + 2 - 2g$. In the LHS they contribute to the first term $H^g_n$, and in the RHS they appear as the terms $H^{i,j}_{n_1}$ when $n_1 = n$, $n_2 = 0$, $g_1 = g$, $g_2 = 0$, $g_0 = g$. Collecting these terms on the RHS gives a contribution with coefficient $\frac{3}{2}\,\delta_{n_1,n}\,\delta_{g_1,g}\,\delta_{g_0,g}\,u^2$, which leads to the main equation of the theorem. The identification of the initial conditions for $n \leq 2$ can be done either by drawing the maps by hand, or from the OEIS, or from explicit expansions in small genera using the equations of this paper, or from the expansion in zonal polynomials up to order $n = 2$.

3.2. Removing the shifts. We now proceed with the task of obtaining a functional equation on the function Θ(t, z, u) which does not involve any shift of the variable u. We will do this by using the three equations (3), (11), (12) to eliminate the shifts, and then apply the operator θ. This makes terms of the form $F^\theta_\lambda$ appear, with larger partitions λ than in the previous section, but fortunately they are still in the range covered by Proposition 3.4. We have the following.

Theorem 3.7. There exists a polynomial $P \in \mathbb{Q}[t, u, z][x_1, \dots, x_6]$ of degree 5 such that $P\left(\frac{\partial}{\partial t}\Theta(t,z,u), \dots, \frac{\partial^6}{\partial t^6}\Theta(t,z,u)\right) \equiv 0$.

Proof. Denote $\Delta f(u) = f(u+2) - f(u-2)$ and $\nabla f(u) = f(u+2) + f(u-2)$, and set $E = S_2(u)\, e^{\nabla\Theta(t,z,u) - 2\Theta(t,z,u)}$. An explicit form of P can be obtained by applying Proposition 3.4 to equation (20) below. Using (3), (11), (12) and applying θ, we obtain three equations, where KP1, KP2, KP3 are as given in the statement of the theorem. The difference between KP1, KP2, KP3 and the LHS of (3), (11), (12) is due to the fact that τ(N) is a formal tau function of the BKP hierarchy only after rescaling the variables $p \to 2p$. Using (16) and the identity $S_2(u) = t^4 u(u-1)$, we have $\nabla(F^\theta_{1^2}) = \left(t^6 \frac{\partial^2}{\partial t^2} + 2t^5 \frac{\partial}{\partial t}\right)\nabla\Theta(t,z,u) + t^4 uz + t^2 u$, so that the third BKP equation can be rewritten as (22). We now use the first two BKP equations to express $\nabla\frac{\partial}{\partial t}\Theta(t,z,u)$, $\nabla\frac{\partial^2}{\partial t^2}\Theta(t,z,u)$ and $\Delta\frac{\partial}{\partial t}\Theta(t,z,u)$ in terms of Θ(t, z, u) and its t-derivatives. Those expressions can then be substituted into (22). Taking the t-derivative of the first BKP equation, then a further derivative, and using the second BKP equation, we obtain (20). The statement about the form of the polynomial P is a direct consequence of Proposition 3.4.

Theorem 1.2 is an immediate consequence of Theorem 3.7.

Proof of Theorem 1.2. It is enough to make the change of variables $u \to ur$, $z \to zr$ in (20) and extract the coefficient of $[t^{2n} r^{n+2-2g}]$. This substitution allows one to track the genus of the underlying maps. Extracting the coefficient gives a recursion for the bivariate version of $h^g_n$ which additionally tracks the numbers of vertices and faces via u and z. Specializing $u = z = 1$ gives the recursion for $h^g_n$ of the form (5), with a, b depending on the specific form of the ODE given by (20).
A direct examination of the highest degree terms of this recurrence, implemented in [BCD21b], shows that it takes the form
$$h^g_n = h^{g-1/2}_n - \sum_{\substack{n_1+n_2=n\\ 1\le n_1\le n-1}}\ \sum_{\substack{g_1+g_2=g\\ 0\le g_1\le g}} \frac{(n_1+1)(n_2+1)}{42(n+1)}\, h^{g_1}_{n_1} h^{g_2}_{n_2} + \sum_{\substack{n_1,\dots,n_k\ge 1\\ n_1+\cdots+n_k=n-a}}\ \sum_{\substack{g_1,\dots,g_k\ge 0\\ g_1+\cdots+g_k=g-b}} P_{a,b,k}(n_1,\dots,n_k)\, h^{g_1}_{n_1} h^{g_2}_{n_2}\cdots h^{g_k}_{n_k},$$
which finishes the proof.

The coefficient of $z^1$ in Θ(t, z, u) is the generating function of maps having only one face, with control on the number of edges and vertices (equivalently, edges and genus). Extracting the bottom coefficient in z in (20), we obtain a linear ODE for this generating function. It is equivalent to Ledoux's recurrence (6) stated in the introduction.

Corollary 3.8. The generating function of rooted non-oriented maps with only one face satisfies the following linear ODE.

4. RECURRENCES FOR BIPARTITE MAPS AND TRIANGULATIONS

4.1. Non-oriented bipartite maps. Consider the generating function $G(t, p, u, v)$, where we sum over all rooted non-oriented bipartite maps and where $v_\circ(M)$ and $v_\bullet(M)$ denote the numbers of white and black vertices, respectively. Similarly to the case of general maps, the function G inherits a deep structure from the BKP hierarchy. This result can be derived directly from Van de Leur's work [VdL01], even though it is not stated explicitly there (see [BCD21a, Appendix] for additional details on the connection with maps). We recall that $\theta(p_i) = z$ for $i \geq 1$. Define the power series $\eta(t,z,u,v) := \theta\big(G(t,p,u,v)\big)$, which is the generating function of rooted, non-oriented bipartite maps M. The variables t, u, v, z mark the number of edges, black vertices, white vertices and faces, respectively, so that $K^{i,j,k}_n$ denotes the number of rooted non-oriented bipartite maps with n edges, i black vertices, j white vertices and k faces (the root vertex is black by convention). Note that, due to the Euler relation, we can rewrite η(t, z, u, v) so that it is parametrized by the number of edges and the genus. We additionally set $k^g_n := K^g_n(1,1,1)$ for the number of rooted non-oriented bipartite maps of genus g with n edges, and $b^{i,j}_n := K^{i,j,1}_n$ for the number of rooted non-oriented bipartite maps with n edges, i black and j white vertices, and only one face.

In analogy with Proposition 3.4, we express $G^\theta_\lambda := \theta\left(\frac{\partial^k G}{\partial p_{i_1}\cdots\partial p_{i_k}}\right)$, for a sequence of non-negative integers $\lambda = (i_1,\dots,i_k)$, in terms of the derivatives $\frac{\partial^i}{\partial t^i}\eta(t,z,u,v)$. An explicit form of Q can be obtained by applying Proposition 4.2 to the corresponding equation; the proof is analogous to the proof of Theorem 3.7 and is left to the reader. We have two immediate corollaries: Theorem 4.6, which is the analogue of Theorem 1.2 for bipartite maps, and Theorem 4.7, which is a bipartite analogue of Ledoux's recurrence and a non-oriented analogue of Adrianov's.

Theorem 4.7 (A recurrence for non-oriented bipartite one-face maps). The number $b^{i,j}_n$ of rooted bipartite one-face maps with n edges, i white and j black vertices, orientable or not, is given by the recursion obtained by extracting the coefficient of $[t^n u^i v^j]$ in the corresponding ODE.

4.2. Non-oriented triangulations. The generating series of triangulations can be obtained from F(t, p, u) (given by (13)) by applying another specialization instead of θ. Indeed, define the specialization operator $\theta_3$ by $\theta_3(p_i) := z\,\delta_{3,i}$. This operator enforces that all faces have degree 3, and the resulting series $\Xi(t,z,u) := \theta_3\big(F(t,p,u)\big)$ is the generating function of rooted, non-oriented triangulations M. Of course, triangulations satisfy $2e(M) = 3f(M)$.
By using Euler's relation, one can expand Ξ(t, z, u) by the genus and the number of edges; here $t^g_n$ denotes the number of rooted, non-oriented triangulations with 3n edges (or equivalently 2n faces) and genus g. Similarly to the previous sections, we want to express $F^{\theta_3}_\lambda := \theta_3\left(\frac{\partial^k F}{\partial p_{i_1}\cdots\partial p_{i_k}}\right)$, for a sequence of non-negative integers $\lambda = (i_1,\dots,i_k)$, as a polynomial in Ξ(t, z, u) and its derivatives with respect to t. Since the sizes of the vectors indexing the $F^{\theta_3}$'s appearing in the RHS of (32) are strictly smaller than |λ|, one computes $F^{\theta_3}_\lambda$ recursively for vectors of the form $\lambda = [\ell, 3^{n_3}, 2^{n_2}, 1^{n_1}]$, where $\ell \leq 10$ (by eliminating all the parts equal to 3 thanks to (33), reducing the sizes of the indexing vectors thanks to (32), and finally using the recurrence (34) for the terms of the form $F^{\theta_3}_{1^\ell}$). The last statement follows by induction on ℓ.

Remark 2. We want to highlight the fact that the above computations are possible because we are working with the specific model of triangulations. Replacing the specialization $\theta_3$ by $\theta_\ell: p_i \mapsto z\,\delta_{i,\ell}$ with $\ell \geq 4$ makes the above technique fail to compute even $F^{\theta_\ell}_1$.

Proof. As in the case of bipartite maps, the proof is almost identical to the proof of Theorem 3.6 and we leave the details to the interested reader. The only difference is that (19) should be replaced by its analogue for triangulations.

Theorem 4.10. There exists a polynomial $R \in \mathbb{Q}[t,u,z][x_1,\dots,x_6]$ of degree 5 such that $R\left(\frac{\partial}{\partial t}\Xi(t,z,u), \dots, \frac{\partial^6}{\partial t^6}\Xi(t,z,u)\right) \equiv 0$. The proof is the same as in the previous cases, so we leave it as an exercise. As a standard consequence, we obtain the corresponding recurrence.

In analogy with what we did for maps and bipartite maps, it would be natural to study now the case of triangulations with only one vertex (or, by duality, cubic one-face maps). However, there exist very explicit and simple formulas in this case, obtained from bijective methods [BC11], so we prefer not to go into such calculations here.

5. ANOTHER METHOD IN THE CASE OF MAPS

In this section, we quickly address the case of maps treated in Section 3 with another method, which actually leads to different recurrence relations. The situation is similar to the orientable case, where the approaches used in [CC15] and [KZ15] differ. In Section 3 (the non-oriented analogue of [KZ15]) we started from the fact that the generating function F of maps is a BKP tau function, and applied the substitution operator θ: $p_i \mapsto z$. In this section, we will instead start from the fact that the generating function G of bipartite maps is a BKP tau function, and apply the different substitution operator $\theta_2: p_i \mapsto \delta_{i,2}$. We will only treat the equations with shifts, our main motivation being that they are relatively nice looking.

Proof. The lemma can easily be proved with Virasoro constraints in the same manner as Proposition 4.8, and the details are left to the reader. However, a calculation-free proof based on digon contraction and elementary combinatorial map operations is also easily doable. The proof is completely similar to [CC15, Lemma 7]; the only difference is the extra term of genus g - 1/2 in the two equations involving a hexagonal defect (i.e. a $p_3$-derivative). This term comes from the possibility of creating a rooted quadrangulation by adding a twisted diagonal inside a digon. This is the only difference between the oriented and non-oriented cases, and it adds one term to Equation (11) in [CC15]. Once this difference is taken into account, the proof of [CC15, Lemma 7] can be copied verbatim.
A direct consequence of what precedes is the recurrence formula stated as Theorem 1.1 in the introduction. One can also obtain a version with control on the numbers of vertices and faces, from which Theorem 1.1 follows immediately. We finally equate the RHS of the above two equations and multiply by $KP1^3$ to obtain (42).
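Before leaving the case of maps, here is a minimal, purely illustrative sketch of how recurrences of the shape displayed above are evaluated in practice: a memoized function over pairs (n, G), with G twice the genus so that the half-integer genera of non-orientable surfaces become integers. The coefficient function and the initial values below are placeholders, not the true data of Theorem 1.2 (which is implemented in the accompanying Maple worksheet [BCD21b]); only the dynamic-programming scheme is the point.

```python
from functools import lru_cache
from fractions import Fraction

# PLACEHOLDER coefficient: the true coefficients of Theorem 1.2 live in the
# accompanying Maple worksheet [BCD21b]; this value is for illustration only.
def A(n1, n2, n):
    return Fraction((n1 + 1) * (n2 + 1), 42 * (n + 1))

# PLACEHOLDER initial values, indexed by (n, G) with G = 2g (doubled genus,
# so that the half-integer genera of non-orientable surfaces become integers).
INITIAL = {(1, 0): 1, (1, 1): 1}

@lru_cache(maxsize=None)
def h(n, G):
    """Toy analogue of h^g_n, evaluated by dynamic programming."""
    if G < 0 or n < 1:
        return Fraction(0)
    if (n, G) in INITIAL:
        return Fraction(INITIAL[(n, G)])
    total = h(n, G - 1)                # the h^{g-1/2}_n-type term
    for n1 in range(1, n):             # quadratic convolution over n1 + n2 = n
        n2 = n - n1
        for G1 in range(G + 1):        # and over g1 + g2 = g
            total -= A(n1, n2, n) * h(n1, G1) * h(n2, G - G1)
    return total

print([h(n, 0) for n in range(1, 6)])  # placeholder numbers, not map counts
```

The recursion terminates because every call decreases either the number of edges n or the doubled genus G, which mirrors how the true recurrence determines all coefficients from a finite set of initial conditions.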
Tension pneumoventricle after resection of a fourth ventricle choroid plexus papilloma: An unusual postoperative complication

Background: Pneumocephalus is defined as the presence of air within the intracranial vault. A common complication of head trauma and surgery, pneumocephalus is usually related to ventricular shunts, craniotomies, and surgery in the sitting position. Tension (symptomatic) pneumoventricle is a rare entity associated with significant clinical morbidity.

Case Description: We report an unusual case of a 15-year-old girl in whom tension pneumoventricle developed shortly after removal of a choroid plexus papilloma of the fourth ventricle via a midline suboccipital approach performed in the sitting position.

Conclusion: The presence of a cerebrospinal fluid (CSF) diversion system that causes a decrease in intracranial pressure, together with the existence of a craniodural defect with or without an obvious CSF leak, may be the cause of tension pneumoventricle. According to our present understanding, this is the first report of this peculiar complication of fourth ventricular surgery. We discuss the clinical manifestations, surgical management, contributing factors, and mechanisms involved in the pathogenesis of tension pneumoventricle.

INTRODUCTION

Pneumocephalus is defined as the presence of air in the intracranial vault. [2,6] It is commonly associated with head injury, craniotomies, [3,7,10,13] or the insertion of a lumbar drain. [6] Intraventricular pneumocephalus, also known as pneumoventricle, commonly follows cerebrospinal fluid (CSF) diversion procedures and fourth ventricular surgery. [4] Pneumoventricle is mostly asymptomatic, not requiring treatment, [2,7,10] and complete reabsorption of any air present takes approximately 2-3 weeks. [12] Tension pneumoventricle, meaning the presence of air in the ventricles under pressure, is a rare event that implies a connection between the atmosphere and the intracranial cavity. [13] We describe an unusual case of sudden postoperative tension pneumoventricle, associated with considerable clinical deterioration soon after surgery, which improved with treatment. The patient underwent resection of a fourth ventricle choroid plexus papilloma via a midline suboccipital approach while in the sitting position.

History and examination

A 15-year-old girl presented with a 3-month history of mild episodic headache refractory to medical treatment. Neither impairment of consciousness nor any comorbidities were observed. Neurological examination revealed bilateral extreme lateral and upward gaze nystagmus, global hyperreflexia, and bilateral papilledema without signs of hemorrhage. Radiological evaluation showed an abnormal mass lesion in the fourth ventricle. On a computed tomographic (CT) scan, the mass lesion was hyperdense with contrast enhancement [Figure 1a and b]. A well-delineated mass within the ventricle was present, causing gross obstructive hydrocephalus and effacement of the convexity sulci [Figure 1c]. Brain magnetic resonance imaging (MRI) demonstrated a large 3 × 4 × 3 cm intraventricular lesion with irregular contrast-enhancing margins [Figure 2].

Elective operation - Tumor resection

A midline suboccipital approach was used to excise the fourth ventricle tumor, with the patient maintained in the sitting position.
Out of concern that it might be necessary to rapidly decompress the lateral ventricles intra- or postoperatively, a burr hole was drilled in the right posterior occipital region before the craniotomy was performed. No external drainage was used during the operation, but intravenous mannitol was administered. The fourth ventricle was exposed by separating the cerebellar tonsils and widening the vallecula, allowing tumor resection in an "en bloc" fashion. Postoperatively, the patient presented with severe left palsies of the VI, VII, IX, and X cranial nerves (CN). She opened her eyes in response to voice, responded with exclamatory articulated speech, and obeyed commands, with a Glasgow Coma Scale (GCS) score of 12. Approximately 4 hours later, systemic arterial pressure increased and the girl's consciousness deteriorated: she did not open her eyes, uttered incomprehensible sounds, and localized painful stimuli, with a GCS score of 8. A CT scan, performed immediately after the decline in the patient's condition, revealed prominent intraventricular air with dilatation of the lateral and third ventricles, as well as transependymal fluid passage [Figure 3]. The patient was rushed to the operating room.

Emergency operation - External ventricular drainage

The postoperative tension pneumoventricle was treated via the right occipital burr hole made during the elective operation; we chose this insertion site based on the presence of the existing burr hole in the skull. Intracranial air gushed out under pressure through the external ventricular drain immediately after insertion. Approximately 60 mL of air was drained during occipital burr hole aspiration, resulting in pressure relief and clinical recovery. The patient's level of consciousness improved to a GCS score of 12.

Postoperative course

The following day, a CT scan demonstrated marked improvement of the pneumoventricle [Figure 4]. The patient underwent tracheostomy and gastrostomy. Thirteen days later, the external ventricular drain was removed. Her condition continued to improve (GCS = 15), and she was discharged on postoperative day 60. At the 9-month follow-up, the patient remained with a mild left CN VI palsy and a left peripheral CN VII palsy (House-Brackmann II). MRI revealed total tumor resection [Figure 5]. Histopathological analysis demonstrated choroid plexus papilloma.

DISCUSSION

While pneumoventricle is common immediately after CSF shunt procedures for hydrocephalus or after head trauma such as skull base and sinus fractures, [3] delayed tension pneumoventricle is an extremely rare complication, and fewer than 50 cases have been described in the literature. [13] One case occurred following wound dehiscence, resulting in exposure of the shunt chamber. [2] Tension pneumoventricle has also been described after resection of a cerebellar medulloblastoma, associated with hydrocephalus and CSF leakage from the suture line; [8] after surgical resection of a cerebellar medulloblastoma and insertion of a ventriculoperitoneal shunt, due to a petrous bone defect; [5] and, in another interesting case, after removal of an acoustic neurinoma and a CSF shunt procedure for concomitant hydrocephalus. [9] We report here a unique case of tension pneumoventricle after surgical management of a fourth ventricle tumor.
Some authors have advocated that two requirements are needed for the development of pneumocephalus: the presence of a CSF diversion system that causes a decrease in intracranial pressure, and the existence of a craniodural defect with or without an obvious CSF leak. [11] Prevention of this complication is achieved by proper layered closure. [2] In the case of a postoperative CSF leak treated with lumbar drainage, the drain must be removed immediately, since it favors further air intake. [10] In our case, external drainage was not applied during surgery, and we did not observe wound dehiscence as a possible entry point for air; nor did the patient have a lumbar drain. Tension pneumoventricle probably occurred through the existence of a craniodural defect without an obvious CSF leak and a massive inflow of air with the patient in the sitting position. [10] A "valve mechanism" does not allow the air to escape, as the brain's soft tissue blocks the "valve" defect on the exhalation cycle, causing a mass effect and increasing intracranial pressure. [1] Risks also included the sudden loss of CSF from the enlarged lateral and third ventricles after removal of the tumor in the fourth ventricle, [4] which was opened and explored in the course of surgery. The influx of air into the ventricle is greater in the presence of a noncompliant system, because the ventricles do not collapse as the fluid is drained and more air fills them. [6] Administration of mannitol may have enhanced the CSF loss by reducing brain volume and decreasing the production of CSF. [12] We believe that the progression to tension pneumoventricle was also favored by the presence of remnant blood in the fourth ventricle after surgery, predisposing the patient to obstructive hydrocephalus. [7] Tension pneumoventricle may manifest as deterioration of consciousness, convulsions, focal neurological deficit, or cardiac arrest. [12] Our patient experienced deterioration of the conscious level and arterial hypertension. The postoperative CN palsies were probably due to surgical manipulation. Transfer to the operating room followed by external ventricular drainage to relieve the pressure caused by the trapped air improved her clinical condition. An effective approach is to position the patient's head so that the air lies in the least dependent area and to fill the ventricles with irrigation fluid. [6] Intracranial hypertension can be detected by intracranial pressure monitoring in the postoperative period, but the benefits must be weighed against the associated complications, e.g., infection and limitation of the patient's movements. [12] Factors contributing to the development of tension pneumoventricle in this case include surgery in the sitting position; intraoperative administration of mannitol; sudden loss of CSF from the enlarged ventricles; opening of the fourth ventricle during surgery; the presence of remnant blood in the fourth ventricle; and, possibly, the existence of a craniodural defect. Because nitrous oxide can diffuse into air-filled spaces and expand any trapped air loculi, it may also be linked to tension pneumoventricle, thereby increasing intracranial pressure. [4,12] We believe that nitrous oxide was not a contributor in our case, as the patient deteriorated 4 hours after anesthesia was discontinued. According to our present understanding, this is the first report of this peculiar complication of fourth ventricular surgery. This case emphasizes an uncommon complication of posterior fossa surgery.
Temporary external ventricular drainage may represent an effective treatment of tension pneumoventricle.
Automobile Tires' High-Carbon Steel Wire

Definition: It is a well-known fact that more than 200 different materials are used to manufacture an automobile tire, including high-carbon steel wire. In order to withstand the acting forces, the tire tread is reinforced with steel wire or other products such as ropes or strands. These ropes are called steel cord. Steel cord can be of different constructions. To ensure a good adhesive bond between the rubber of the tire and the steel cord, the cord is either brass-plated or bronzed. Brass or bronze is used because copper, which is a component of these alloys, forms a high-strength chemical bond with the sulfur in rubber. For steel cord, high-carbon steel with 0.70-0.95% C is usually used; this carbon content ensures the high strength of the steel cord. This kind of high-quality, unalloyed steel has a pearlitic structure designed for multi-pass drawing. To ensure the specified technical characteristics, modern metal reinforcing materials for automobile tires (metal cord and bead wire) must withstand, first of all, a high breaking load with a minimum running-meter weight. At present, reinforcing materials in the strength range 2800-3200 MPa are increasingly used, the manufacture of which requires high-strength wire. The production of such wire requires the use of a workpiece with high carbon content, changes to the drawing regimes, patenting, and other operations. At the same time, it is necessary to achieve a reduction in the cost of wire manufacturing. In this context, the development and implementation of competitive processes for the manufacture of high-quality, high-strength wire as a reinforcing material for automobile tires is an urgent task.

Introduction

Over a relatively short period, the range of reinforcing materials for car tires has undergone significant change. Firstly, this can be explained by the increased requirements for automobile tires, which are now more stringent regarding mileage, weight, imbalance (power non-uniformity), and so on (Figure 1) [1]. To ensure the elevated technical characteristics of tires, modern metal cord and bead wire have to withstand a high breaking load with a minimum mass per running meter (linear density), have a sufficient level of bond strength with rubber, and have an increased resistance to fatigue failure under the applied loads. To date, special attention is paid to such indicators as the level of residual torsion, straightness, and deflection arrow, which directly affect the manufacturability of the rubber-cord sheets (bead rings) during technological processing on modern rubber lines. The idea of increasing the strength of reinforcing materials for automobile tires while decreasing their volume weight was justified in 1979 by the experience of such leading manufacturers of metal cord and bead wire as Bekaert (Belgium), Goodyear, Firestone (USA), Michelin (France), Bridgestone (Japan), and Pirelli (Italy) [2]. But the more active process of substituting high-strength materials for normal-strength reinforcing materials started at the beginning of the 1990s. While designing new constructions of high-strength steel cord and wires for reinforcing the bead rings of tires, different companies have developed technologies for producing wire for high-strength reinforcing materials. The most advanced technologies for manufacturing bead wire were developed in Japan.
At the moment, reinforcing materials of high strength are increasingly used instead of materials of normal tensile strength (NT, 2400-2800 MPa). They are divided into the following groups: high tensile (HT) materials, 2800-3200 MPa, and super tensile (ST) materials, 3200-3500 MPa. Furthermore, the increase in tensile strength promotes an increased endurance strength of the brass-plated wire for metal cord, especially for the compact beam structures used for reinforcing the car tire carcass (Figure 2) [3]. At present, a tendency to increase the tensile strength of steel cord is observed, as shown in Figure 4 [5-8]. The tensile strength of metal cord with a diameter of 0.20 mm was 2800 MPa in the 1970s, 3300 MPa in the 1980s, and reached the high strength of 3600 MPa in the early 1990s. The increase in speeds and in highway transport demanded a rise in the level of tensile strength to higher values [6,7]. In addition to high tensile strength, the wire for modern reinforcing materials must have a high level of ductile (fatigue) properties. For metal cord, this condition is necessary to ensure the processability of the double-twisting method of high-speed laying. At the level of the HT group (approximately 3000-3200 MPa), thin brass-plated wire of 0.2-0.35 mm in diameter must withstand a certain number of forward and backward twists. Otherwise, the quality of the finished steel cord (fatigue endurance) and the productivity of the laying process are sharply reduced. The aim of this paper is to describe the peculiarities of the manufacturing process of the high-carbon steel wire which is used as the reinforcing material for automobile tires. A general description of the technological process is given in Section 2, which also contains information about special aspects of every technological operation of the manufacturing process. This overview can help the reader to learn about those technological techniques which are necessary in order to produce high-carbon steel wire with the desired exploitation properties. The main tendencies in the improvement of the high-carbon steel wire manufacturing process are outlined in the conclusion.

Structure, Role, and Demands of the Technological Process of High-Carbon Steel Wire Manufacturing

The main direction of prospective technological design and the development of new technological processes in metallurgy is the creation of technological systems based on low-operation, unmanned, and waste-free technology, providing a multiple increase in labor productivity and a significant improvement in product quality and other indicators. The technology for the manufacture of high-strength wire for automobile tires should be considered as a whole. For example, in Japan there are conceptually two main directions for achieving the required level of steel-wire strength: strengthening in patenting and strengthening in drawing. Moreover, each of these two directions is assessed comprehensively in terms of the regularity of the pearlite structure refinement in the wire [9-11]. At the input stage of the technological process of manufacturing wire for reinforcing materials for automobile tires there are the main and auxiliary materials (high-carbon steel wire rod, copper and zinc anodes, etc.), and at the output of the process there is the cold-deformed (brass-plated or bronzed) wire.
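To see concretely why higher-strength wire reduces the running-meter weight at a fixed breaking load, here is a minimal sketch. Python is our choice, and the diameters and strength values are illustrative figures assumed by us (an upper-NT 2800 MPa filament against an upper-ST 3500 MPa one), not data reported in this paper.

```python
import math

RHO_STEEL = 7850.0  # kg/m^3, a typical density of steel

def breaking_load_N(diameter_mm: float, strength_MPa: float) -> float:
    """Breaking load = tensile strength x cross-sectional area."""
    area_m2 = math.pi * (diameter_mm / 2000.0) ** 2
    return strength_MPa * 1e6 * area_m2

def linear_density_g_per_m(diameter_mm: float) -> float:
    """Mass of one running meter of filament, in grams."""
    area_m2 = math.pi * (diameter_mm / 2000.0) ** 2
    return RHO_STEEL * area_m2 * 1000.0

# A 2800 MPa filament of 0.30 mm diameter:
F = breaking_load_N(0.30, 2800.0)                    # ~198 N
# Diameter of a 3500 MPa (ST-grade) filament with the same breaking load:
d_st = 2000.0 * math.sqrt(F / (3500.0e6 * math.pi))  # ~0.268 mm
print(f"equal-load ST diameter: {d_st:.3f} mm")
print(f"mass saving per meter:  {1 - linear_density_g_per_m(d_st)/linear_density_g_per_m(0.30):.0%}")
# At fixed breaking load, the mass per meter scales as 1/strength,
# so 2800/3500 means roughly a 20% lighter filament.
```

This simple scaling is the quantitative content behind the drive from NT towards HT and ST grades described above.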
The structure of the actual technological process of manufacturing wire for the reinforcement of materials for automobile tires consists of the following main interrelated subprocesses:
- surface preparation and "rough-medium" drawing of the sorbitized wire rod;
- patenting of the cold-deformed workpiece (patenting can also be a transitory operation used for the formation of the final properties of the wire);
- brass plating, to ensure the adhesion of the metal cord to rubber;
- finishing drawing;
- laying of the metal cord;
- annealing and processing by alternating deformation (if necessary, for the bead rings of tires).

The technological scheme can be presented by means of blocks, each block containing the name of a technological operation. The technological scheme for brass-plated, high-carbon steel wire of high strength for steel cord is presented in Figure 5, together with the range of diameters of the processed wire. For thin, high-strength brass-plated wire with 0.85% C, the diameters of the patented workpiece were chosen as shown in the blocks. Based on the experimental results, it was proved [12] that the intermediate operation of patenting is obligatory in the manufacturing process because it reduces wire breakage in drawing. At the present time, high-strength bronzed steel wire with a diameter of 1.60 mm is in high demand for all-steel automobile tires. The current way to obtain the desired level of mechanical properties is to use a special kind of heat treatment in an air-fluidized bed, with alternating bending as the final operations. The application of these kinds of processes guarantees a ratio of yield strength to tensile strength of 75-85%. The technological scheme for bronzed, high-carbon steel wire is presented in Figure 6. The implementation of these technological schemes (see Figures 5 and 6) at the industrial scale makes it possible to improve the competitiveness of the manufactured high-strength steel wire for cord [12].

Steel Rod for High-Strength Wire Manufacturing

The choice of steel rod for the manufacture of high-strength wire plays a significant role in the technological process of producing reinforcing materials for automobile tires [13]. One of the basic factors affecting the technological effectiveness of metal cord manufacturing, as well as its technical and exploitation characteristics, is the quality of the high-carbon steel rod used as a reinforcing material in automobile tires. Demands on the steel rod for metal cord and bead wire are formulated, first of all, taking into consideration the further regimes of its processing and the functions of the final product. To manufacture high-strength and ultra-high-strength metal cord, rod made from high-carbon steel with 0.70-0.95% C and 5.5 mm in diameter is used. The pearlitic microstructure is typical for steel with such a carbon content and consists of a ferrite-carbide mixture (Figure 7). As shown in Figure 7, the microstructure consists of troostite with a small amount of bainite, and of ferrite located as a network around the pearlite colonies. Special demands are placed on the chemical composition of the steel, the quantity of impurities and imperfections in the steel, and the macro- and microstructure. To ensure the required level of properties, such companies as Kobe Steel [14], Nippon Steel [15], Kawasaki Steel (Japan) [16], THYSSEN (Germany) [17], and others alloy their steel with chromium, copper, manganese, cobalt, etc.
It has been stated in many papers [18-24] that the chemical composition, the contamination of the steel by non-metallic inclusions, the results of liquation (segregation) processes, the presence of scale on the surface of the rod and its decarburization, and the peculiarities of the macro- and microstructure have a great influence on the processability of the steel rod in the subsequent technological operations (rough drawing, patenting, drawing of brass-plated wire, and laying), as well as on the quality of the final product.

Role of Drawing in the Technological Process of High-Strength Wire Manufacturing

In drawing, the cross section of a long rod or wire is reduced as it is pulled through a die; both tensile and compressive strains are present. The major processing variables in drawing are the reduction in cross-sectional area, the die angle, the friction along the die-workpiece interface, and the drawing speed. Drawing is usually performed as a cold-working operation. Drawing speeds reach as high as 50 m/s for steel cord. In drawing, reductions in the cross-sectional area per pass range up to about 45%. Usually, the smaller the initial cross section, the smaller the reduction per pass. Fine wires for steel cord are usually drawn at 15 to 25% reduction per pass. In order to avoid breakage of the wire in high-speed drawing, an "oil in water" emulsion coolant is used. In metal cord manufacturing it is impossible to produce high-carbon steel wire with a diameter of less than 1 mm directly from the rod because of the large total reduction required in drawing [25]. For this reason, the technological process «Rod-Wire for Metal Cord» is divided into several subprocesses and can be presented as a combination of the basic operations of drawing in monolithic dies and thermal treatment (patenting). Conventionally, it can be divided into two stages:
1. rod - workpiece for the final wire (rough process stage);
2. brass-plated wire after patenting - thin brass-plated wire (final process stage).

As a matter of fact, the rough process stage «Rod-Workpiece for the Final Wire» is the shape-generating stage, which ensures the necessary diameter of the workpiece for the further drawing so as to manufacture the final wire of the specified diameter. To lower the costs of the rough process stage it is necessary, on the one hand, to reduce the number of thermal treatments and, on the other, to keep in mind that with an increase of the total deformation degree the probability of wire breakage in drawing also rises. In particular, cracks, tears, and other kinds of damage are dangerous because such defects do not disappear during further heat treatment and degrade the quality of the wire as well as of the metal cord laid from it. In the manufacture of bead wire with a diameter between 1.30 and 1.85 mm, both the physical and chemical properties of the final product currently depend on the process stage «Rod-Workpiece for the Final Wire». For this reason, special attention is paid to the regimes of coarse drawing in the technological process for bronzed bead wire. The role of the final process stage (fine drawing) in the manufacture of metal cord, besides shaping, is to ensure the strength and ductile properties of the final wire. This is why the diameter of the workpiece for the final wire is chosen taking into consideration the necessary degree of total deformation.
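As a rough quantitative illustration of these total-deformation considerations, the following minimal sketch (Python; the diameters and the per-pass reduction are illustrative values assumed by us, not figures from this paper) estimates the number of passes and the total area reduction when a constant relative area reduction per pass is assumed.

```python
import math

def total_area_reduction(d0_mm: float, df_mm: float) -> float:
    """Total reduction in cross-sectional area, as a fraction."""
    return 1.0 - (df_mm / d0_mm) ** 2

def passes_needed(d0_mm: float, df_mm: float, r_per_pass: float) -> int:
    """Passes needed at a constant relative area reduction r_per_pass."""
    area_ratio = (d0_mm / df_mm) ** 2                 # A0 / Af
    return math.ceil(math.log(area_ratio) / math.log(1.0 / (1.0 - r_per_pass)))

# Illustration: drawing a 5.5 mm rod down to a 1.0 mm workpiece at 20% per pass.
print(f"total area reduction: {total_area_reduction(5.5, 1.0):.1%}")  # ~96.7%
print(f"passes needed:        {passes_needed(5.5, 1.0, 0.20)}")       # 16
```

Numbers of this magnitude make clear why the process is split into stages with an intermediate patenting step: accumulating such a large total deformation in one uninterrupted sequence would exhaust the plasticity reserve of the wire.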
The key points at this stage are the steel composition (carbon content), the degree of total deformation (which determines the diameter of the patented brass-plated workpiece), and the regimes of wet drawing. In drawing, the pearlite colonies of the processed high-carbon steel wire elongate along the drawing direction, as shown in Figure 8. This kind of microstructure is characterized by an alignment of the grains along the applied force. As compared with coarse drawing, the fine drawing of brass-plated wire is characterized by harsher friction conditions and a higher drawing rate. The approaches used for the design calculations associated with the drawing of high-strength wire for metal cord are presented in [26-28]. Quality control in drawing is based on the distribution of hardness across the wire, including fine brass-plated wire: the difference in hardness between the outer and internal areas of the wire should not exceed 7% [26]. This is why, besides the magnitude of reduction, the control factor for ensuring the properties of cold-drawn wire is the angle of the drawing tool. In the drawing of high-carbon steel wire, much attention is paid to the negative effect of deformation heating. It is considered that the temperature of the wire on the finishing drum of the drawing mill should not exceed 150 °C. The negative effect of temperature was confirmed by a tribological analysis of the contact system «Brass-Lubricant-Drawing Tool» carried out by specialists of «Michelin» (France) [29]. For the coarse drawing of wire, direct-flow drawing mills with intensive cooling systems for the drums and drawing tools are used. Well-known drawing mills are produced by «GCR EURODRAW SPA» (Italy), «MARIO FRIGERIO SPA COMPANY» (Italy), «ERNST KOCH GMBH & CO.» (Germany), and «SWARAJ TECHNOCRAD PVT. LTD.» (India). Special attention is paid to the quality of the surface of the motor blocks. For the wet drawing of brass-plated wire, drawing mills of higher deformation ratio produced by «M + E Macchine + Engineering S.p.a», «VVM», «Team Meccanica», and «Samp Steel» (Italy) are used. These drawing mills are equipped with a high-pressure emulsion supply system [30] and with cooling of the drawing tools, fine dies, and drawing drums [31]. The maximum drawing rate reaches 20-25 m/s. The drawing emulsion is fed to the group of mills through a closed loop, which makes it possible to control its parameters effectively. It is known [32] that an increase in drawing rate leads to a reduction of the viscosity of the lubricant and of the thickness of its layer in the deformation zone. As a result, the wear of the drawing tool increases, and the warming-up of the wire and of the pulling pulleys of the drawing mill intensifies. This should be taken into consideration when designing the regimes of wet drawing of thin high-carbon steel wire on sliding-type drawing mills. An increase of the drawing rate also promotes the localization of deformation in the outer layers of the wire and, eventually, an irregularity across the wire cross section. This fact enhances the influence of surface phenomena during the wet drawing of brass-plated wire; in other words, it enhances the influence of the scale factor. It has been stated [25,33-35] that when drawing high-carbon steel wire, the development of dynamic and static strain-aging processes leads to a deterioration in the plasticity and fatigue life of thin brass-plated wire.
For this reason, in drawing thin brass-plated wire on wet-drawing mills, lower degrees of deformation are used as compared with coarse drawing. Wire slip on the pulling pulleys of the drawing mill results in additional thermal effects on the wire. Taking into consideration the negative effect of temperature in drawing thin brass-plated wire, it is necessary to reduce wire slip on the final passes as well as the single reductions [36], and to ensure effective cooling of the wire at its exit from the finishing die.

Role of Thermal Treatment in the Technological Process of High-Strength Wire Manufacturing

There are two kinds of thermal treatment in the technological processes of metal cord and bead wire manufacturing. Patenting is used to recover the ductility of cold-drawn wire and to ensure the necessary level of mechanical properties in the final product. Annealing of the final bead wire is used for stress relaxation, which is necessary to meet the requirements of the normative and technical documentation on the relative elongation of the finished wire. In both cases, a reliable and efficient implementation of the temperature regime is required, one which provides not only the required set of properties for the finished product but also minimal energy consumption for the operation. Analysis of the applied patenting technologies shows that, to obtain the desired microstructure, air cooling, heating (cooling) in a fluidized bed of particles, quenching in water, temperature holding by direct transmission of electric current, etc. are used [26,37-42]. There are two variants of practice in patenting. In the first case, the cooling rate is regulated only by the temperature difference between the heating of the wire in a furnace and the bath of isothermal decomposition; while other parameters also affect the cooling rate, in particular the coefficient of forced convective heat transfer between the wire and the bath environment, they are not taken into account. The other way is to consider both the temperature difference between the furnace and the bath of isothermal decomposition and the coefficient of forced convective heat transfer between the wire and the bath environment (a minimal numerical sketch of this convective-cooling picture is given at the end of this article). Special attention is paid to this aspect in [43-45], where different methods ensuring a reliable regime of wire cooling are described. With regard to the process of patenting wire in lead, the efficiency of convective heat transfer during the decomposition of supercooled austenite can be increased by raising the speed of movement of the lead (wire). More promising methods of wire heating and cooling, from the point of view of energy saving, ecology, and harmful effects on the human body, can be used not only in patenting but also during the annealing of the finished bead wire. In particular, fluidized-bed heating technology, which, with proper technical support, has a number of advantages over heating in lead, is widely used to heat the wire to 450-500 °C.

Deposition of Adhesive Coatings on the Metal Cord Wire

To date, the assortment of wire for tire bead ring reinforcement has become wider, with a consequent substitution of brass-plated wire by bronze-plated wire, which is considered to be more competitive [46,47]. The technological schemes for brass-plated and bronze-plated bead wire are different. The brass coating is deposited on the wire by the sequential electrochemical deposition of a copper layer and a zinc layer.
In this case, it is necessary to heat the wire to initiate the diffusion of copper and zinc. The technological process of bronze deposition is more efficient: bronze is deposited chemically by means of the simultaneous deposition of copper and tin in one bath. One of the disadvantages of the bronze coating, as compared with the brass coating, is its lower level of adhesion to rubber. However, the technology of preparing the rubber mixtures at tire-manufacturing enterprises makes it possible to change this parameter through a correction of the compounding. As a result, the adhesion of the bronzed wire increases, which allows it to be used quite successfully for reinforcing the bead rings of tires. Taking into consideration the manufacturing costs together with the level of exploitation properties, a promising approach is to implement the industrial technology of depositing a bronze coating on bead wire instead of brass plating.

Use of Setups for Alternating Bending to Increase the Ductility of Bead Wire

The adoption by tire-manufacturing enterprises of modern high-capacity bead-making units has led to the formulation of strict demands on the mechanical properties of bead wire. In particular, the percentage ratio of yield strength to tensile strength should be equal to 75-85%, in accordance with the demands of the technical certificates for bead wire. This can be ensured by the alternating bending of cold-drawn wire. Alternating bending causes the appearance of stresses which break up the unstable substructures in the processed wire [48,49]. This kind of processing promotes an increase in its ductile properties.

Laying

When the laying of metal cord is carried out on single-twisting machines, the wire is not exposed to alternating deformation. For this reason, the existing reserve of plasticity in the wire ensures a sufficient level of its manufacturability in laying. Breakage of the wire in laying can predominantly be explained by the presence of non-metallic inclusions in the steel [50]. Machines operating on the principle of double twisting, in which the metal cord is twisted by two pitches during one rotation of the rotor, are usually used for laying. Laying on double-twisting machines is more efficient and effective compared with the same operation on single-twisting rotor-type machines. But at the same time, the thin brass-plated wire is exposed to high alternating deformation, which raises the demands on its mechanical properties [51].

Conclusions and Prospects

The manufacturing process consists of complex technological actions on the workpiece; during any technological operation, the workpiece changes its parameters. Furthermore, products made of modern materials can be processed technologically in a number of different ways. Under such conditions, the manufacturer should have algorithms and models for selecting the technological process considered optimal, taking into consideration the peculiarities of the industrial enterprise. This technological process has to guarantee the production of the finished product with the required level of quality and exploitation properties. Because of its high strength and high corrosion resistance, steel cord still remains the main reinforcing material for the tires of different types of automobiles. New trends in steel cord manufacturing processes are presented in [3,52,53].
The need to decrease pollutant emissions from gasoline engines has put forward new tasks for engineers: to find new ways to increase the tensile strength of steel cord. One of the prospective ways is to use steel with a nanostructure, which ensures high values of both tensile strength and ductility in the processed material [54-57]. At the present time, the implementation of methods of severe plastic deformation under industrial conditions is on the cutting edge of technological progress. However, considering that the diameter of steel wire for cord is less than 1 mm, it would be necessary to create alternative ways of achieving a similar nanostructure in the processed material. The technological process of steel cord manufacturing consists of several operations of different physical natures; for this reason, the risk of breakage of the processed material increases. This is why one of the important problems of the manufacturing process is to decrease the quantity of non-metallic inclusions, the segregation of alloying elements in the steel for cord, surface blemishes, etc. Engineering efforts should be directed at solving these issues. Prospects for the design of the manufacturing process for a competitive high-carbon steel wire for steel cord and bead wire should be based on the solution of the following tasks:
- development of a methodology for calculating the drawing modes of high-carbon wire, based on the selection and use of a fracture criterion and an assessment of the influence of the deformation-zone shape factor on fracture;
- investigation of the nature of metal flow in the near-surface layer during drawing, assessment of the influence of drawing factors on its depth, and development of practical recommendations for calculating the deformation modes of thin, high-strength brass-plated and bronze-coated wire for metal cord and bead wire.

Furthermore, the level of technology of every manufacturing process has a decisive influence on its economic performance. This is why the choice of the optimal variant of the technological process should be carried out on the basis of the most important indicators of its effectiveness: productivity, cost, and product quality. The search for new materials with the required level of exploitation properties to substitute for steel wire in automobile tires remains a challenge for scientists and engineers.
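As referenced in the thermal-treatment section above, here is a minimal numerical sketch of the convective-cooling picture in patenting, using a lumped-capacitance (Newtonian) cooling model, which is adequate for thin wire. All numbers are assumptions for illustration: the density and specific heat are typical handbook values for steel, while the heat-transfer coefficient h, the austenitizing temperature, and the bath temperature are plausible placeholders, not data from this paper.

```python
import math

# Lumped-capacitance (Newtonian) cooling of a thin steel wire in a patenting bath.
rho = 7850.0     # steel density, kg/m^3 (typical handbook value)
c   = 490.0      # specific heat of steel, J/(kg*K) (typical handbook value)

def wire_temperature(t_s, d_mm, h, T0=900.0, T_bath=550.0):
    """Wire temperature (Celsius) after t_s seconds in the bath.
    d_mm: wire diameter; h: assumed convective coefficient, W/(m^2*K);
    T0, T_bath: assumed furnace and isothermal-bath temperatures."""
    d = d_mm / 1000.0
    tau = rho * c * d / (4.0 * h)        # time constant of a cylinder, s
    return T_bath + (T0 - T_bath) * math.exp(-t_s / tau)

# Doubling h (e.g., by increasing the relative speed of the lead and the wire)
# halves the time constant and so roughly doubles the cooling rate:
for h in (1000.0, 2000.0):
    print(f"h = {h:.0f} W/(m^2*K): T after 0.5 s = {wire_temperature(0.5, 1.8, h):.0f} C")
```

The sketch makes explicit why the second patenting practice described above, which accounts for the forced convective heat-transfer coefficient and not only for the furnace-bath temperature difference, gives much tighter control over the cooling regime.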
Broadening the Scope of Societal Change Research: Psychological, Cultural, and Political Impacts of Development Aid

To date, the study of societal change in social and political psychology has been dominated by an intergroup relations research agenda. But in addition to intergroup dynamics, there are other major pathways to societal change and emancipation, which are almost never systematically considered in psychological research. The distribution of technologies (e.g., "ICT for development") or money (e.g., microcredits) are among the supposed drivers of societal change. Many development aid projects are anchored in expectations about the effect that such instruments have on anticipated primary goals and the emancipation of particular groups (such as women). In the current paper, we begin by reviewing theories in the field of social change. Social psychological theories mainly address the conditions under which social change stimulated by intergroup dynamics is likely to occur, while other, mainly historical and sociological, research has focused on the role of different technologies as drivers of social change in history. Next, we review recent research focusing on the anticipated primary goals and (often) unanticipated psychological and cultural changes resulting from development aid interventions, presenting two examples of such interventions in Ethiopia and Sri Lanka in more detail. We suggest (1) that development aid projects can instigate profound psychological and cultural change and (2) that the pathways to such changes are markedly different from those traditionally examined in the literature. At the political level, we reflect on the unanticipated side effects of development aid. We conclude with some recommendations for practice following from the research described.

If we follow the daily news reports on societal changes around the world, we may get the impression that intergroup conflict is a key driver of societal change. Citizens, for example, oppose their governments or fight for human rights and freedom in general, or for particular group rights for women, homosexual people, or other disadvantaged groups. Across countries, people protest against the power of financial institutions and support actions against undisclosed programmes to monitor telephone and internet traffic. All these examples illustrate that people engage in protest and struggles to effect significant alterations in cultural values, norms, and intergroup relations. These pathways are the main focus of the social psychological literature in the field of social movements, and they are conventionally understood as social struggles propelled by intergroup conflict. The political sciences and social psychology have developed different theories to account for social change. In the following we provide an overview of the relevant theories in the field, describe their main assumptions, and, most importantly, analyze each theory based on the three above-mentioned criteria: (1) conditions, (2) underlying processes, and (3) outcomes of social change. A pathway of social change refers to the sequence through which one specific condition leads to social change. We explicate the pathways of social change suggested by existing theories.
In political science and sociology, modernization theory has been developed since the 1950s (e.g., Inglehart, 1997; Inglehart & Baker, 2000; Lerner, 1958). This macro-theory has been developed mainly in large-scale historical research investigating the effects of the modernization process on social change at national/societal levels. Modernization is the process through which the activities of a more traditional culture are aligned with the activities, institutions, and tools of industrialized nations (Inkeles & Smith, 1974). By comparing different nations, political scientists have shown that modernization and economic development (conceptualized as industrialization, rationalization, structural differentiation, or political development) are associated with the adoption of values that are increasingly tolerant, rational, trusting, and participatory (e.g., Inglehart, 1997; Inglehart & Baker, 2000). At the same time, distinctive cultural traditions, such as following religious requirements, are persistent and do not seem to erode that quickly. Since modernization deals with societal change from agrarian societies to industrial ones, it is important to look at the technological drivers of change. This line of research has identified new technologies as a major source of societal change. Importantly, new technologies do not change societies by themselves; rather, it is the response to technology that causes change. According to Giddens (1991), traditional societies are based on direct interaction between people living close to each other. In contrast, modern societies are based on new communication technologies, such as mass media and interactive media, that stretch further across space and time and change how people interact. To conclude, modernization theory predicts that once industrialization sets in, it will foster social changes such as changes in attitudes, values, or behaviour. More precisely, the adoption of technologies is a key driver of change towards more tolerant, rational, trusting, and participatory values, as mentioned above. However, modernization theory is a macro-theory, and accordingly the majority of research in this field has focused on macro-level data in the form of cross-country comparisons and longitudinal analyses (e.g., Inglehart, 1997; Inglehart & Baker, 2000). Evidence about the mechanisms by which new technologies foster micro-level societal change has not been provided in this line of research.

In social psychology, by contrast, theories tend to focus on the micro-level processes making individuals strive for social change. At this level of analysis, the focus tends to be on the particular set of structural conditions that encourages the individual to engage in social action (e.g., conflict, or mobilizing for protest). The assumption in theories of social movements (e.g., van Zomeren, Postmes, & Spears, 2008), for example, appears to be that if individuals gear up for action, social change is likely to happen some way down the line. Similarly, the assumption is that if people do not engage in social movements, the status quo will remain unchanged. We first review theories which are based on these assumptions, and afterwards we review theories which explain the status quo.
One key theory that focuses on competition for limited resources between groups is realistic group conflict theory (RGCT; Campbell, 1965; Sherif, 1966). This theory looks at the structural conditions, outside the individual, that give rise to antagonistic intergroup relations; thus, it focuses on the development of intergroup conflict. When groups perceive incompatible goals and competition over valuable resources of a material (e.g., food, land, or money) or symbolic nature (e.g., power or status), they are likely to enter into competition, and antagonism is likely to rise. However, when there is no negative interdependence over such valuable resources, groups cooperate and exist in harmony. Thus, based on this theory, predictions can be derived (1) only about the conditions under which an intergroup conflict may arise, namely when there is negative interdependence over valuable resources, but not about (2) the underlying mechanisms or (3) the outcomes of social change.

Another well-known social psychological theory to explain social change is social identity theory (SIT; Tajfel & Turner, 1979, 1986; for an overview see Postmes & Branscombe, 2010). This theory proposes that, when acting in groups, we define ourselves in terms of our group membership (through social identity) and seek to positively distinguish our group relative to other groups. According to social identity theory, there are three major factors that influence whether people seek social change or not. The first factor is the perception of status stability. This perception refers to the intergroup status relation between low- and high-status groups. Social identity theory suggests that if the low status of an ingroup relative to an outgroup is unstable, lower-status group members are likely to engage in action. This is related to the second factor, which is permeability. If there is a chance of individual advancement in society (because group boundaries are permeable), members of lower-status groups may begin to disidentify with the group to try to join the higher-status group (social mobility). If permeability is low, low-status group members engage in social creativity, for example behaviours aimed at redefining the social value of their group. The third factor is legitimacy. Social identity theory suggests that if people perceive the low status of their ingroup as unstable, the situation of their low-status group as illegitimate, and if they can envisage other ways of organizing society (i.e., there are cognitive alternatives to the status quo), people will act collectively to challenge the status quo and bring about social change based on the combination of these three factors. Thus, the primary formulation of SIT (Tajfel & Turner, 1979) focused almost exclusively on (1) the structural conditions under which social change is likely to arise or not and (2) the processes by which low-status groups strive for social change or not, but (3) not directly on the outcomes of social change. Only recently have scholars begun applying these processes systematically to high-status groups (e.g., Haslam, 2001; Postmes & Smith, 2009), suggesting that high-status groups oppress others when facing relative gratification as well as when facing relative deprivation (Postmes & Smith, 2009) and show higher levels of prejudice compared to low-status groups (e.g., Guimond, Dambrun, Michinov, & Duarte, 2003).
Contemporary offspring of social identity theory focus on social psychological variables that predict mobilization for action. These variables partially echo those contained in SIT. For example, the social identity model of collective action (SIMCA; van Zomeren et al., 2008) proposes that collective action intentions and collective action are predicted by three proximate variables: social identification with the ingroup, a sense of collective injustice, and perceived efficacy of the action. Stronger feelings of social identification, collective injustice, and group-based efficacy to make change happen increase the likelihood that people will engage in collective action. This micro-level theory extends previous research by suggesting a combined effect of these three psychological variables in shaping whether disadvantaged groups strive for social change or not.

But social psychology does not just harbor theories that seek to explain conflict and change: there are also theories seeking to understand their absence. According to theories such as system justification theory (SJT; Jost & Banaji, 1994) and social dominance theory (SDT; Sidanius & Pratto, 1999), societies and people have inbuilt mechanisms that preserve the status quo. According to SJT, people have an intrinsic need to see the social system they live in as just and fair: they will accordingly act to preserve the system. According to SDT, people tend to value and maintain group-based hierarchies. This theory assumes that group-based hierarchies are reinforced by legitimizing myths that postulate how status and power should be distributed among different groups. These legitimizing myths can take one of two different forms. On the one hand, they can be hierarchy-enhancing, such that they promote social inequality. Sexism, racism, and nationalism are examples of myths that justify group-based domination. On the other hand, they can be hierarchy-attenuating, such that they promote social equality. Multiculturalism, socialism, and beliefs in human rights are examples of myths that work against the maintenance of inequality in society. To conclude, both theories specify mechanisms that maintain status differences and the status quo, but do not offer any explanations as to the conditions under which social change may occur (however, see Pratto, Stewart, & Bou Zeineddine, 2013, this section).

In all these theories the overriding assumption is that social change stems from an intergroup dynamic or even intergroup conflict. Social dominance theory and system justification theory attribute the maintenance of the status quo to an absence of conflict. Realistic group conflict theory, social identity theory, and the social identity model of collective action assume that the status quo will change due to intergroup dynamics or even conflict. Accordingly, the study of social change in social psychology has focused on the view that intergroup relations (i.e., conflict), protest and collective action are key to understanding change (for overviews see van Zomeren & Iyer, 2009; van Zomeren et al., 2008). The social psychological factors which motivate individuals to engage in collective action are in the foreground of the analysis. Collective action and mobilization might in turn result in social change and emancipation. However, the precise outcomes of social change are not specified by these theories.
In sum, only one theory, namely modernization theory, focuses on social change as an outcome, but it tends to offer little understanding and explanation of the underlying processes of this change. In contrast, realistic group conflict theory, social identity theory, and the social identity model of collective action offer a richer understanding and explanation of the underlying processes of intergroup relations in general. However, they do not study social change as an outcome. The intergroup dynamics or even conflicts may in turn trigger social change. Thus, these theories suggest that the key pathway to social change is driven by intergroup dynamics or conflicts between groups, leading to protest and collective action, resulting in social change, the precise form of which is not specified.

Social Change Driven by Technologies

Within the last centuries, dramatic societal changes have occurred in, for example, social life, the division of work, or communication. However, these changes cannot be accounted for solely by the above-mentioned social psychological theories of social change in the field of intergroup dynamics, except (in a very generic sense) by modernization theory. In this section we will take a closer look at some major social changes that have occurred over past centuries. As outlined above, intergroup strife can be a major factor in change processes. In addition, historical and sociological analyses suggest that technology is another prime driver of social change (e.g., Cowan, 1976, 1997; Giddens, 1991; White, 1962; Zuboff, 1988). There are many cases in which technology caused shifts in social relations (e.g., by causing changing occupational demands, increasing mobility or changing communication patterns). Although occasionally this may lead to intergroup conflict, many of the changes caused by technology play out through social dynamics between individuals, rather than those between groups. In the following we outline different theories and examples that illustrate how technologies have driven social change, in order to develop a more comprehensive account of possible pathways of social change.

Taking Marxist theory (e.g., Marx, 1867) as an example, there is broad consensus surrounding its analysis of how certain material conditions may give rise to social structures (e.g., the industrial revolution creating the conditions for urbanization, the creation of a working class, and so on). Thus, there is broad consensus that technological innovations have changed society in dramatic ways. However, the Marxist analysis of how these societal and relational changes would propel people to revolt has proven to be less accurate. Even though the social structure was established, people often did not revolt. We also have to acknowledge that over the last century social movements have stimulated social change: the rise of trade unions, social movements more generally, and the unprecedented extension of suffrage in the same period (Tilly & Wood, 2012). Our point here is not so much that intergroup relations would be irrelevant or uninteresting (far from it!) but rather, that the lack of interest in the adoption of technology as a driver of social change is an omission that is important to correct.
In disciplines other than social and political psychology, there are some landmark studies of social change in Western societies that have underscored the importance of technology adoption in change processes. One classic case is the rise of the feudal system in the 8th-century Frankish kingdom. This was a turbulent time characterized by the invasion of mainland Europe by Muslim armies, and their repulsion by the Franks, led by Charles Martel, at the battle of Poitiers (732). According to many historians, the feudal system arose as a result of these outside threats. Indeed, various historians have argued that these social cleavages resulted when the new and expensive method of fighting on horseback led to the growth of a specialized aristocracy of mounted warriors (e.g., White, 1962, p. 15, for a review). But on closer inspection, these social changes appear to have been preceded by technological innovation. White (1962) argues that "it was the Franks … who fully grasped the possibilities inherent in the stirrup and created in terms of it a new type of warfare supported by a novel structure of society which we call feudalism" (p. 28). This technical innovation of using stirrups for "mounted shock combat", in other words, instigated the rise of feudalism in the context of outside threats. It was the interaction of technology adoption and social conditions which led to mass change. As White (1962) observes, other turbulent societies (such as the Saxons) that did not realize the novel potential of the stirrup for mounted combat, but continued to use horses for transport as they had done for centuries, did not witness similar social changes.

This may be an early example of technology adoption interacting with existing social conditions to produce social change, but it is a pattern observed more regularly throughout history. Agricultural innovations in Great Britain from the 16th to the 18th century played a key role in freeing up labour to sustain the industrial revolution at the start of the 19th century. In social terms, there were social changes in settlement patterns, occupational structure, role division, and so on. The industrial revolution, in turn, was filled with innovations that had their own impact on social life. Transport technology (railroads, shipping) increased mobility in many parts of Europe, and made the emergence of the nation state a practical possibility (Landes, 2003). Communication technology (telegraph, telephone, the rotary press in combination with mass production of paper) fuelled powerful new industries centered on the exchange of news and information at the national and international level (e.g., Cowan, 1997). Finally, the 20th century is replete with examples of technology adoption fuelling social change: the mechanization of household chores, for example, massively reduced the need for household staff, again freeing up labour for industry (e.g., Cowan, 1997). The introduction of the motorcar is credited with changing residential patterns (suburbanization in the USA; McShane, 1995). Radio and television are perceived as pivotal technologies for the shaping of public opinion (e.g., Ansolabehere, Behr, & Iyengar, 1993).
But although all these technologies undeniably have social effects and are instigators of change, it is important not to fall into the trap of technological determinism. The history of technology is replete with examples that illustrate the complexity of predicting social change outcomes merely from the technology itself. For women, the invention of household technologies (e.g., washing machines, gas heating, and other inventions that gradually made it to the majority of households in the USA during the first half of the 20th century) marked dramatic changes in household activities (Cowan, 1976). Based on theory, one could have made the prediction that this freed up women's time and reduced their dependency, thereby creating the conditions for a social revolution in gender relations. But instead of fuelling feminism, technology adoption (at least in the first instance) enabled the emergence of the new role of housewife: middle class women did not take advantage of the freed-up time afforded by technology usage to rebel against structures or even to capitalize on their independence; rather, they excelled in the replacement of the roles formerly performed by their servants. In more general terms, technology usage provides certain opportunities, but how these will be used is hard to predict from the characteristics of the technology alone.

In the present time, this unpredictability of the social transformations that accrue from technology is illustrated by the so-called social effects of computing. More precisely, in the workplace technology has been heralded as a major agent of change for many decades, but the impact of technology on power relations within organizations is very hard to predict (Zuboff, 1988). Even if technology is experienced by its users as empowering, and communication technology as inherently democratizing (or "open"), there is nothing to stop powerholders from using the very same technologies for repression, autocratic control, and deceit. With reference to China, for example, one might on the one hand say that the communist party has much less control over public opinion than ever before. But at the same time the party is better informed about dissident thought than ever before in history, and there are no signs that increased freedom of information has political repercussions. Across the globe we can observe examples that can be interpreted as technologies facilitating democratization and collective action (as may be the case in some Arab countries, see Castells, 2012) and those same technologies facilitating state control (as appears to be the case in most Western countries, where governments are monitoring telephone and internet traffic of their citizens to an unprecedented degree).

Does this mean that technology exerts effects that are intrinsically unpredictable? Maybe not entirely, we would suggest. All technology has a range of primary consequences that may sometimes be inferred without problems.
Stirrups make riding a horse easier. Agricultural technology increases output and frees up labour. Industrialization does the same. All technologies thereby lend some power to those who can exercise ownership or command their use. The social consequences of these developments, however, are secondary outcomes. They tend to be much less easily predicted: for example, the freeing up of labour might increase unemployment, fuel developments in other industrial sectors, or simply increase the amount of leisure time. In the example of household technologies such as the washing machine, the time that housewives invested in household chores did not decrease; rather, the time gained was spent on existing and new household chores (e.g., because more elaborate meals were cooked, clothes were washed more often, etc.; see Cowan, 1976).

Interestingly, the primary consequences of computer technology (with which the research that we report below was primarily concerned) are notoriously hard to predict. Even the simple prediction that computing technology increases productivity has proven elusive. For information and communication technology (ICT), it can be said that it speeds up and facilitates certain types of communication (e.g., Castells, 1996; Katz & Rice, 2002; Sproull & Kiesler, 1991). But most clearly, too, ICT has a strong symbolic function: it signals modernity and innovation. Accordingly, information technology tends to benefit the social status of its users, at least in the eyes of those who have a positive image of it. In sum, the reviewed research clearly shows that technology usage can drive social changes resulting in remarkable changes in Western society.

Beyond Modernization: Pathways of Cultural Change

Cultural Change Instigated by Development Aid

In order to learn about societal change, it makes sense to study it where and how it occurs: on a micro level. One obvious place to look for change is in developing nations. Currently, 1.2 billion people live in extreme poverty worldwide. They suffer from hunger, lack of material possessions or money, and live on less than 1.25 U.S.
dollars a day (World Bank, 2010). Large sums of money are spent on development aid projects to improve the living conditions of people in developing nations. Interventions focus on different aspects that should improve the lives of people, such as installing computers to improve educational outcomes or offering microcredits so that people can develop their own businesses. These two examples illustrate two different primary goals of such development aid interventions envisioned by the project leaders. To date, there is a lot of debate about the effectiveness of development aid projects in reaching the projects' goals (e.g., Moyo, 2009; Sachs, 2005). One example that illustrates the ineffectiveness of such an intervention is the introduction of an improved cooking stove in India (Hanna, Duflo, & Greenstone, 2012). Cooking stoves are widely used in developing nations. However, the open fires inside the houses often cause health problems. To prevent these, an improved traditional cooking stove was first developed and tested in the laboratory, which reduced indoor air pollution and required less fuel. This improved stove was then distributed to 2651 households in 44 villages in the East of India. The primary goals of the intervention were to decrease indoor pollution, improve health, and reduce fuel consumption. However, the researchers only found a reduction in smoke inhalation in the first year and no further impact. Over the four years of the study, people did not use the stoves regularly or appropriately, and did not make the necessary investments to maintain the stoves, so that their usage ultimately declined over the years. Thus, the primary goals of the project were not achieved. Similar to the primary consequences of computer introduction in Western societies as discussed above, the primary consequences of development aid projects are not that easy to predict either.

Another criticism is that these innovations have been primarily developed in more individualist Western nations, based on the assumption that they should stimulate sustainable development. Introducing these innovations in traditional and collectivist developing nations is likely to also stimulate (often) unanticipated consequences, so-called side effects, which are likely to drive cultural changes. In the context of our research we therefore refer to cultural change as a more specific form of social change. Most interventions aim to achieve their primary goals as mentioned above and do not necessarily envision or intend to stimulate cultural change. Other interventions, however, may intend to stimulate cultural change by empowering specific groups such as women, for example by providing access to microfinance services. Thus, some cultural change attempts may be intended. However, these interventions may also stimulate cultural changes that were not intended, such as less positive side effects of decreased social cohesion or even conflict genesis. Although the effectiveness of development aid in reducing poverty is highly debated (e.g., Moyo, 2009; Sachs, 2005), it is undeniably the case that development aid is a good vehicle for studying cultural change attempts.
Over the past years, we have conducted various studies examining the psychological and cultural effects of these aid programs. We began studying the potential changes that small laptop computers would bring to Ethiopian schoolchildren. Later, we studied the effects of providing access to microfinance services among people living below the poverty line in Sri Lanka. In all projects, the same overriding questions were asked: is there evidence of anticipated and unanticipated psychological and cultural change, and what is the process by which it occurs?

A Laptop Program for Students in Ethiopia

Within the context of a laptop program for students, we have studied the psychological and cultural changes driven by the introduction of a single novel piece of modern technology, namely a laptop. This field experiment was conducted in Ethiopia, one of the least developed countries in the world, with a low level of modernization in a very collectivist and traditional culture (e.g., Becker et al., 2012). We compared students who were given a personal laptop that they could use in school and take home. To children in the developing world a laptop represents an information-rich novelty, which does not immediately compare to any other prior experience. The laptops provided to students enabled them to read their schoolbooks on the laptop, make calculations, use a text editing application, browse an offline database of Wikipedia articles and a picture gallery, play memory games, draw freeform images, make pictures and videos, chat with other laptops within 10 meters, or explore applications to compose music (for an overview see Hansen, Koudenburg, et al., 2012). It is important to note that there was no internet access available at the time of the study. In total, 4375 laptops of the One Laptop Per Child (OLPC) initiative were distributed in Ethiopia. Laptops were distributed in entire schools. Four schools were selected across the country (for a detailed project description and the selection criteria see Hansen, Koudenburg, et al., 2012; Kocsev, Hansen, Hollow, & Pischetola, 2010). Within the four schools we tracked students in classes in grades 5, 6 and 7 over two years and compared them with a matched comparison group of students without a laptop.

We first investigated the impact of laptop usage on the anticipated educational outcomes, the primary goal set by the organization. In Ethiopia this program set out to improve students' educational outcomes and prospects and to change the teaching style from frontal teacher-focused to student-centered by introducing the laptop for learning purposes in class (e.g., text books, small group exercises; Kocsev et al., 2010). Six months after laptop deployment we took a stratified sample within all schools of 203 students who received a laptop and matched it with a comparison group of 210 students without a laptop in grades 5, 6 and 7. Our research clearly shows that six months after deployment laptops were hardly used in class for learning purposes by the teachers; only 2.8% of students who received a laptop indicated that they used it in class (Hansen, Koudenburg, et al., 2012). However, students most frequently used their laptop in breaks (58%) and at home, outside (28.7%) and inside their parental home (10.5%), when they had some free time to do so.
Furthermore, we compared students' grades in the semester just before the laptop deployment and at the end of the semester (approximately eight months later). In line with previous research conducted in developed nations that showed some learning benefits in mathematics and writing, and because some activities on the laptop were presented in English (such as an offline Wikipedia), we focused on English and mathematics as well as the overall grade. Interestingly, we did not find any evidence of improved grades. Thus, our study is one of several that failed to replicate in the developing world such findings from developed nations (Nugroho & Lonsdale, 2010; Zucker & Light, 2009). Considering the fact that laptops appear to be hardly used in class and are more frequently used during breaks and outside school, the absence of laptop effects on school-related outcomes may not be surprising. Benefits may accrue if these laptops are more tightly integrated into the school curriculum and the assessments.

We further reasoned that the laptops would offer an entirely new environment for the development of specific cognitive abilities, namely reasoning abilities. When students start exploring the activities afforded by the laptop, they will use their reasoning abilities to learn more about the similarities and differences between activities. We tested two distinct cognitive abilities: reasoning by analogy and the application and development of categories. These cognitive abilities are fundamental for learning in general. To be able to assess abstract reasoning abilities independent of reading ability and language differences between the three different regions in our study, we used two subtests of a non-verbal and cross-culturally validated intelligence test (Tellegen & Laros, 1993, 2011). The results show that Ethiopian children who had laptops outperformed a matched comparison group of children without laptops on abstract reasoning tests of reasoning by analogy and categorization. This effect was stronger among older compared to younger children. Older students used more advanced and complex activities afforded by the laptop compared to younger students, suggesting that more advanced laptop activities may boost students' abstract reasoning more strongly. These better abstract reasoning abilities may stimulate learning in a fundamental way. Previous research suggests that these abilities are closely related to educational performance and success (e.g., Rohde & Thompson, 2007).
In a second step we systematically investigated the unanticipated psychological and social side effects of the intervention. Based on previous theorizing and research on modernization theory in political science (e.g., Inglehart & Baker, 2000; Inglehart & Welzel, 2005) and research in cultural psychology (e.g., Markus & Kitayama, 1991, 2010), we expected that children who used their laptop in a traditional and collectivist developing nation should develop a more agentic and independent sense of self compared to those who did not have a laptop. A range of indicators provide evidence for this change. First, we conducted a longitudinal test of the assumptions of modernization theory on the impact of technology usage on value endorsement at a micro-level across the whole sample (Hansen & Postmes, 2013). We compared 573 children who had received a laptop to the matched comparison group of 485 children. Children were asked to indicate to what extent specific values were important to them before and six months after the laptop deployment. In line with modernization theory, children with laptops endorsed modern values more strongly, such as achievement (e.g., to be ambitious), self-direction (e.g., to be independent), universalism (e.g., to treat everyone equally) and benevolence (e.g., to help people around me), and became more supportive of gender equality (e.g., boys and girls should be treated equally) compared to the control group. This change was stronger in rural compared to urban areas. At the same time, traditional values (e.g., to do what religion requires) and conformity (e.g., to do what you are told) increased as well.

One year after laptop deployment, we provided further cross-sectional evidence in another subsample of students from one school (Hansen, Postmes, van der Vinne, & van Thiel, 2012). Similarly to the previous results, students who had been using a laptop showed stronger endorsement of individualist values (i.e., achievement, self-direction) compared to two control groups of students without a laptop or students whose laptop had broken, while collectivist values (i.e., tradition, conformity) did not differ between the groups. In addition, we assessed students' self-construal, that is, how they see themselves. Students who were actively using their laptop showed a higher independent self-construal (e.g., it is important for me to be unique, different from others) compared to students who did not have a laptop, while students did not differ in their traditional cultural expression of an interdependent self-construal (e.g., it is important to me what others think of me). In sum, these results provide the first evidence that students start developing an agentic and more independent sense of self, as evidenced in stronger endorsement of more modern values and attitudes towards gender equality, as well as a stronger independent self-construal. At the same time, more traditional cultural expressions such as endorsement of traditional and collective values and an interdependent self-construal persisted. However, we can only speculate about how to explain this result. The increase of traditional and conformity values is consistent with previous research: inhabitants of countries that faced economic difficulties also showed increased endorsement of traditionalism (Inglehart & Baker, 2000). The authors suggest that this increase might be a protective response in times of change and uncertainty to strengthen the traditional community bonds. Important to note is that in these collectivist societies individual enhancement is
only possible with the enhancement of one's family.

We further investigated the underlying processes of the observed changes. Based on our research we suggest four interrelated pathways of social change. First, the laptop offers a fundamentally new student-focused learning opportunity (Hansen, Koudenburg, et al., 2012). The activities provided by this laptop require a set of complicated actions that children had to learn and undertake completely independently of their teachers and elders. For the first time students could access new information independently. A separate field experiment mentioned above provided evidence that students who used laptops performed better on an abstract reasoning test compared to a comparison group (Hansen, Koudenburg, et al., 2012). Interestingly, this effect was strongest among older children who used a range of more complicated programs for painting, memorizing, and chatting. In contrast, younger children mainly read their school books. We believe that the facilitation of independent learning stimulated by the laptop activities (e.g., by exploring programs or searching for information), independently of teachers and elders, is an important driver, especially among older children.

Second, related to the previous point, only active usage seems to drive these changes. To test this, we conducted another field experiment and compared students who were actively using their laptop with students whose laptop broke (i.e., mere ownership) and with a comparison group of students without a laptop at the same school (Hansen, Postmes, et al., 2012). Only students who were actively using their laptop showed higher endorsement of an independent self-construal and individualist values compared to the comparison groups. Students whose laptop broke and who could not use the applications provided by the laptop did not show these differences. Again, a similar pattern emerged: traditional cultural expression did not differ between the groups, suggesting that some aspects of culture persisted (e.g., tradition, conformity). This study provided initial evidence that active laptop use was instrumental in the development of an independent self-construal and in the adoption of more individualist values, while mere laptop ownership did not have a very large impact.

Third, a laptop of this kind is an immensely valuable property in this context. We assume that providing such a laptop to Ethiopian children constitutes a major upheaval of social relations in itself. Ownership of this object distinguishes a child from others, from their parents, teachers, and friends, and thereby makes children visibly different and independent from others. However, mere ownership is not the main driver. Specifically, these changes only emerged when technologies became part of social interactions, that is, if children 'shared' their laptop with their family and friends. Mere ownership of technology is not sufficient (Hansen, van der Klauw, & Postmes, 2013). This is consistent with a broader literature which suggests that the social impact of technology is largely due to the social actions and interactions it affords (e.g., Kraut et al., 2002). These changing actions and interactions facilitate the development of a more agentic and independent sense of self, evidenced by a transformation of self-perceptions and values. In other words, the laptop changes with whom and how children interact, offering new possibilities for personal development.
Access to Microfinance Services in Sri Lanka

Another example of development aid programs are programs that offer access to microfinance services. More than 30 years ago the concept of microcredit was introduced in Bangladesh by Nobel Peace Prize winner Muhammad Yunus to reduce poverty by providing small loans to the country's rural poor. Since then the number of microfinance institutions (MFI) and people who are receiving microcredits has greatly increased. Moreover, microcredit has also evolved over the years. It not only provides credit to the poor, but also includes myriad financial services such as savings, and non-financial services such as financial literacy training and skills development programs (now referred to as microfinance; Armendáriz de Aghion & Morduch, 2010). Women in particular have been targeted because they are more likely to repay the loan and reinvest their earnings in the business and their families, compared to men (e.g., Pitt & Khandker, 1998). Although large sums are spent on microfinance services in developing countries, we know very little about the longer-term impacts. More precisely, results on women's empowerment are mixed, with evidence of no, weak, or even negative impacts on empowerment, coupled with a lack of quantitative evidence (for an overview see Duvendack et al., 2011). Therefore, the question is: do microfinance programs meet their goals in terms of enhancing women's empowerment?

A recent cross-sectional field experiment in Sri Lanka focused on psychological and social change instigated by providing access to microfinance services among marginalized people who live below the poverty line (Hansen, 2013; Hansen & Fernando, 2013). In the context of an intervention, people who were not presently eligible for such microfinance services were recruited and received a special program aimed at helping them to become eligible for a micro loan. This program included three steps: (1) participation in training on soft skills, financial literacy, and technical skills, (2) learning to save money in groups, and (3) becoming eligible for a micro loan, but only if participants took part in the training and successfully started saving. For this quasi-experimental, cross-sectional field study, a random sample of 88 women was selected from two regions in the North of Sri Lanka who had, on average, been part of the program for 12-18 months. These women were interviewed and their results were compared with a matched comparison group of 84 women who had not yet been approached by a microfinance institution but were interested in joining one. This ensured that both groups shared the same motivation.
It is important to note that traditional countries tend to exhibit greater gender inequality, and women often tend to stay close to their house, resulting in a restricted interaction radius compared to men (e.g., Inglehart & Norris, 2003). This study provided the first evidence that women who were participating in this program showed higher levels of psychological empowerment, as documented in a stronger endorsement of control beliefs in the ability to achieve goals. Furthermore, women were asked to name all the groups they were a member of and to indicate the number of people they could ask for help outside their family if a family member were to get ill. Both indicators were used to assess women's social network size. Results show that women who profited from the program had bigger social networks compared to women who did not take part in the program. This study further provided evidence that these effects were most strongly predicted by the amount of training women had participated in and not by whether they had received a loan or not. We believe that these results provide some initial evidence that the individual capacity built by providing training, which offers new insights, and the new, increased social network gained through participation in the training are the pathways through which these changes come about. Interestingly, these effects were stronger among women who received a loan for their husband's business compared to women who received a loan for their own business. We believe that women who were running their own business were already to some extent more independent compared to women who were supporting their husband's business. For the latter, the participation in this program offered even newer content, a larger territorial radius, and more social interactions.

Pathways of Cultural Change Through Modernization

The above-mentioned micro-level research extends previous theorizing on social change by (1) introducing modern innovations as drivers of cultural change, (2) suggesting new underlying processes, and (3) providing evidence of cultural change such as value and attitude change. Next, we will outline in detail two key pathways of cultural change in developing countries. Based on our previous research on two different modern innovations (laptop usage and access to microfinance services) in two different cultures (Ethiopia and Sri Lanka), we suggest that there are two key conceptual pathways that stimulate cultural change in developing countries. Our recent research provides the first empirical evidence that the introduction of modern innovations can instigate cultural change, as demonstrated in stronger endorsement of more modern values, an increased social network, and stronger attitude change towards gender equality. At the same time, however, traditional values persisted. Based on our empirical evidence we suggest two key pathways that drove the change observed across the interventions we studied. First, across both interventions and documented on a range of indicators, people developed an agentic and independent sense of self through technology usage or participation in trainings. In both cases people started learning independently and acquired new information to which they did not have previous access. In the case of the laptop program for students, children, independently of their teachers and elders, learned to master the laptop depending on their own interest and proficiency level. In the case of the microfinance program, women learned new abilities with respect to life and business skills.

Second, both interventions offered new possibilities for acting and interacting, and in turn changed the structure of social relations. For example, when children start explaining to their parents or older siblings how to use the laptop, they are changing the deeply-embedded hierarchical structures that characterize their culture. Traditionally, children are less likely to address their fathers and to explain and teach their elders. We believe that it is the increased scope for action, in particular, that has the capacity to produce long-lasting change. In such processes, intergroup conflict may be a potential risk, but our research has found no evidence of it actually emerging. Instead, considerable cultural change occurred over the course of our studies apparently without any intergroup tensions, purely because the technology was opening new avenues for people to explore. Similarly, for example, by starting to interact with new people as well as learning and discussing new topics such as how to handle financial issues and how to set up a business in the context of training, women are changing the deeply-embedded hierarchical social roles that characterize their culture.
Implications for Theory and Research on Societal Change

To date, social psychological research on social change has mainly focused upon intergroup dynamics or conflict as a driver of change (e.g., van Zomeren & Iyer, 2009). We extend previous research by suggesting (1) that the adoption of modern innovations is a key driver in cultural change and (2) introducing two pathways of psychological change that are likely to stimulate cultural change. More precisely, one pathway is instigated by people's development of an agentic and independent sense of self through modern innovations such as technology usage or participation in trainings. The second pathway is instigated by new possibilities for acting and interacting with others. Both pathways are parallel processes that have the power to change the structure of social relations. Investigating the impact of development aid projects provides insight into how these changes occur and result in cultural and societal change. Our suggestions are based on research that provides insights into micro-level changes driven by two different modern innovations, and has so far focused on rather short-term impacts of up to two years. Future research should more carefully investigate the benefits and risks of the cultural changes instigated by development projects in the long run. The observed changes in cultural values, female empowerment, and attitudes towards gender equality might also trigger conflicts with friends, partners, or parents who did not profit from a program or who might be skeptical about the changes. More precisely, with respect to the microfinance intervention for example, it is likely that some people may profit from this program (i.e., alleviate poverty) whereas the lives of other families stay unchanged.

According to the theoretical perspective of relational models theory (Fiske, 1992, 2000), there are four fundamental choices human beings have in dealing with each other: communal sharing, ranking on the basis of authority, equal matching, and pricing. Sharing seems the dominant relational model among people living below the poverty line in developing countries. Furthermore, this is very likely not the only model; much of life revolves around ranking and matching as well. To date, pricing may still be less important. However, when for example people start selling small proportions of their farm produce and craft on markets, pricing is likely to become an important relational model, which is likely to result in dramatic social changes with respect to social cohesion. Future research should focus on longer-term changes with respect to social relations and social cohesion.

Practical and Political Implications

A key practical and political implication of our research is that 'modern' innovations can instigate the first signs of cultural change. Development aid projects are often set up with high and utopian primary goals anticipated by the involved stakeholders. As our research shows, for example in the context of a laptop program for students, the primary anticipated goal of improved educational outcomes is only partly met. Students showed, for example, improved abstract reasoning abilities, while the laptops were hardly used for learning in class and school performance (i.e., grades) did not improve. This is an important insight for involved stakeholders who want to improve the effectiveness of these programs. Most importantly, development aid projects also instigate unanticipated secondary changes such as changes towards an agentic and independent sense of self, stronger endorsement of 'modern' values alongside traditional values, and attitude change towards more gender equality. These so-called side effects are often not (directly) anticipated by involved stakeholders. However, they are often crucial for gaining insight into the social consequences of interventions, not least of all because changes in relations or tensions between members of a community might otherwise fuel conflict. Only if the whole community, such as partners, parents, family, and neighbors, is carefully involved in the intervention do these programs have a chance of achieving their goals.

Interestingly, we did not find evidence for an erosion of traditional culture in our studies. This is in line with previous research on value change (Inglehart & Baker, 2000): making your parents and community proud is a major motivation in collectivist developing countries. Only when also caring for one's family can people become successful in life.
Another political implication is that many development aid projects are designed to empower the poor. The programs are set up to provide people with tools that enable them to engage in activities that are aligned with more modern societies and thus foster economic and social development (i.e., Inkeles & Smith, 1974). However, the way these programs are often set up does not lead to the desired empowerment. In contrast, they often create a dependency-orientated rather than an independency-orientated relation with the donor side. For example, in the field of laptop programs for students, children receive their own laptop. They sometimes (depending on the approach of the program) receive a short introduction to the activities afforded by the laptop and can then start exploring these activities depending on their own interest and level. However, what happens when a laptop breaks down? In many contexts the expertise on how to repair a laptop is not yet available and, even worse, many of these programs do not offer any spare parts (e.g., Warschauer & Ames, 2010). To be able to further use the new technology, recipients are dependent on the often commercial interests of these organizations. On a higher level, developing countries often stay dependent on these organizations from the developed world. Thus, this intergroup 'helping' stays dependency-orientated and not autonomy-orientated (e.g., Nadler & Halabi, 2006; van Leeuwen & Täuber, 2010). At a more basic level, people might even stop using technologies because they broke or require electricity, which is often a scarce resource.

Many projects also neglect the additional costs that are required to ensure the sustainability of these projects. Again referring to the example of laptop programs for students, the significant additional costs of providing technical support such as repairs of broken laptops, electricity, or new software (e.g., Adhikari, 2011) and of adapting laptops to the specific context (e.g., Warschauer & Ames, 2010) are often neglected. These additional investments are crucial for the sustainable success of these laptop programs for students in developing countries. Only with accompanying support (see also Warschauer & Ames, 2010) and sustainable interest from involved stakeholders (e.g., Unwin et al., 2010) can these programs contribute to sustainable societal change.

Related to the previous point we want to pose a further question: who is helped (and in what way) by different forms of aid?
Development aid has very different forms and is aimed at different groups of beneficiaries. Thus, the answer to this question is multifaceted. We want to illustrate this with two examples. First, in both of the contexts addressed here, another party also profits from the development aid interventions (i.e., commercial interests): organizations providing laptop programs for students want to increase sales figures, and microfinance institutions aim to secure stable repayments and earn from the interest rates. Thus, development aid is clearly often not purely charitable. Second, in the field of laptop programs for students, our research suggests that the anticipated primary goals are often not really achieved, while unanticipated secondary social changes often do occur. We provide initial empirical evidence for some remarkable changes with respect to the development of an agentic and more independent sense of self (e.g., Hansen, Koudenburg, et al., 2012; Hansen, Postmes, et al., 2012; Hansen et al., 2013). However, laptop programs for students are quite expensive by local standards, and cost-benefit analyses may not always warrant the investments. Given that education is another key driver of societal change, other investments such as teacher training or other learning materials might be cheaper and may lead to other learning benefits. Thus, a careful, culturally-sensitive implementation of these interventions is crucial for sustainable development.

Conclusions

In this article, we proposed that cultural change can be stimulated by modernization, extending previous theorizing on societal change in political and social psychology that has mainly focused on intergroup dynamics. We advanced the idea that it is important to look at micro-level changes driven by development aid projects that introduce modern innovations from developed nations into more traditional developing nations. Studying these attempts to stimulate anticipated or unanticipated cultural change is another fruitful way to develop psychological theory in the field of societal change. On a broader level, we propose that examining societal change driven by development aid in developing countries might be crucial for future research on globalization issues. Such changes are likely to play a key role in processes of globalization. For example, social media played a crucial role in the Arab Spring and is likely to play an important role in the future. Within a more globalized world, cultural changes in developing nations will likely also influence the intergroup dynamics between developing and developed societies worldwide.
2014-10-01T00:00:00.000Z
2013-01-01T00:00:00.000
{ "year": 2013, "sha1": "33f84025fa49031fc326df510c7bfbe652d8902c", "oa_license": "CCBY", "oa_url": "https://jspp.psychopen.eu/index.php/jspp/article/download/4757/4757.pdf", "oa_status": "GOLD", "pdf_src": "CiteSeerX", "pdf_hash": "33f84025fa49031fc326df510c7bfbe652d8902c", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Political Science", "Sociology" ] }
235352979
pes2o/s2orc
v3-fos-license
Transition intensities of trivalent lanthanide ions in solids: Revisiting the Judd-Ofelt theory

We present a modified version of the Judd-Ofelt theory, which describes the intensities of f-f transitions by trivalent lanthanide ions (Ln$^{3+}$) in solids. In our model, the properties of the dopant are calculated with well-established atomic-structure techniques, while the influence of the crystal-field potential is described by three adjustable parameters. By applying our model to europium (Eu$^{3+}$), well-known to challenge the standard Judd-Ofelt theory, we are able to give a physical insight into all the transitions within the ground electronic configuration, and also to reproduce quantitatively experimental absorption oscillator strengths. Our model opens the possibility to interpret polarized-light transitions between individual levels of the ion-crystal system.

INTRODUCTION

The Judd-Ofelt (JO) theory has been successfully applied for almost 60 years to interpret the intensities of absorption and emission lines of crystals and glasses doped with trivalent lanthanide ions (Ln$^{3+}$) [1][2][3]. Despite its remarkable efficiency, this standard JO theory cannot reproduce some of the observed transitions, because of its strong selection rules. It is especially the case for europium (Eu$^{3+}$) [4,5], well known to challenge the standard JO theory [6]. Many extensions of the original model have been proposed to overcome this drawback [7], including e.g. J-mixing [8][9][10], the Wybourne-Downer mechanism [11,12], the velocity-gauge expression of the electric-dipole (ED) operator [13], relativistic or configuration-interaction (CI) effects [14][15][16][17][18], and purely ab initio intensity calculations [19]. In this respect, Smentek and coworkers were able to reproduce experimental absorption oscillator strengths with a very high accuracy, with up to 17 adjustable parameters [20]. But in spite of all these improvements, even the most recent experimental studies use the standard version of the JO theory [21,22].

In the standard JO theory, the line strength characterizing a given transition is a linear combination of three parameters $\Omega_\lambda$ (with $\lambda = 2$, 4 and 6), which are functions of both the properties of the Ln$^{3+}$ ion and the crystal-field parameters [3,6]. Since the $\Omega_\lambda$ parameters are adjusted by least-squares fitting, those two types of contributions cannot be separated. However, the properties of the impurity can be investigated by means of free-ion spectroscopy. In this respect, recent joint experimental and theoretical investigations have provided a detailed knowledge of some free-Ln$^{3+}$-ion structures [23][24][25][26]. Although such a study has not been made with Eu$^{3+}$, the continuity of the atomic properties along the lanthanide series opens the possibility to compute the Eu$^{3+}$ spectrum using a semi-empirical method, based on adjusted parameters of neighboring elements [27].

In this article, we present a modified version of the JO theory in which the properties of the free Ln$^{3+}$ ions, i.e. energies and transition integrals, are computed using a combination of ab initio and least-squares fitting procedures available in Cowan's suite of codes [28,29]. This allows us to relax some of the strong assumptions of the JO theory, for instance the strict application of the closure relation.
The line strengths appear as linear combinations of three adjustable parameters which are only functions of the crystal-field potential, giving access to the local environment around the ion. We account for the spin-orbit (SO) interaction responsible for spin-changing transitions by calculating the line strengths at the third order of perturbation theory. Our results on Eu$^{3+}$ suggest that the spin-mixing transitions are mainly due to the SO mixing within the ground electronic configuration, in contradiction with the Wybourne-Downer mechanism described in Refs. [11,12]. In addition, our model gives a simple physical interpretation of the transitions that are forbidden in the framework of the standard JO theory, including $^7F_0 \leftrightarrow {}^5D_0$, $^7F_0 \leftrightarrow {}^5D_J$ or $^7F_J \leftrightarrow {}^5D_0$ with $J$ odd. To benchmark our model, we reproduce quantitatively the set of experimental absorption oscillator strengths of Babu et al. [30], although we overestimate the strength of the $^7F_0 \leftrightarrow {}^5D_0$ transition.

The paper is organized as follows. Section II contains our analytical development resulting in the ED line strengths, which then allow for calculating oscillator strengths and Einstein coefficients (a numerical sketch of this conversion is given below). Our model is based on time-independent perturbation theory, up to second and third orders (see Subsections II A and II B respectively). Then in Section III, we apply our model to the case of europium, describing first the free-ion properties required for our model in Subsections III A-III D, and then the f-f transitions within the ground configuration in Subsection III E. Section IV contains conclusions and prospects.

II. ELECTRIC-DIPOLE LINE STRENGTHS

The aim of the present section is to derive analytical expressions for the electric-dipole (ED) line strengths, which enable one to characterize the absorption and emission intensities of Ln$^{3+}$-doped solids. Unlike the magnetic-dipole (MD) and electric-quadrupole (EQ) transitions [31], the ED ones are activated by the presence of the host material, which relaxes the free-space selection rules. We use similar hypotheses as in the original JO model [1,2]: the crystal-field (CF) potential slightly admixes the levels of the ground configuration [Xe]$4f^w$ and those of the first excited configuration [Xe]$4f^{w-1}5d$, where [Xe] denotes the ground configuration of xenon, dropped in the rest of the article. In the resulting perturbative expression of the ED line strength, we assume that all the levels of the excited configuration have the same energy. However, we relax some of the original hypotheses, by accounting for the energies of the ground-configuration levels, and by applying the closure relation less strictly. Unlike the standard and most common extensions of the JO model, we do not introduce effective operators, like the so-called unit-tensor operator $U^{(k)}$ [28], but rather work on the matrix elements of the CF or ED operators. To calculate the line strength, we firstly use second-order perturbation theory (see Subsection II A) and then third-order perturbation theory (see Subsection II B), for which the free-ion spin-orbit operator is part of the perturbation.
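For concreteness, the sketch below converts an ED line strength into a measurable absorption oscillator strength, using the standard conversion of Judd-Ofelt practice with the Lorentz local-field correction $\chi_{ED} = (n^2+2)^2/9n$. This is a minimal illustration under those textbook conventions, not an equation taken from the present derivation; the function name and the numbers in the example call are ours and purely illustrative.

```python
import scipy.constants as const

def oscillator_strength_ED(S_ED, wavelength, n, J):
    """Absorption oscillator strength from an electric-dipole line strength.

    S_ED       -- line strength in units of e^2 * a0^2
    wavelength -- transition wavelength in meters
    n          -- refractive index of the host at that wavelength
    J          -- total angular momentum of the initial (absorbing) level
    """
    a0 = const.physical_constants['Bohr radius'][0]
    nu = const.c / wavelength                 # transition frequency (Hz)
    chi_ED = (n**2 + 2)**2 / (9 * n)          # Lorentz local-field correction
    S_SI = S_ED * (const.e * a0)**2           # line strength in SI units (C^2 m^2)
    prefactor = (8 * const.pi**2 * const.m_e * nu
                 / (3 * const.h * const.e**2 * (2 * J + 1)))
    return prefactor * chi_ED * S_SI

# Purely illustrative call: a line strength of 1e-6 e^2 a0^2 at 465 nm,
# in a host of refractive index 1.5, absorbed from a J = 0 level.
print(oscillator_strength_ED(1e-6, 465e-9, 1.5, J=0))
```

An analogous formula with a $\nu^3$ prefactor yields the spontaneous-emission Einstein coefficient, which is why the same parameters describe both absorption and emission intensities.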
The common starting point of those two calculations is the multipolar expansion of the crystal-field potential,

$$V_{CF} = \sum_{k} \sum_{q=-k}^{+k} A_{kq}\, P^{(k)}_q, \qquad P^{(k)}_q = \sum_{j=1}^{N} r_j^k\, C^{(k)}_q(\theta_j, \phi_j), \tag{1}$$

where $k$ is a non-negative integer and $q = -k, -k+1, ..., +k$, $(r_j, \theta_j, \phi_j)$ are the spherical coordinates of the $j$-th ($j = 1$ to $N$) electron in the referential frame centered on the nucleus of the Ln$^{3+}$ ion, and $C^{(k)}_q$ are the Racah spherical harmonics of rank $k$ and component $q$, related to the usual spherical harmonics by $C^{(k)}_q(\theta_j, \phi_j) = \sqrt{4\pi/(2k+1)}\; Y_{kq}(\theta_j, \phi_j)$, see for example Chap. 5 of Ref. [32]. In Eq. (1), the quantities $P^{(k)}_q$ represent the electric multipole moments as defined in Chaps. 14 and 15 of Ref. [28]. The simplest way of calculating the CF parameters $A_{kq}$ is to assume that they are due to distributed charges inside the host material. More elaborate models can be used, like distributed dipoles resulting in the so-called dynamical coupling [7], or the vibration of the ion center-of-mass. This would affect the physical origin of the $A_{kq}$ coefficients, but not the validity of the forthcoming results [1].

A. Second-order correction

In the theory of light-matter interaction, the ED approximation arises at the first order of perturbation theory. Furthermore, the f-f transitions in Ln$^{3+}$-doped solids are only possible if the free-ion levels are perturbed by the CF potential. Therefore, using the first-order correction on the ion levels to calculate the matrix element of the ED operator gives in total a second-order correction. We call $|\Psi_i\rangle$ the eigenvectors associated with the ion+crystal system (without electromagnetic field). In the framework of perturbation theory, we express them as $|\Psi_i\rangle = \sum_m |\Psi_i^m\rangle$, where $m$ denotes the order of the perturbative expansion. In this subsection, we consider that the 0-th order, i.e. unperturbed, eigenvectors $|\Psi_i^0\rangle$ are the free-ion levels. Those belonging to the ground configuration $n\ell^w$ (with $n\ell = 4f$ for Ln$^{3+}$ ions) are written in the intermediate coupling scheme [28],

$$|\Psi_i^0\rangle = \sum_{\alpha L_i S_i} c_{\alpha L_i S_i}\, |n\ell^w \alpha L_i S_i J_i M_i\rangle, \tag{2}$$

where $L_i$, $S_i$ and $J_i$ are the quantum numbers associated with the orbital, spin and total electronic angular momenta respectively, while $M_i$ is associated with the z-projection of the latter. The free-ion levels of energy $E_i$ are degenerate in $M_i$. In Eq. (2), $\alpha$ is a generic notation containing additional information like the seniority number [28]. In the $4f^w$ configuration of Ln$^{3+}$ ions, the energy levels are usually well described in the LS coupling scheme (see Table II). In the first excited configuration $n\ell^{w-1} n'\ell'$, with $(n\ell, n'\ell') = (4f, 5d)$ for Ln$^{3+}$ ions, we consider free-ion levels in pure LS coupling,

$$|\Psi_t^0\rangle = |n\ell^{w-1}\, \overline{\alpha}\, \overline{L}\, \overline{S};\; n'\ell';\; L'S'J'M'\rangle, \tag{3}$$

where the overlined quantum numbers characterize the $n\ell^{w-1}$ subshell alone. As Table IV shows, the LS coupling is not appropriate for the energy levels of the excited configuration. But since, in our ED matrix-element calculation, we will assume that all the levels of the excited configuration have the same energy, the choice of coupling scheme is arbitrary, and so we take the simplest one.

Now we express the ED transition amplitude $D_{12}$ between eigenvectors $|\Psi_i^0\rangle + |\Psi_i^1\rangle$ ($i = 1, 2$), perturbed by the CF potential up to first order,

$$D_{12} = \langle \Psi_1^0 | P^{(1)}_p | \Psi_2^1 \rangle + \langle \Psi_1^1 | P^{(1)}_p | \Psi_2^0 \rangle = \sum_t \frac{\langle \Psi_1^0 | P^{(1)}_p | \Psi_t^0 \rangle \langle \Psi_t^0 | V_{CF} | \Psi_2^0 \rangle}{E_2 - E_t} + \sum_t \frac{\langle \Psi_1^0 | V_{CF} | \Psi_t^0 \rangle \langle \Psi_t^0 | P^{(1)}_p | \Psi_2^0 \rangle}{E_1 - E_t}, \tag{4}$$

where the index $p = 0$ denotes $\pi$ light polarization, and $p = \pm 1$ denote $\sigma^\pm$ polarizations. We recall that $\langle \Psi_1^0 | P^{(1)}_p | \Psi_2^0 \rangle = 0$, because in free space there is no ED transition between levels of the same electronic configuration. In what follows, we assume that all the energies of the excited configuration are equal, $E_t \approx E_{n'\ell'}$.
Rather than the center-of-gravity energy of the excited configuration, E_t can be regarded as the mean energy for which the coupling with both levels 1 and 2 is significant (see Fig. 2). Equation (4) contains matrix elements of P^(1)_p and V_CF, which are themselves functions of the P^(k)_q, as Eq. (1) shows. Being irreducible tensor operators, the P^(k)_q have matrix elements satisfying the Wigner-Eckart theorem [32],

⟨Ψ_t^(0)|P^(k)_q|Ψ_i^(0)⟩ = (2J′ + 1)^(−1/2) C^{J′M′}_{J_iM_i kq} ⟨Ψ_t^(0)‖P^(k)‖Ψ_i^(0)⟩.   (5)

By contrast, the products of two such matrix elements appearing in Eq. (4) are not irreducible tensors; still, we overcome this problem by expanding the product of two CG coefficients as given in [32], which yields Eq. (6), in which the quantity between curly brackets is a Wigner 6-j symbol. Equation (6) is interesting because the only dependence on the quantum numbers M_i is in the CG coefficient C^{J₁M₁}_{J₂M₂λµ}, while M′ is absent. The equation appears as a sum of irreducible tensors of rank λ and component µ coupling directly |Ψ₁^(0)⟩ and |Ψ₂^(0)⟩. The selection rules governing this coupling are ∆J = |J₂ − J₁| ≤ λ ≤ J₁ + J₂ and M₁ = M₂ + µ. Moreover, the triangle rule associated with C^{λµ}_{kq1p} imposes λ = k, k ± 1, µ = p + q and −λ ≤ µ ≤ +λ. Applying the same reasoning to the third line of Eq. (4), we obtain the same result as Eq. (6) except for the permutation of the couples of indices (k, q) and (1, p). Using the symmetry relation of CG coefficients C^{λµ}_{1pkq} = (−1)^(1+k−λ) C^{λµ}_{kq1p}, we get to the final expression for the transition amplitude, Eq. (7), in which we have introduced the quantities of Eqs. (8) and (9), where |ᾱL̄S̄, L′S′J′⟩ is a condensed representation of |Ψ_t^(0)⟩, see Eq. (3). The superscripts (k1) and (1k) correspond to the order in which the tensor operators P^(k) and P^(1) are written. For eigenvectors |Ψ₁,₂^(0)⟩ belonging to the ground configuration and |Ψ_t^(0)⟩ belonging to the first excited configuration, the 3-j symbol of Eq. (A1) imposes that the CF-potential matrix elements are non-zero for k = 1, 3 and 5 only, which, according to Eq. (6), imposes λ = 0, 1, ..., 6. By contrast, in the standard version of the JO theory, λ = k + 1 = 2, 4 and 6. The λ = 0 contribution in Eq. (6) comes from the dipolar term k = 1 of the CF potential; it is the only non-zero contribution when J₁ = J₂ = 0, for instance for the ⁵D₀ ↔ ⁷F₀ transition in Eu³⁺. Our odd-λ contributions are responsible for transitions like ⁵D₀ ↔ ⁷F₃,₅ and ⁵D₃ ↔ ⁷F₀; they arise because we consider distinct energies for levels 1 and 2, E₁ ≠ E₂, unlike the standard JO theory. But since the energy difference |E₂ − E₁| is significantly smaller (although not negligible) than E_n′ℓ′ − E₁,₂, those transitions are weak. Finally, since the operators P^(k) do not couple different spin states, the spin-changing transitions are only due to the mixing of different spin states within the ground-configuration levels |Ψ₁,₂^(0)⟩, see Eq. (2).

We now calculate the ED line strength S_ED = Σ_{pM₁M₂} (D₁₂)². Expanding Eq. (7) twice gives many sums: in particular on p, M₁, M₂, k, q, µ and J′, but also on k′, q′, µ′ and J″ (coming from the second expansion of D₁₂). Focusing on the sum involving the CG coefficients, we obtain Eq. (10), where the Kronecker symbols come from the orthonormalization relation of CG coefficients. Plugging Eq. (10) into the line strength gives Eq. (11). When expanded, the last two lines of Eq. (11) contain four terms: two of the kind of Eq. (12), where the sum on λ is actually the orthonormalization relation of 6-j symbols; and two of the kind of Eq. (13), where we use some properties of 6-j symbols (see Ref. [32], p. 305).
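The 6-j orthonormalization relation used for the terms of the kind of Eq. (12) can also be verified directly. A small sketch with SymPy (the sample angular momenta are chosen arbitrarily):

    # Numerical check of the 6-j orthogonality relation
    #   sum_x (2x+1) {a b x; c d p} {a b x; c d q} = delta_pq / (2p+1),
    # valid when the triads (a, d, p) and (c, b, p) satisfy the triangle
    # rule; it is this relation that collapses the double sum over lambda.
    from sympy.physics.wigner import wigner_6j

    a, b, c, d = 2, 3, 3, 2
    for p in (1, 2):
        for q in (1, 2):
            s = sum((2 * x + 1) * wigner_6j(a, b, x, c, d, p)
                                * wigner_6j(a, b, x, c, d, q)
                    for x in range(abs(a - b), a + b + 1))
            print(p, q, s)   # expect 1/(2p+1) if p == q, else 0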
The final expression of the line strength is given in Eq. (14). It looks very different from the standard JO line strength S_ED = Σ_λ Ω_λ |⟨Ψ₁^(0)‖U^(λ)‖Ψ₂^(0)⟩|², especially because it does not depend on λ, but depends on J′ and J″ (which are by contrast eliminated in the standard case). The index λ is still relevant in the ED transition amplitude D₁₂, see Eq. (7), because it allows for deriving the selection rules, but it disappears in the line strength, where we consider unpolarized light and ions (that is to say, sums on p, M₁ and M₂). In Eq. (14), the influence of the CF potential is only contained in the three parameters X_k = (2k + 1)⁻¹ Σ_q |A_kq|², for k = 1, 3 and 5, which are q-averages of the square of the A_kq. In what follows, they will be treated as adjustable parameters, whereas all the atomic properties will be computed using atomic-structure methods.

B. Third-order correction

In this section, we address the influence of spin-orbit (SO) mixing in the excited configuration on spin-changing f-f transitions. Contrary to the ground configuration, the LS coupling scheme is by far not appropriate to interpret the levels of the 4f^(w-1)5d configuration (see Table IV), because the electrostatic energy between the 4f and 5d electrons and the SO energy of the 5d electron are comparable. Therefore one can expect these excited levels to play a significant role in the spin-changing transitions. To check this hypothesis, we will investigate the effect of the SO Hamiltonian of the ion, H_SO, using perturbation theory. Namely, we define a perturbation operator V containing the SO and CF interactions,

V = H_SO + V_CF.   (15)

In consequence, the new unperturbed eigenvectors related to the ground configuration are called manifolds, i.e. atomic levels for which the SO energy is set to 0. Those manifolds |Ψ̃_i^(0)⟩, of energy Ẽ_i, are degenerate in M_i as previously, but also in J_i, and they are characterized by a single pair of L_i and S_i quantum numbers,

|Ψ̃_i^(0)⟩ = Σ_α c̃_α |nℓ^w αL_iS_i J_iM_i⟩.   (16)

Some manifolds, like the lowest ⁵D one in Eu³⁺, are linear combinations of different terms having the same L and S but different seniority numbers, hence the sum on α in Eq. (16). For the excited configuration, the unperturbed eigenvectors are those given in Eq. (3).

The selection rules associated with H_SO and V_CF are very different. In particular, H_SO couples unperturbed eigenvectors of the same configuration, whereas the odd terms of V_CF couple configurations of opposite parities. Therefore, the influence of both the SO and CF potentials appears through products of three matrix elements, one each of H_SO, V_CF and P^(1)_p, and we need to go to the third order of perturbation theory to calculate the transition amplitude, Eq. (17), in which the second-order correction of the eigenvectors is given in Eq. (A4). By expanding Eq. (17), we get six terms corresponding to the six possible products of matrix elements of H_SO, P^(1)_p and P^(k)_q. Since H_SO couples states of the same configuration, unlike P^(1)_p and P^(k)_q, we distinguish two kinds of terms: (i) terms for which the SO interaction mixes levels of the excited configuration, for example quintets and septets in Eu³⁺; (ii) terms for which the SO interaction mixes manifolds of the ground configuration, for example, in Eu³⁺, ⁷F with ⁵D, ⁵F and ⁵G. Because H_SO is a scalar, i.e. a tensor operator of rank 0, the application of the Wigner-Eckart theorem gives a trivial CG coefficient, Eq. (18). So, applying the Wigner-Eckart theorem to P^(k)_q and P^(1)_p as in Eq. (5), each product of three matrix elements can be expanded in a similar way to Eq. (6).
For example, one of the products gives an expression analogous to Eq. (6), where |Ψ_t,u^(0)⟩ are two eigenvectors of the excited configuration with the same total angular momentum J′. The other products give similar results: the order of the reduced matrix elements in the last line is of course the same as the order of the matrix elements in the first line; if P^(1)_p appears before P^(k)_q, the 1 and k are interchanged in the CG coefficients and 6-j symbols, as in Eq. (7). Gathering the six matrix-element products, we can write the ED transition amplitude as Eq. (19), where the terms D^(k₁k₂k₃)_{12,J′} are built in analogy to Eqs. (8) and (9): the order of the superscripts is the one in which the matrix elements of the operators appear (the "0" standing for H_SO).

Firstly, the terms in which H_SO acts on level 1 take the form of Eq. (20), with the energy denominators ∆_{k₁k₂} defined in Eq. (21). In Eu³⁺ for example, for |Ψ̃₁^(0)⟩ in the lowest ⁵D manifold and |Ψ̃₂^(0)⟩ in the ⁷F manifold, H_SO couples |Ψ̃₁^(0)⟩ to the ⁷L′₁ manifolds of the ground configuration (actually there is only one: ⁷F). The quantum numbers (ᾱL̄S̄, L′₂) characterize the septet levels (S₂ = 3) of the excited configuration. Similarly, the terms in which H_SO acts on level 2 take the form of Eq. (22), where ∆_{k₁k₂} is again given by Eq. (21); here the SO Hamiltonian couples the ⁷F manifold to the various quintet manifolds, for instance the ⁵D, ⁵F and ⁵G manifolds. The remaining terms correspond to the Wybourne-Downer mechanism [11], in which H_SO couples the quintet and septet levels of the excited configuration; namely, they take the form of Eq. (23), where ∆_{k₁k₃} is given by Eq. (21).

If we assume that the spin-orbit interactions in Eqs. (20), (22) and (23) are of the same order of magnitude (see Table I), the main difference between them comes from the energy denominators. The quantity ∆_lm is on the order of several tens of thousands of cm⁻¹, while the differences between the energies of different manifolds of the ground configuration are on the order of several thousand cm⁻¹. This means that Eq. (23) is, roughly speaking, one order of magnitude smaller than Eqs. (20) and (22). This valuable piece of information is brought by the third-order correction.

Combining Eqs. (14) and (19), we can see that the ED line strength S_ED now contains 36 terms, with products of the kind D^(κ_a)_{12,J′} D^(κ_b)_{12,J″}. For the 18 terms in which k and 1 appear in the same order in κ_a = (k_1a k_2a k_3a) and κ_b = (k_1b k_2b k_3b), we have the same prefactor as in the second line of Eq. (14), that is (2J′ + 1)⁻¹. For the 18 other terms, in which k and 1 appear in different orders, we have the prefactors with the 6-j symbols as in the second and third lines of Eq. (14). Namely, we can write the line strength as Eq. (24), where κ_a and κ_b designate the possible combinations of the indices k, 1 and 0 (this 18 + 18 bookkeeping is checked in the short sketch below). The quantity δ_{κ,(k1)} equals 1 if κ is a combination in which k appears first and 1 second, namely κ = (k10), (k01) or (0k1), and 0 otherwise. The quantity δ_{κ,(1k)} corresponds to the inverse situation. Similarly to Eq. (14), the line strength of Eq. (24) depends on the CF potential through the three parameters X_k = (2k + 1)⁻¹ Σ_q |A_kq|², which will be treated as adjustable in the next section.

III. APPLICATION TO EUROPIUM

In this section, we aim at benchmarking our model against experimental data. To this end, we have chosen the measurements of absorption oscillator strengths by Babu et al. [30], who performed a thorough spectroscopic study of Eu³⁺-doped lithium borate and lithium fluoroborate glasses. Because our model relies on free-ion properties, we start by studying the free-ion energies of the two lowest Eu³⁺ electronic configurations, 4f⁶ of even parity and 4f⁵5d of odd parity, and the free-space transitions between them.
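Before detailing the free-ion calculations, here is the quick combinatorial check announced above (plain Python, illustrative only, not part of the paper's codes):

    # Enumerate the combinations kappa = permutations of (k, 1, 0) entering
    # Eq. (24) and classify the 36 products D^(kappa_a) D^(kappa_b) by the
    # relative order of "k" and "1": 18 same-order and 18 opposite-order terms.
    from itertools import permutations, product

    perms = list(permutations(("k", "1", "0")))   # the 6 possible kappa
    def order(kappa):
        return "k1" if kappa.index("k") < kappa.index("1") else "1k"

    pairs = list(product(perms, repeat=2))        # the 36 products
    same = sum(order(a) == order(b) for a, b in pairs)
    print(len(pairs), same, len(pairs) - same)    # -> 36 18 18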
A. Calculation of free-ion energy parameters

The calculations of the Eu³⁺ free-ion spectrum are performed with the semi-empirical technique provided by Robert Cowan's atomic-structure suite of codes [29], whose theoretical background is presented in Ref. [28]. In a first step, ab initio radial wave functions P_nℓ for all the subshells nℓ of the considered configurations are computed with the relativistic Hartree-Fock (HFR) method. Those wave functions are used to calculate energy parameters, for instance center-of-gravity configuration energies E_av, direct F^k and exchange G^k electrostatic integrals, or spin-orbit integrals ζ_nℓ, that are the building blocks of the atomic Hamiltonian. In a second step, the latter are treated as adjustable parameters of a least-squares fitting calculation, in order to find the best possible agreement between the Hamiltonian eigenvalues and the experimental energies. To make comparisons between different elements and ionization stages, one often defines the scaling factor (SF) f_X = X_fit/X_HFR between the fitted and the HFR value of a given parameter X. In an attempt to improve the quality of the fit (and therefore, the accuracy of the resulting eigenvectors), a variety of "effective-operator" parameters, called α, β and γ, as well as "illegal"-k F^k and G^k, have been introduced, representing corrections to both the electrostatic and the magnetic single-configuration effects [28]. "Illegal" k means values of k for which k + ℓ + ℓ′ is odd. These effective parameters, unlike the other parameters, cannot be calculated ab initio. By contrast, we do not include the M^k, T^k and P^k parameters that are sometimes used for the Ln³⁺-ion ground configuration. The general methodology for our fitting calculations is as follows: (a) fitting the ab initio parameters, starting from their HFR values, while the effective parameters are forced to zero; (b) fixing the parameters resulting from step (a) and fitting the effective parameters; (c) using the final values of (b), fitting all the parameters together.

Our fitting calculations require experimental energies. For the Eu³⁺ ground configuration 4f⁶, we find them in the NIST ASD database [33]. However, no experimental level has been reported for the 4f⁵5d configuration. Because the 4f^w configurations (with 2 ≤ w ≤ 12) and the 4f^(w-1)5d ones (with 3 ≤ w ≤ 13) possess the same energy parameters, we perform least-squares fitting calculations of some 4f^(w-1)5d configurations for which experimental levels are known, namely for Nd³⁺ (w = 3) and Er³⁺ (w = 11) [24-26]. Then, relying on the regularities of the scaling factors f_X along the lanthanide series, we multiply the HFR parameters of Eu³⁺ by the scaling factors given in Table I to compute the energies of 4f⁵5d. The interpretation of the Nd³⁺ and Er³⁺ spectra shows that, because CI mixing is very low, a one-configuration approximation can safely be applied in both parities, which is done here. For Nd³⁺, the experimentally known levels are taken from the article of Wyart et al. [24]: there are 41 levels for the 4f³ configuration and 111 for the 4f²5d configuration. For Er³⁺, 38 experimental levels of the configuration 4f¹¹ and 58 of 4f¹⁰5d are taken from Meftah et al. [25]. For the 4f⁶ configuration of Eu³⁺, the NIST database gives 12 levels [33]. Table I shows a comparison of the final SFs (for ab initio parameters) or the fitted values (for effective parameters), for the two lowest configurations of the above-mentioned ions.
It also shows the parameter values used in the Eu³⁺ spectrum calculations of the next subsections. In the 4f^w configurations, the least-squares fitting calculations, performed for each element, illustrate the regularities of the SFs for the F^k and ζ_f parameters. Regarding the effective parameters, the negative values of β are usual, while the small values of α and γ of Er³⁺ are not. The regularities are also visible between the 4f²5d and 4f¹⁰5d configurations of Nd³⁺ and Er³⁺, respectively. Therefore, we calculate our Eu³⁺ parameters by multiplying the HFR values by the average SF obtained for Nd³⁺ and Er³⁺. The effective parameters are those obtained for Nd³⁺, and the center-of-gravity energy of 4f⁵5d is calculated by assuming that the difference E_av(4f^(w-1)5d) − E_av(4f^w) increases linearly with w. Figure 1 shows the levels computed with the parameters of Table I, whose energies lie between 0 and 100000 cm⁻¹, for the 4f⁶ and 4f⁵5d configurations. They will be analyzed in detail in the next two subsections.

B. Energy levels of the ground configuration 4f⁶

For the 4f⁶ configuration, values from the NIST database [33] were taken as the experimentally known energy levels. Because the free ion has not been analyzed yet, those energies were determined by interpolation or extrapolation of known experimental values or by semi-empirical calculations [35]. Table II shows a good agreement between these experimental values, our computed values and the theoretical values calculated by Freidzon and coworkers [34]; our values are closer to the experimental ones in the ⁵D manifold. Note that a direct comparison with the article of Ogasawara and coworkers [17] is difficult, as the authors do not give tables of energy levels for Eu³⁺. In total, the 4f⁶ configuration contains 296 levels with J values ranging from 0 to 12. Table II also illustrates that the ground-configuration levels are well described by the LS coupling scheme. Some levels are mainly characterized by a single term, like ⁷F or ⁵L, but others are shared between several terms with the same L and S quantum numbers but different seniority numbers, like ⁵D(1,2,3) or ⁵G(1,2,3), where the numbers indicate that these terms come from different parent terms of 4f⁵ (see Subsection II A). The small deviations from LS coupling are due to the SO interaction, for example a small ⁵D component in the ⁷F levels. The terms coupled by SO are such that ∆L = 0, ±1 and ∆S = 0, ±1, in agreement with Eq. (A2). Finally, Table III contains the energy values and eigenvectors of the manifolds with S = 2 and 3, calculated by setting the spin-orbit parameter ζ_f of Table I to 0. This information is necessary to build our third-order theory, see Eq. (16). Note that the first excited manifold is a superposition of ⁵D(3), ⁵D(1) and ⁵D(2) terms; due to its strong importance in Eu³⁺ spectroscopic studies, it will be denoted ⁵D in the rest of the paper.

C. Energy levels of the first excited configuration 4f⁵5d

This subsection is devoted to the energy levels of the first excited configuration 4f⁵5d. The parameters necessary for the calculations are given in Table I. Table IV shows the 20 lowest energy levels with J = 0, 1 and 2, along with their three dominant eigenvector components. It shows that the levels of the 4f⁵5d configuration do not possess a strongly dominant eigenvector (or a group of eigenvectors) characterized by the same L and S quantum numbers.
This means that, unlike for the ground configuration (see Table II), the LS coupling scheme is not appropriate for the excited configuration. It can be shown that the jj coupling scheme is not appropriate either, because the spin-orbit energy of the 5d electron is of the same order of magnitude as the electrostatic energy between the 4f and 5d electrons. The eigenvectors are therefore written in pair coupling, i.e. as linear combinations of LS-coupling states. In a given energy level, the L̄ and S̄ quantum numbers, which characterize the parent term of the 4f⁵ subshell, are common to the majority of the eigenvector components. With increasing energy, the levels mainly possess ⁶H°, ⁶F° and ⁶P° characters; then come the quartet and doublet parent terms. Indeed, the SO interaction within the 4f⁵ subshell is too small to significantly mix different L̄ and S̄ of the 4f⁵ subshell. By contrast, the total L and S quantum numbers of the LS states differ at most by one unit. For example, we notice the pairs ⁷H-⁵G (∆S = 1 and ∆L = 1), ⁷G-⁷F (∆S = 0 and ∆L = 1) and ⁵F-⁷F (∆S = 1 and ∆L = 0) for the levels at 78744, 79541 and 80396 cm⁻¹, respectively. In consequence, the mixing between quintet and septet states of Eu³⁺ is mainly due to the SO interaction of the 5d electron. That is why we ignore the influence of the 4f electrons when accounting for the Wybourne-Downer mechanism (see Subsection II B and Eq. (A3)).

D. Free-ion transitions between the two configurations

Equations (14) and (24) show that our f-f transition line strengths require the reduced multipole moments of some free-ion transitions, which only occur between levels of the ground and excited configurations. In this subsection, we focus on the electric-dipole (ED) free-ion transitions (k = 1), which are the most intense. A widely used quantity for the discussion of spectral lines and transitions is the absorption oscillator strength f_12,ED, which is related to the ED line strength S_ED through the expression

f_12,ED = [2 m_e a₀² (E₂ − E₁) / (3 ħ² (2J₁ + 1))] S_ED,   (25)

where 1 (2) denotes the lower (upper) level of energy E₁ (E₂) and total angular momentum J₁ (J₂), ħ is the reduced Planck constant, m_e the electron mass, a₀ = 4πε₀ħ²/(m_e e²) the Bohr radius, ε₀ the vacuum permittivity and e the electron charge. In Eq. (25), the ED line strength is in atomic units (units of e²a₀²). Because the oscillator strength for stimulated emission is defined as f₂₁ = −[(2J₁ + 1)/(2J₂ + 1)] f₁₂, the so-called weighted oscillator strength

gf_ED = (2J₁ + 1) f_12,ED = −(2J₂ + 1) f_21,ED   (26)

does not depend on the nature of the transition. For ED free-ion transitions, the line strength of Eq. (25) is the square of the reduced ED matrix element, S_ED = |⟨Ψ₁‖P^(1)‖Ψ₂⟩|². In the rest of the article, we will focus on absorption oscillator strengths, and so will drop the "12" subscripts.

Figure 2 shows the dependence of the logarithm of the weighted oscillator strengths given by Eq. (26) on the energy of the excited-configuration level, for transitions from two levels of the ground configuration. It shows that the energy band with strong transitions is rather narrow and lies in the range 80000-100000 cm⁻¹, while for larger excited-level energies, the values of log(gf) for the level ⁷F₁ (blue dots) decrease faster than those for ⁵D₁ (red dots). Indeed, since the total spin S of the 4f⁵5d levels tends to decrease with energy (see Table IV), the coupling with the levels of the 4f⁶ ⁷F manifold drops faster than the coupling with the levels of the quintet manifolds.
Therefore, in the framework of the JO theory, the excited-configuration energy E_n′ℓ′ appearing in the denominators of the line strengths (see Eqs. (14) and (24)) is not the center-of-gravity energy of the excited configuration, but rather lies in the strong-coupling window between 80000 and 100000 cm⁻¹: in practice, we take E_n′ℓ′ = 90000 cm⁻¹. In addition to the free-ion ED reduced matrix elements, the JO theory requires those for k = 3 (octupole) and k = 5, which depend on the radial transition integral ⟨n′ℓ′|r^k|nℓ⟩ = ∫₀^∞ dr P_n′ℓ′(r) r^k P_nℓ(r), where nℓ = 4f and n′ℓ′ = 5d. We have calculated those integrals with a home-made Octave code reading the HFR radial wave functions P_4f and P_5d computed by Cowan's code RCN. We obtain 1.130629 a₀, −3.221348 a₀³ and 21.727152 a₀⁵ for k = 1, k = 3 and k = 5, respectively, while the k = 1 value calculated by Cowan's code is 1.130618 a₀.

E. f-f transitions in Eu³⁺-doped solids

[Table IV. First 20 energy levels of the 4f⁵5d configuration of Eu³⁺ with total angular momentum J = 0, 1 and 2, along with their dominant eigenvector components.]

Now that we have all the necessary information about the free-ion spectrum, we aim in this subsection to benchmark our model with experimental data. To that end, we have chosen the thorough investigation of Babu et al. [30], who measured absorption oscillator strengths and interpreted them with the standard JO theory. Their study deals with transitions within the ground manifold ⁷F and between the ground manifold and the first excited manifold ⁵D for Eu³⁺-doped lithium fluoroborate glass. In the latter case, the transitions involve a change in spin, well known to challenge the standard JO theory.

Description of our calculations

We have written a FORTRAN program which first reads the energies and the four leading eigenvector components of the ground-configuration free-ion levels (see Table II) and manifolds (see Table III). Then, the code performs a linear least-squares fit of the experimental line strengths S_exp with the ED part of the theoretical ones, S_ED, given by Eqs. (14) and (24), with the free adjustable parameters

X_k = (2k + 1)⁻¹ Σ_q |A_kq|²,   (27)

for k = 1, 3 and 5, which describe the electrostatic environment at the ion position. During the least-squares step, we seek to minimize the standard deviation on line strengths,

σ = [Σ_i (S_exp,i − S_ED,i)² / (N_tr − N_par)]^(1/2),   (28)

where N_tr is the number of transitions included in the calculation and N_par = 3 is the number of adjustable parameters. The experimental line strengths are extracted from the absorption oscillator strengths f_exp by inverting Eq. (25), which gives Eq. (29), where n_r is the host refractive index and χ_ED = (n_r² + 2)²/9 the local-field correction in the virtual-cavity model (see for example Ref. [36]). In contrast with the free-ion case, Eq. (29) takes into account the host material through its refractive index n_r; for lithium fluoroborate, n_r = 1.57 is assumed wavelength-independent. Note that our code can also apply the fitting procedure to Einstein A coefficients, as the latter are transformed into line strengths. After the fitting, using these optimal X_k parameters, we can predict line strengths, oscillator strengths and Einstein A coefficients for other transitions. Of course, that procedure only involves transitions with a predominant ED character; magnetic-dipole (MD) transitions like ⁵D₀ ↔ ⁷F₁ and ⁵D₁ ↔ ⁷F₀ are therefore excluded from the fit. For them, the MD line strength S_MD, oscillator strengths and Einstein coefficients can be calculated from the free-ion eigenvectors (see Table II) [31].
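Since Eqs. (14) and (24) are linear in the three X_k, the fitting step itself reduces to ordinary linear least squares. A minimal sketch of this step in Python (NumPy assumed; the matrix entries and data below are illustrative placeholders, not the paper's numbers):

    # S_exp ~ M @ X: row i of M holds the computed atomic factors of
    # transition i that multiply X_1, X_3, X_5 in Eq. (14) or (24).
    import numpy as np

    M = np.array([[0.8, 2.1, 0.05],      # hypothetical design matrix
                  [0.3, 1.4, 0.02],      # (N_tr transitions x 3 parameters)
                  [1.1, 0.6, 0.09],
                  [0.2, 3.0, 0.04]])
    S_exp = np.array([2.9, 1.7, 1.5, 3.6])   # hypothetical line strengths

    X, *_ = np.linalg.lstsq(M, S_exp, rcond=None)   # best-fit X_1, X_3, X_5
    N_tr, N_par = M.shape
    sigma = np.sqrt(np.sum((S_exp - M @ X) ** 2) / (N_tr - N_par))  # Eq. (28)
    print(X, sigma)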
Results of the least-squares fitting

We have included 9 out of the 14 transitions measured with the so-called L6BE glass in Table 3 of Ref. [30]. We have excluded the three predominantly MD transitions ⁵D₀ ↔ ⁷F₁, ⁵D₁ ↔ ⁷F₀ and ⁵D₂ ↔ ⁷F₁, as well as the ⁵G₄ ↔ ⁷F₀ and ⁵D₀ ↔ ⁷F₀ ones, for which we observe large deviations between theory and experiment. These deviations are probably due to the fact that the ⁵D₀ and ⁵G₄ levels are further from LS coupling than the other levels; in particular, their four leading components represent 88.4 and 86.9 % of the total eigenvectors, respectively [30].

[Table VI. Fitting parameters: the second column gives the standard JO parameters Ω₂,₄,₆; the third and fourth ones give the X_k obtained with Eqs. (14) and (24), respectively.]

Table V shows the results of our least-squares calculations with the second- and third-order theory, in comparison with the standard JO theory used in Ref. [30]. For each transition, the table contains the experimental value of the oscillator strength (×10⁻⁶) [30] and the ratios r_n between the theoretical and experimental oscillator strengths, where r₀ is the ratio for the standard Judd-Ofelt theory, and r₁ and r₂ are the ratios for the second- and third-order corrections of our theory, respectively (see Subsections II A and II B). For each model, we present the absolute standard deviation σ and the relative one, obtained by dividing Eq. (28) by the largest experimental line strength, S_max = 3.057 × 10⁻⁴ for the ⁷F₆ ↔ ⁷F₁ transition. Figure 3 gives a visual insight into the results of Table V, with histograms of the experimental absorption oscillator strengths and of those resulting from the standard JO theory and from our third-order correction, plotted as functions of the transition wavelength.

Globally, the three methods have similar performances. This shows that the SO interaction in the excited configuration has little effect, since it is included in the third-order correction but not in the second-order one. Our third-order correction better describes the transitions between the ⁷F and ⁵D manifolds. However, it predicts the smallest oscillator strength for ⁵G₂ ↔ ⁷F₀, owing to the proximity between the ⁵G(3) and ⁵H(1) manifolds (see Table III), which puts into question the use of the SO interaction as a perturbation. On the other hand, the second-order correction fails to describe the ⁵D₄ ↔ ⁷F₀ transition. The three methods tend to underestimate the oscillator strengths for high-energy transitions, where the refractive index n_r is larger than 1.57.

The final fitting parameters are given in Table VI for the standard JO calculation of Ref. [30] (see Set B of Table 4 therein), as well as for our second-order correction, Eq. (14), and our third-order correction, Eq. (24). The orders of magnitude of the X_k are the same for the two corrections. The X₃ parameter is the largest, the X₁ roughly one order of magnitude smaller than the X₃, and the X₅ roughly two orders of magnitude smaller than the X₃. It is hard to make direct comparisons with the standard JO parameters given in Table 4 of Ref. [30] (data set B); but we see that the Ω₆ parameter, responsible for the ⁷F₆ ↔ ⁷F₀,₁ and ⁵L₆ ↔ ⁷F₀,₁ transitions just like X₅, is respectively 9 and 6 times smaller than Ω₂ and Ω₄. To give more insight into the parameter values, we note that the quantities √(X_k) × ⟨nℓ|r^k|n′ℓ′⟩ give the order of magnitude of the ion-field interaction energy: in the third-order correction, they are respectively equal to 298, 2226 and 1207 cm⁻¹ for k = 1, 3 and 5.
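Dividing those interaction energies by the radial integrals of Subsection III D gives the scale of the CF parameters themselves. A short check using only numbers quoted in the text (plain Python; the k = 3 radial integral is taken in absolute value):

    # sqrt(X_k) in cm^-1 per a_0^k, from the quoted sqrt(X_k) * |<4f|r^k|5d>|
    # products and the HFR radial integrals.
    energies = {1: 298.0, 3: 2226.0, 5: 1207.0}        # cm^-1, third order
    radial = {1: 1.130629, 3: 3.221348, 5: 21.727152}  # |<4f|r^k|5d>|, a_0^k
    for k in energies:
        print(k, round(energies[k] / radial[k], 1))    # -> 263.6, 691.0, 55.6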
Transitions with a MD character

Now that we have the X_k parameters, we can calculate oscillator strengths for transitions not included in the fit. In particular, we can predict the percentages of ED and MD character for the transitions having both characters [37-40], assuming that the total oscillator strength is equal to the sum f_ED + f_MD. The ED part can be calculated by inverting Eq. (29) and replacing the subscripts "exp" by "ED", while the MD part reads [31]

f_MD = [2 m_e a₀² (E₂ − E₁) / (3 (2J₁ + 1) ħ²)] n_r S_MD,   (30)

where the MD line strength, written in units of e²a₀² [28], is

S_MD = (α²/4) |⟨Ψ₁‖L + g_s S‖Ψ₂⟩|²,   (31)

with α the fine-structure constant and g_s the electronic-spin g-factor. Because the orbital L and spin S angular momenta are even-parity tensors of rank one, MD transitions can occur in free space or in solids, between levels of the same configuration and with ∆J ≤ 1, except (J₁, J₂) = (0, 0).

Table VII presents the experimental and theoretical absorption oscillator strengths for the transitions having in principle both an ED and a MD character. The MD oscillator strengths are calculated by multiplying the free-ion ones computed with Cowan's code by the host refractive index n_r, see Eq. (30). The table clearly shows that the ⁵D₁ ↔ ⁷F₁ transition is purely electric (at least 99.9 %), hence its inclusion in the fit. The ⁵D₂ ↔ ⁷F₁ one is also mainly electric, but to a lesser extent, roughly at 95 %. The two others are mostly magnetic, but their experimental and theoretical MD oscillator strengths significantly differ from each other. Still, the ED character looks larger for the ⁵D₀ ↔ ⁷F₁ transition (4-9 %) than for the ⁵D₁ ↔ ⁷F₀ one (1-2 %).

Since the ⁵D₀ ↔ ⁷F₀ transition is forbidden by the selection rules of the standard JO model, it has attracted a lot of attention aimed at understanding its origin (see Ref. [5] and references therein). Even though it is not forbidden in our model, we had to exclude it from the fit, because of a strong discrepancy between the experimental and our computed oscillator strengths: with our optimal parameters X_k, we obtain an oscillator strength of 1.25 × 10⁻⁷, that is, 7.8 times larger than the experimental value. In this paragraph, we investigate in closer detail the possible origin of that discrepancy and how to reduce it. Firstly, as mentioned in Subsection II B, the sums in Eqs. (19) and (24) involve quintet and septet manifolds of the ground configuration. But a closer look at the eigenvectors shows that the ⁵D₀ level contains 6.7 % of ³P(6) character (see Table II), as well as 5.1 % of ³P(3), while ⁷F₀ contains 0.1 % of ³P(6) character. These small components are likely to contribute to the transition amplitude, and so they need to be accounted for in a future work, through a complete description of the free-ion eigenvectors. The selection rules associated with Eq. (19) show that only the k = 1 terms of the CF potential can induce a transition of the kind (J₁, J₂) = (0, 0). This result seems consistent because: (i) those terms are stronger in sites with low symmetries, and (ii) observing the ⁵D₀ ↔ ⁷F₀ transition is an indication of C_nv, C_n or C_s point groups at the ion site [41-43]. Another frequently invoked mechanism to explain the ⁵D₀ ↔ ⁷F₀ transition is J-mixing [8-10], especially between levels of the lowest manifold ⁷F. However, because this mixing is limited to 10 %, it cannot explain the strongest 0-0 transitions listed in Ref. [44].
Charge-transfer states are also likely to play a role in the 0-0 transition, especially in hosts with nearby oxygen charge-compensation sites, by which the CF tends to be strongly deformed [45]. However, those two mechanisms are not present in our model.

Radiative lifetime of the ⁵D₀ level

In addition to absorption oscillator strengths, our model also makes it possible to calculate the ED Einstein coefficient for the spontaneous emission from level 2 to level 1,

A_ED = [e² a₀² (E₂ − E₁)³ / (3π ε₀ ħ⁴ c³ (2J₂ + 1))] n_r χ_ED S_ED,   (32)

where S_ED is given by Eq. (24). We can also compute the MD Einstein coefficients A_MD by multiplying the free-ion value calculated with Cowan's code by n_r³, namely

A_MD = [e² a₀² (E₂ − E₁)³ / (3π ε₀ ħ⁴ c³ (2J₂ + 1))] n_r³ S_MD,   (33)

where S_MD is given by Eq. (31). From them, we can deduce the radiative lifetime τ of a given level. In particular, for the ⁵D₀ level it reads

1/τ(⁵D₀) = Σ_{J=0}^{6} [A_ED(⁵D₀, ⁷F_J) + A_MD(⁵D₀, ⁷F_J)].   (34)

The transitions ⁵D₀ ↔ ⁷F_J, where J = 1 to 6, are not included in our fit, and so are considered as additional transitions, for which our program calculates line strengths and Einstein coefficients. For the transition ⁵D₀ ↔ ⁷F₁, the total Einstein coefficient is the sum of the electric part and of the magnetic part calculated using Cowan's code; the latter is found to be A_MD(⁵D₀, ⁷F₁) = 53.44 s⁻¹. The sum of the Einstein coefficients of all the other transitions, including the electric part of the ⁵D₀ ↔ ⁷F₁ transition, is 500.529 s⁻¹. That sum includes the ⁵D₀ ↔ ⁷F₀ transition, whose Einstein coefficient, Eq. (32), is calculated using the line strength deduced from the experimental oscillator strength following Eq. (29); this yields the very small value of 0.029 s⁻¹. The resulting radiative lifetime is τ(⁵D₀) = 1/(53.44 + 500.529) s ≈ 1805 µs, which is close to the experimental value of 1920 µs reported in Ref. [30]. In principle, the relaxation limiting the lifetime is due to radiative as well as nonradiative processes; however, the latter are expected to be unlikely for the ⁵D₀ level [46], due to the large gap between the ⁵D₀ and ⁷F₆ levels, see Table II.

IV. CONCLUSION

In this article, we have developed an extension of the Judd-Ofelt model enabling the calculation of absorption and emission transition intensities for Ln³⁺-doped solids. In our model, the properties of the Ln³⁺ impurity are fixed parameters calculated with free-ion spectroscopy, while the crystal-field ones are adjusted by least-squares fitting. In particular, the line strengths, oscillator strengths and Einstein coefficients are functions of three least-squares-fitted crystal-field parameters. We have benchmarked our model against a detailed spectroscopic study of europium-doped lithium fluoroborate glass. Not only does our model give a simple physical insight into the transitions which are not described by the standard Judd-Ofelt theory, it also reproduces measured oscillator strengths with an accuracy similar to that of the standard theory [30]. Moreover, we demonstrate that the spin-changing transitions in Eu³⁺ mainly result from the spin-orbit mixing within the ground electronic configuration, even though its levels are well described by the LS-coupling scheme. In consequence, our model may be improved in the future by taking into account all the eigenvector components of the free-ion levels, whereas only the four leading ones are taken into account in the current study. We expect this improvement to give a more precise calculation of the ⁷F₀ ↔ ⁵D₀ intensity. We also plan to account for the wavelength dependence of the refractive index of the host material.
Finally, the fact of separating the dopant and crystal-field parameters opens the possibility of interpreting transitions between individual crystal-field levels or involving polarized light, which is especially relevant for nanometer-scale host materials. In contrast, spectroscopic studies of free Ln³⁺ ions indicate that configuration-interaction mixing, between the configurations 4f^w and 4f^(w-1)6p on the one hand, and 4f^(w-1)5d and 4f^(w-1)6s on the other hand, does not play a strong role in the energy spectrum [24,25], and so need not be included in our model. However, in the case of Er³⁺ [26], the lowest core-excited configuration of opposite parity compared to the ground one, 5p⁵4f¹¹5d, starts at 182000 cm⁻¹. Assuming a similar order of magnitude for the 5p⁵4f⁶5d configuration of Eu³⁺, and taking the relativistic Hartree-Fock value of the radial integral ⟨5p|r|5d⟩ = 1.62 a₀, we can expect the excitation of the 5p core electrons toward the 5d orbital to have a sizeable effect on the crystal-field coupling to opposite-parity configurations.
How do viruses control mitochondria-mediated apoptosis?

Highlights

• Viruses can block extrinsic and intrinsic apoptosis signaling of host cells.
• The type I interferon response is used to either eliminate the virus or induce host apoptosis.
• Viruses can actively engage the apoptotic cell death machinery.
• Virally infected cells are killed by CTL and NK cells.

Abstract

There is no doubt that viruses require cells to successfully reproduce and effectively infect the next host. The question is what is the fate of the infected cells? All eukaryotic cells can "sense" viral infections and exhibit defence strategies to oppose viral replication and spread. This often leads to the elimination of the infected cells by programmed cell death or apoptosis. This "sacrifice" of infected cells represents the most primordial response of multicellular organisms to viruses. Subverting host cell apoptosis, at least for some time, is therefore a crucial strategy of viruses to ensure their replication, the production of essential viral proteins, virus assembly and the spreading to new hosts. For that reason many viruses harbor apoptosis inhibitory genes, which once inside infected cells are expressed to circumvent apoptosis induction during the virus reproduction phase. On the other hand, viruses can take advantage of stimulating apoptosis to (i) facilitate shedding and hence dissemination, (ii) prevent infected cells from presenting viral antigens to the immune system or (iii) kill non-infected bystander and immune cells which would limit viral propagation. Hence the decision whether an infected host cell undergoes apoptosis or not depends on the virus type and pathogenicity, its capacity to oppose antiviral responses of the infected cells and/or to evade any attack from immune cells. Viral genomes have therefore been adapted throughout evolution to satisfy the need of a particular virus to induce or inhibit apoptosis during its life cycle. Here we review the different strategies used by viruses to interfere with the two major apoptosis as well as with the innate immune signaling pathways in mammalian cells. We will focus on the intrinsic mitochondrial pathway and discuss new ideas about how particular viruses could actively engage mitochondria to induce apoptosis of their host.

Extrinsic apoptotic pathway and its inhibition by viral proteins

Apoptosis is a highly conserved, physiological type of cell death, which is required to eliminate damaged, used-up and misplaced cells within a multicellular organism (Kerr et al., 1972; Hotchkiss et al., 2009). In contrast to necrosis or necroptosis, which often lead to pathological, chronic inflammatory immune reactions due to cell lysis, apoptotic cells do not fall apart but are discretely eaten up by macrophages and other non-professional phagocytes in a regulated and non-inflammatory manner (Vanlangenakker et al., 2012). Apoptosis is induced by two distinct, yet tightly interconnected signaling pathways, the extrinsic death receptor pathway and the intrinsic mitochondrial pathway, which both culminate in caspase activation and thereby in the disruption of the cytoskeleton, inhibition of DNA repair, initiation of DNA fragmentation and the exposure of so-called eat-me signals, which mediate the uptake and elimination of apoptotic cells by macrophages. The extrinsic, death receptor pathway is indispensable for the execution and limitation of both innate and adaptive immune responses.
Cells of the innate immune system such as dendritic or natural killer (NK) cells generate a rapid antiviral immune response by directly detecting viral products such as dsRNA via Toll-like receptor 3 (TLR3), by upregulating death ligands such as FasL, TRAIL and TNFα and/or by killing infected cells by granule exocytosis (NK cells) (Medzhitov, 2001). Later during adaptive immunity, virally infected cells are killed by antigen-specific cytotoxic T cells (CTLs) via both the perforin/granzyme and the FasL/Fas pathways (Chavez-Galan et al., 2009). Subsequently, activated T cells are eliminated by activation-induced cell death involving FasL/Fas-mediated killing of the same or an activated neighboring cell after an acute infection (Arakaki et al., 2014). This ensures that highly proliferative, superfluous killer cells are disposed of accordingly, preventing the development of autoimmune or leukemic cells. It is therefore not surprising that viruses have evolved strategies to inhibit the death receptor signaling pathway at several steps, although such cells may still be killed by granule exocytosis (Benedict et al., 2002; Hay and Kannourakis, 2002). As a consequence, infected cells are at least partially protected from cytolysis by CTLs, NK cells or neighboring cells which express FasL or TRAIL upon activation (Ashkenazi and Dixit, 1999), and TNFα may not be able to induce a sufficient inflammatory response. For example, the Shope fibroma virus (a rabbit poxvirus) produces a TNF receptor ortholog (TNFR2), which neutralizes TNFα (Smith et al., 1991) (Fig. 1). Other TNFR orthologs have been identified in the genomes of lepori- and orthopoxviruses, including cowpox (CrmB) (Hu et al., 1994) and smallpox (CrmE) (Reading et al., 2002), and in the genome of CMV (UL144 orf) (Benedict et al., 1999). They can either be expressed in the plasma membrane of infected cells or be shed as soluble decoy receptors (for example the myxoma virus T2 protein; Reading et al., 2002). These TNF signaling inhibitors clearly contribute to the high virulence of poxviruses because they are mutated in vaccinia viruses, an attenuated form of smallpox. In addition, a TRAIL receptor ortholog (TRAILR2) was detected in the genome of avian leukosis virus (Brojatsch et al., 1996). On the other hand, adenoviruses use several proteins encoded in the E3 gene region to promote the internalization and lysosomal degradation of Fas, TNFR, TRAILR1 and R2 (Shisler et al., 1997; Tollefson et al., 1998; Stewart et al., 1995; Benedict et al., 2001). Finally, several viruses produce proteins capable of blocking extrinsic death receptor signaling at the level of the DISC, i.e. by either inhibiting caspase-8 activation or its proteolytic activity. For example, the viral FLIP proteins (vFLIPs) are inactive caspase-8 homologs, which are recruited by FADD to form a DISC that lacks caspase activity (Thome and Tschopp, 2001; Krueger et al., 2001; Subramaniam et al., 2013). In addition, they can recruit TRAF2, RIP, NIK and IKK2, which favor the induction/activation of NF-κB, a crucial anti-apoptotic transcription factor (Kataoka et al., 2000; Subramaniam et al., 2013). vFLIPs are present in the genomes of γ-herpesviruses, including equine herpesvirus 2 (EHV-2), herpesvirus saimiri (HVS), KSHV (Kaposi's sarcoma-associated herpesvirus) and bovine herpesvirus 4 (BHV-4), as well as of molluscum contagiosum virus (MCV) (Bertin et al., 1997; Thome and Tschopp, 2001; Benedict et al., 2002).
The vICA protein produced by the HCMV viral gene UL36 also associates with caspase-8 and blocks its activation, but it has no sequence homology with caspases (Skaletskaya et al., 2001). Last but not least, the CrmA protein from cowpox virus is able to inhibit the proteolytic activity of caspase-8 by binding to and blocking its catalytic center (Zhou et al., 1997).

Intrinsic apoptotic pathway and its inhibition by viral proteins

The intrinsic apoptotic pathway is activated by the lack of soluble survival factors or hormones, of cell-cell or cell-matrix interactions (deprivation-induced cell death, also called anoikis), the exposure of cells to pathogens such as fungi, bacteria or viruses, or the treatment with genotoxic/DNA-damaging stimuli (irradiation, chemotherapeutic drugs), toxins or pro-oxidants, or agents which stress the endoplasmic reticulum, inhibit protein kinases, proteasomal degradation, transcription or translation, or perturb the cytoskeleton (Fig. 2) (Youle and Strasser, 2008; Hotchkiss et al., 2009; Chipuk et al., 2010). The critical step of this pathway is the permeabilization of the mitochondrial outer membrane (MOMP), which results in the release of several apoptogenic factors from the intermembrane space of mitochondria. One such protein, cytochrome c, binds to the adapter Apaf-1, which then recruits cytosolic pro-caspase-9 into a heptameric complex called the apoptosome (Fig. 2). In a similar way as caspase-8 aggregates on the DISC, monomeric caspase-9 dimerizes on the apoptosomal platform, leading to proximity-induced autoprocessing and activation. Active caspase-9 then cleaves and activates the effector caspases-3 and -7. Other proteins released from mitochondria either stimulate yet unknown caspase-independent cell death processes or enhance caspase-9 and -3 activation (Fig. 2). For example, Smac/DIABLO and HtrA2/Omi bind to XIAP, a member of the Inhibitor of Apoptosis Proteins (IAPs). XIAP is an endogenous caspase-9 and -3 inhibitor, which prevents accidental autoprocessing and activation of these caspases in healthy cells (Deveraux and Reed, 1999; Shi, 2002). Upon sequestration by Smac or Omi, XIAP no longer binds to caspase-9 or -3, therefore allowing the full activation of these caspases in response to apoptotic stimuli activating the intrinsic mitochondrial pathway (Fig. 2). Since MOMP results in both caspase-dependent and -independent cell death signaling, it is a crucial life-or-death decision checkpoint ("a point of no return") (Fig. 2). This checkpoint is controlled by the Bcl-2 family of proteins (Youle and Strasser, 2008; Chipuk et al., 2010). The Bcl-2 family is subdivided on the basis of structural conservation of so-called Bcl-2 homology (BH) domains and comes in three flavors. The pro-apoptotic BH3-only proteins (Bim, Bid, Bad, Bik, Bmf, Hrk, Puma, Noxa) only share a ca. 20-30 amino acid long alpha-helical BH3 domain with the rest of the Bcl-2 family. They act as sentinels/sensors of apoptotic stimuli which activate the intrinsic mitochondrial pathway (Happo et al., 2012). Depending on the apoptotic stimulus, particular sets of BH3-only proteins get transcriptionally induced or posttranslationally modified, migrate to and insert into the MOM and then activate at this site a second pro-apoptotic subclass of Bcl-2 family proteins, the so-called "effectors" Bax and Bak (Happo et al., 2012).
For example, Bim is transcriptionally induced by Foxo3a in response to growth factor deprivation, or phosphorylated by JNK during thymic selection and upon exposure to UV or gliotoxin (Happo et al., 2012; Geissler et al., 2013). By contrast, Puma and Noxa are target genes of p53 after genotoxic stress. Bid is proteolytically cleaved to truncated tBid by caspase-8 in response to death receptor ligands such as FasL, TRAIL or TNFα, defining a second, so-called type II death receptor pathway that crosstalks with the mitochondrial pathway (Youle and Strasser, 2008; Chipuk et al., 2010; Happo et al., 2012). Bax and Bak directly induce MOMP. They contain BH1, BH2 and BH3 domains, which form an elongated hydrophobic binding pocket interacting with other members of the Bcl-2 family (Suzuki et al., 2000; Czabotar et al., 2013; Brouwer et al., 2014; Volkmann et al., 2014; Westphal et al., 2014; Borner and Andrews, 2014). In healthy cells, Bax resides inactive in the cytoplasm (Suzuki et al., 2000; Schinzel et al., 2004) and constantly shuttles between the cytoplasm and the periphery of the MOM without stably inserting into the membrane (retrotranslocation) (Wolter et al., 1997; Edlich et al., 2011). Bak, on the other hand, is stably inserted into the MOM but held in check by inhibitory proteins such as VDAC2 and Bcl-2 survival factors (Wang et al., 2001; Cheng et al., 2003; Willis et al., 2005). Three BH3-only proteins, Bim, tBid and Puma, are capable of directly activating Bax/Bak on the MOM (Letai et al., 2002; Cartron et al., 2004; Kuwana et al., 2005; Gavathiotis et al., 2008; Czabotar et al., 2013). Recent structural analysis of the activation process suggests that after their translocation to and insertion into the MOM, Bim and tBid bind to the hydrophobic pocket of Bax/Bak via their BH3 domain (Czabotar et al., 2013; Brouwer et al., 2014). This changes the conformation of Bax and Bak in a way that their BH3 regions become exposed for dimeric interaction with the hydrophobic pocket of another Bax or Bak molecule (Dewson et al., 2009; Czabotar et al., 2013; Brouwer et al., 2014). For that purpose the BH3-only protein has to dissociate from the hydrophobic binding pocket, which explains why the direct interaction between BH3-only proteins and Bax/Bak is only transient and difficult to detect biochemically. Presumably through an additional interaction site in the rear part of the molecule, Bax/Bak dimers can then assemble into multimers (Dewson et al., 2009). It is not yet clear if these multimers form a protein pore or perturb the lipid bilayer of the MOM to rearrange into a lipid pore that may be generated by a hemifusion intermediate (Montessuit et al., 2010; Bleicken et al., 2014; Borner and Andrews, 2014). The newest findings from lipid nanodiscs and fluorescence measurements indicate that Bax/Bak may form pores of different sizes depending on whether they are monomeric, di- or multimeric (Xu et al., 2013; Volkmann et al., 2014). At one point the pores are large enough to allow the passage of cytochrome c or even bigger molecules such as Smac and Omi to the cytoplasm. The third subgroup of the Bcl-2 family is formed by the anti-apoptotic proteins Bcl-2, Bcl-xL, Mcl-1, Bcl-w and A1 (Youle and Strasser, 2008; Chipuk et al., 2010). They also contain BH1, BH2 and BH3 domains, and some even have an additional BH4 domain at the N-terminus. The 3-dimensional structure of these survival factors looks very similar to that of Bax/Bak (Muchmore et al., 1996; Suzuki et al., 2000).
However, in contrast to Bax/Bak, they are unable to undergo the conformational changes needed to expose their BH3 domain, multimerize and form pores, except under unphysiological conditions such as low pH (Minn et al., 1997; Schendel et al., 1997). Instead, their hydrophobic binding pocket interacts with BH3-only proteins in a high-affinity manner. On the one hand this inhibits the pro-apoptotic action of BH3-only proteins. On the other hand, prebound Bim, tBid and Puma can be released from the Bcl-2 survival factors by other BH3-only proteins activated by apoptotic stimuli, such as Bik, Bad, Bmf, Hrk and Noxa (Borner and Andrews, 2014). Bim, tBid and Puma can then directly activate Bax/Bak as described above. This explains the effectiveness of BH3-mimetics in anticancer therapy. These are small-molecule compounds that bind with high affinity to the hydrophobic pocket of Bcl-2 survival factors and thereby release pre-bound Bim, tBid and Puma for MOMP activation (Oltersdorf et al., 2005; Happo et al., 2012). It turns out that many cancer cells do not only upregulate Bcl-2 survival factors but also Bim, tBid and Puma, a mechanism called "addiction" (Certo et al., 2006). This may explain the high sensitivity of cancer cells to killing by BH3-mimetics such as ABT-263, ABT-199 and others (Tse et al., 2008; Wilson et al., 2010; Vandenberg and Cory, 2013). Apart from interacting with BH3-only proteins, Bcl-2 survival factors also sequester accidentally activated Bax and Bak in healthy cells. Since active Bax/Bak expose their BH3 domain, this region can interact with the hydrophobic pocket of Bcl-2 survival factors and hence Bax/Bak are inhibited (Willis et al., 2005, 2007; Uren et al., 2007). This may also constitute one of the mechanisms by which MOM-inserted Bak is kept in check by Bcl-2 survival factors. When large amounts of Bax/Bak are bound by Bcl-2 survival factors, BH3-only proteins can displace appreciable numbers of Bax/Bak molecules from Bcl-2 and therefore make more pore-forming proteins available for the activation of MOMP (Youle and Strasser, 2008; Borner and Andrews, 2014). Since Bcl-2 survival factors are able to sequester both the "activators" (BH3-only proteins) and the "effectors" (Bax/Bak) of MOMP, and MOMP triggers both caspase-dependent and -independent cell death, overexpression of these survival factors is the most efficient way to block apoptosis, even allowing clonogenic survival (which caspase inhibitors are not able to confer) (Borner, 2003). Given the importance of the intrinsic mitochondrial pathway for apoptosis induced by numerous stimuli, viruses have acquired gene products which mimic the action of Bcl-2 survival factors to potently inhibit MOMP in their host cells. Functional homologs of Bcl-2 are collectively called vBcl-2s. They are present in various members of the Poxviridae (F1L, N1L, M11L, A179L, ORFV125), Herpesviridae (BHRF1, BALF1, vMIA, KSBcl-2, MHVBcl-2), Adenoviridae (E1B-19K) and Birnaviridae (VP5) families of viruses (Fig. 2) (Benedict et al., 2002; Galluzzi et al., 2008; Postigo and Ferrer, 2009). While some of them like E1B-19K have extensive sequence homology with cellular Bcl-2 survival factors in all regions (White, 1998), others such as FPV039 from fowlpoxvirus only retain sequence homology within the BH1 and BH2 domains, while ORFV125 of parapoxvirus is homologous in the BH1 and BH3 regions (Galluzzi et al., 2008; Westphal et al., 2007; Banadyga et al., 2007).
Other vBcl-2s such as vaccinia virus F1L and N1L, myxoma virus M11L and vMIA from cytomegalovirus do not even show any sequence homology with mammalian Bcl-2 proteins (Galluzzi et al., 2008). However, the crystal structures of some of these vBcl-2s reveal a close conservation of the Bcl-2 family structural conformation. Thus, it is not the primary amino acid sequence and/or the conserved BH domains per se that determine the function of a Bcl-2 survival factor, but the helical bundle structure that makes up the hydrophobic pocket where BH3-only proteins and Bax/Bak are sequestered. Other viral proteins inhibit the intrinsic, mitochondrial signaling pathway by modulating Bcl-2 family members on the transcriptional level or via post-translational modifications. The tumor suppressor p53 is known to induce the transcription of the BH3-only proteins Noxa and Puma in response to genotoxic stress (Oda et al., 2000; Nakano and Vousden, 2001) and growth factor deprivation (Jabbour et al., 2010), contributing to the elimination of stressed and damaged cells (Fig. 2). Cells lacking functional p53 survive, proliferate and accumulate gene mutations, eventually leading to cancer (Harvey et al., 1993). Also, p53-deficient mice are more prone to certain viral infections, indicating that p53 is not only crucial to prevent cancer but also to induce apoptosis of some infected host cells. Therefore, viruses have evolved strategies to inactivate p53. The SV40 large T antigen binds to p53 and sequesters it in an inactive complex (Lane and Crawford, 1979; Linzer and Levine, 1979). The human papillomavirus (HPV) E6 protein and the adenovirus E1B-55K protein induce the ubiquitination and proteasomal degradation of p53 (Scheffner et al., 1990; Werness et al., 1990; Steegenga et al., 1998; Querido et al., 2001). Also, the X protein of hepatitis B virus (HBx) interacts with p53 and prevents the transcriptional activation of its target genes, thereby inhibiting apoptosis (Wang, 1995) (Fig. 2). A similar strategy is used by the measles virus V protein, which blocks apoptosis by sequestering the p53 homolog p73 (Cruz et al., 2006). A p53-independent mechanism to inhibit MOMP is exploited by the human T cell leukemia virus type I (HTLV-1), which uses the Tax protein to activate the Bcl-xL promoter and to repress the Bax promoter (Tsukahara et al., 1999). Moreover, viruses can effectively block both the intrinsic mitochondrial and the extrinsic death receptor signaling by engaging the transcription factor NF-κB (Sun and Cesarman, 2011). NF-κB induces transcription of the survival factor Bcl-xL, the caspase-8 inhibitor FLIP or inhibitor of apoptosis proteins (IAPs), which often directly or indirectly act as caspase inhibitors (DiDonato et al., 2012). For example, herpes simplex virus-1 (HSV-1), through its envelope glycoprotein D, uses a TNFR family member, herpesvirus entry mediator (HVEM), for host surface binding and infection. Activation of this receptor triggers a signaling cascade that leads to NF-κB activation and apoptosis inhibition (Medici et al., 2003; Sciortino et al., 2008). Similarly, the major B cell-transforming protein of EBV, LMP1, mimics activated CD40 and engages TRADD and TRAFs, which are crucial adaptors used by TNFR to activate NF-κB. Thereby LMP1 prevents B cells from localizing to the follicle and protects cells harboring latent virus from interactions with T cells (Uchida et al., 1999).
Finally, the Nef protein of HIV (Wolf et al., 2001) and the US3 protein kinase of HSV-1 (Munger and Roizman, 2001) were found to mediate phosphorylation of the BH3-only protein Bad, thereby preventing it from inducing apoptosis. Another possibility for viruses to interfere with apoptosis signaling is to block caspases. As pointed out above, cowpox viruses can effectively block the initiator caspase-8 of the extrinsic death receptor pathway because this pathway is strictly caspase-dependent (Zhou et al., 1997). The major initiator and effector caspases of the intrinsic mitochondrial pathway, caspase-9 and caspase-3/-7, are held in check by cellular IAPs, in particular XIAP (Fig. 2) (Deveraux and Reed, 1999; Shi, 2002). However, MOMP also leads to caspase-independent death signaling (Fig. 2). Therefore, blocking caspase-9 and/or -3 may not be sufficient for viruses to save infected host cells from apoptosis. This explains why an IAP-ortholog strategy is infrequently used by mammalian viruses, in contrast to baculoviruses infecting insect cells (Taylor and Barry, 2006). Accordingly, although African swine fever virus and entomopoxviruses such as MsEPV and AmEPV encode viral IAPs, these proteins do not contribute to the virulence of these viruses (Neilan, 1993; Taylor and Barry, 2006) (Fig. 2).

Mechanisms by which the intrinsic, mitochondrial apoptosis pathway is activated after viral infection

Since viruses have developed so many ways to oppose the intrinsic mitochondrial apoptosis pathway, the question arises why and when this pathway is activated in infected cells. Three scenarios are possible: (i) the infected cell is killed by an external apoptotic signal, (ii) the infected cell "senses" virus entry or assembly and mounts an antiviral stress response which eliminates not only the virus but also the infected cell itself, or (iii) the virus actively induces apoptosis of its host by using particular viral components expressed in the infected cells (viral proteases, dsRNA, dsDNA, etc.).

Killing of virally infected cells by CTLs and NK cells

In addition to activating the Fas signaling pathway (described above), CTLs and NK cells also kill virally infected target cells via the perforin/granzyme pathway. After successful activation, these cells release perforin and granzymes A and B from their cytotoxic granules at the immunosynapse, where they contact the virally infected cell via MHC-I/peptide/TCR and costimulatory molecular interactions (Chavez-Galan et al., 2009; Ewen et al., 2012; Thiery and Lieberman, 2014) (Fig. 2). Perforin oligomerizes into variously sized high-molecular-weight structures in the target cell membrane (Metkar et al., 2015). Most likely through membrane lipid perturbation, similar to what is proposed for Bax and Bak on the MOM (Bleicken et al., 2014), perforin forms a pore which allows the entry of the serine proteases granzyme A and B (Metkar et al., 2015) (Fig. 2). While granzyme A provokes a caspase-independent death pathway that is still not entirely characterized and is also responsible for inflammatory responses (Pardo et al., 2009a, 2009b; Joeckel and Bird, 2014), granzyme B induces classical apoptosis in the target cell by two pathways: (i) direct cleavage and activation of caspase-3, and (ii) cleavage of Bid (similar to what caspase-8 does) and the triggering of Bax/Bak-mediated MOMP (Pardo et al., 2009a, 2009b; Ewen et al., 2012; Thiery and Lieberman, 2014). Viruses counteract the granzyme B killing pathway by expressing vBcl-2 survival factors.
Alternatively, they produce inhibitors of granzyme B, such as the L4-100K assembly protein from adenovirus (Andrade et al., 2001), CrmA from cowpox virus (which also inhibits caspase-8) (Komiyama et al., 1996) or Serp2 encoded by the leporipoxvirus myxoma virus (Turner et al., 1999) (Fig. 2). All these inhibitors clearly block granzyme B action in cell-free systems. But while the L4-100K protein also potently inhibited granzyme B-mediated cell death (Andrade et al., 2001), this could not be effectively observed with cells overexpressing CrmA or Serp2 or infected with poxviruses (Müllbacher et al., 1999; Barry et al., 2000; Screpanti et al., 2001; Pardo et al., 2009a, 2009b).

Recognizing/sensing viruses in endosomal and cytosolic compartments and mounting an antiviral innate immune response

In the following section we focus on RNA viruses (Fig. 3), but similar mechanisms have either been reported or are expected to occur for DNA viruses (Goubau et al., 2013; Unterholzner, 2013). Irrespective of whether virus replication and spread are blocked and viruses are finally eliminated by non-apoptotic or apoptotic mechanisms, the infected host cells first have to recognize, i.e. "sense", the respective viruses. This places the molecular mechanisms of sensing at the heart of all antiviral effects, including the initiation of cell death. Virus spread is prevented if host cell apoptosis occurs before a virus can form progeny, or if an infected cell is successfully eliminated by CTLs. If, however, the virus can inhibit cell death by expressing its own survival factors after infection, or if it kills cells only after its reproduction, it can successfully propagate and evade the immune system. The best known antiviral "sensing" mechanism of hosts for RNA viruses triggers the so-called type I interferon response (Takeuchi and Akira, 2009; Ivashkiv and Donlin, 2014). Type I interferons such as IFNα and IFNβ are transcriptionally induced after viral infections and play a critical role in mounting innate responses against viruses (Fig. 3). They are secreted and bind to specific interferon receptors on the same as well as on neighboring cells in order to ensure spreading of the antiviral state to as many cells as possible. Activated interferon receptors then trigger a signaling cascade via the Janus protein kinase-signal transducer and activator of transcription (JAK-STAT) pathway that results in the induction of IFN-stimulated genes (ISGs), which can interfere with the viral life cycle at different steps, induce host cell apoptosis, or both (Fig. 3) (Ivashkiv and Donlin, 2014). Two major components of this IFN-induced signaling system are the RNA-dependent protein kinase (PKR) (Garcia et al., 2007) and the 2′,5′-oligoadenylate/RNase L system (Hovanessian, 2007). Activation of RNase L leads to the degradation of viral RNA, thereby blocking further RNA replication and transcription. On the other hand, PKR activation inhibits host-cell protein translation via the phosphorylation of the eukaryotic translation initiation factor 2α (eIF2α) (Garcia et al., 2007) (Fig. 3). In addition, active PKR was proposed to induce host cell apoptosis via NF-κB and IRF-1 activation, which then upregulate FasL and TRAIL, respectively (Tan and Katze, 1999; Jagus et al., 1999). Although TRAIL is induced by IFNs in a variety of paradigms of viral infection and may indeed contribute to apoptosis of host cells, it is unclear if this is really mediated via the PKR/NF-κB axis (Allen and El-Deiry, 2012).
Also, NF-κB often acts as a cell death-protective rather than apoptosis-inducing transcription factor (DiDonato et al., 2012). Moreover, we recently reported that PKR was not required for apoptosis induced by the RNA virus SFV, although it was clearly activated and blocked protein synthesis via eIF2α phosphorylation (El Maadidi et al., 2014). In order for a host cell to sense a virus, it has to recognize it as non-self. This occurs through two principal mechanisms. First, viruses are detected on the host cell surface by so-called pattern recognition receptors (PRRs), which bind to pathogen-associated molecular patterns (PAMPs) on the virus and other pathogens (Pichlmair and Reis e Sousa, 2007; Kumar et al., 2011). A major class of cell surface transmembrane PRRs are the Toll-like receptors (TLRs), whose PAMP-binding domains contain leucine-rich repeats (LRRs). TLR3, TLR7 and TLR9 detect viral nucleic acid ligands such as long dsRNA, ssRNA or CpG DNA, respectively, mostly in endosomal compartments (subsequent to endocytosis of the virus). After binding, they recruit adaptor proteins such as TRIF (for TLR3) or MyD88 (TLR7/TLR9), which bind specific members of the TRAF adaptor family and activate IRAK family kinases. This ultimately leads to the nuclear translocation of NF-κB and of the IRF3 and IRF7 transcription factors, through phosphorylation and activation of the IκB kinase (IKK) α/β/γ and IKKε/TBK1 complexes, respectively (Pichlmair and Reis e Sousa, 2007; Bowie and Unterholzner, 2008; Kawai and Akira, 2008; Kumar et al., 2011) (Fig. 3). NF-κB and IRF3/7 then cooperate to induce the transcription of IFNα and IFNβ. Recently, it was reported that activation of the TLR3 pathway by the synthetic dsRNA homolog poly(I:C) can induce apoptosis by recruiting caspase-8 to TLR3 (Weber et al., 2010; Estornes et al., 2012). In an earlier report, TRIF, an adaptor for TLR3, was shown to induce apoptosis in a FADD/caspase-8-dependent manner (Kaiser and Offermann, 2005) (Fig. 3). In all cases caspase-8 was activated by proximity-induced dimerization and then cleaved and activated caspase-3 (Fig. 3). This defines a novel, death receptor- and mitochondria-independent death signaling pathway involving caspase-8. Viruses have evolved strategies to inhibit this signaling pathway downstream of TLRs to avoid the transcription of type I interferons. For instance, the NS3-4A protease of hepatitis C virus (HCV) cleaves the adaptor TRIF (Roy and Mocarski, 2007), while the NS5A protein decreases the expression of TLR4 (Tamura et al., 2011). Vaccinia virus uses its A46 protein to inhibit all TLR adaptors (MyD88, MAL, TRIF, TRAM), the A52 protein to block TRAF6 and the protein kinase IRAK2, the B14 protein to antagonize the IKKα/β/γ complex, and the K7 protein to prevent the IKKε/TBK1 complex from activating IRF3/7 (Bowie and Unterholzner, 2008). The latter strategy is also used by HCV NS3, the rabies virus phosphoprotein and hantavirus G1. In addition to the recognition of viruses in the endosomal compartment, cells also contain viral sensors in the cytosol (Pichlmair and Reis e Sousa, 2007; Kawai and Akira, 2008; Takeuchi and Akira, 2009). These are the RIG-I-like helicases (RLHs) RIG-I, MDA5 and LGP-2 (Yoneyama et al., 2004; Kato et al., 2006). RIG-I and MDA5 both contain two caspase recruitment domains (CARDs), an ATPase and a helicase domain (Fig. 3). The helicase domain is used to recognize different kinds of nucleic acids.
While MDA5 mainly senses long viral dsRNA molecules, RIG-I is often activated by short dsRNA fragments and by ssRNAs that carry a 5′ triphosphate (Pichlmair and Reis e Sousa, 2007). This design ensures that cellular RNAs, whose 5′ ends are either capped or carry monophosphates, are not recognized by RLHs. Activated RIG-I and MDA5 then use their CARD domains to interact with the mitochondria-located adaptor protein MAVS (also known as IPS-1, VISA or Cardif) (Kawai et al., 2005; Meylan et al., 2005; Seth et al., 2005; Xu et al., 2005) (Fig. 3). MAVS in turn triggers IKKα/β/γ as well as IKKε/TBK1 phosphorylation, which activates the NF-κB, IRF3 and IRF7 transcription factors for IFNα and IFNβ induction (Takeuchi and Akira, 2009). Similar to TLR3 signaling, sensing through RIG-I/MDA5/MAVS is heavily antagonized by viruses. Influenza NS1 binds the RIG-I/MAVS complex and thereby interferes with its signaling. Moreover, the paramyxovirus V protein and poliovirus are able to inhibit the function of MDA5. Finally, HCV NS3-4A abrogates MAVS signaling by cleaving it from the mitochondrial membrane. Similarly, MAVS is degraded by the mitochondria-targeted hepatitis A virus 3ABC protein (Bowie and Unterholzner, 2008). The latter examples show that MAVS functions on mitochondria and suggest that this organelle either plays a so far unrecognized role in antiviral type I interferon signaling or that MAVS also mediates apoptosis and thereby crosstalks with Bax/Bak-induced MOMP. Indeed, Besch et al. reported that both RIG-I and MDA5 could induce apoptosis of human melanoma cells in a type I interferon-independent manner, i.e. at a step before IFNα/β induction (Besch et al., 2009). The molecular mechanisms of this new death signaling pathway had, however, not been defined at that time. Recently, we published new insights into the mechanisms of host cell apoptosis induced by a single-stranded, positive-sense RNA virus which does not encode any cell death-protective proteins (El Maadidi et al., 2014). Semliki Forest virus (SFV) is a neurotropic virus that kills mice during the first 21 days of their life, probably through the induction of apoptosis of immature neurons (Strauss and Strauss, 1994). It also causes encephalitis in various animals but is mostly apathogenic for humans (Fazakerley, 2002). We and others, however, found that SFV induces effective apoptosis of a variety of human and mouse cells in vitro via the intrinsic mitochondrial pathway, i.e. requiring Bax/Bak and the induction of MOMP (Ubol et al., 1994; Grandgirard et al., 1998; Murphy et al., 2001; Urban et al., 2008). After infection, SFV produces high amounts of dsRNA during the RNA replication cycle, and although it is not entirely clear whether dsRNA is the trigger for SFV-induced apoptosis and how it could activate Bax/Bak (via which BH3-only protein? see below) (Fig. 3), we reported that SFV-induced apoptosis is clearly RNA replication-dependent (Urban et al., 2008). Moreover, Bax/Bak double-knockout cells are not entirely protected from SFV-induced apoptosis but exhibit a Bax/Bak-independent cell death that still requires caspases but does not involve death receptor signaling. We then found that this novel Bax/Bak-independent death signaling pathway requires an MDA5-mediated activation of MAVS on mitochondria (El Maadidi et al., 2014). MAVS recruits caspase-8 from the cytosol to mitochondria, forming a novel death-inducing signaling complex that is capable of processing and activating caspase-3 and inducing apoptosis (Fig. 3).
Our results indicate that not only the TLR3/TRIF axis can induce apoptosis in addition to activating NF-κB, IRF3/7 and an antiviral type I interferon response (Kaiser and Offermann, 2005; Weber et al., 2010; Estornes et al., 2012), but that the same can be achieved by the MDA5/MAVS pathway (El Maadidi et al., 2014). Both Bax/Bak- and death receptor-independent death signaling pathways use the initiator caspase-8 on novel death platforms (TLR3/TRIF/caspase-8 or MAVS/caspase-8) to activate caspase-3/-7 (Fig. 3). Whether this innate immune signaling pathway also accounts for interferon responses and/or apoptosis in response to DNA viruses, which use TLR9 (Hemmi et al., 2000) and other intracellular sensors such as cGAS/STING (Sun et al., 2013) or DAI (Takaoka et al., 2007), remains to be determined. Interestingly, it was recently shown that IPS-1/MAVS may also mediate innate immune signaling in response to DNA viruses (Zhang et al., 2011; Unterholzner, 2013). Thus, this novel mitochondria-associated pathway may serve as a general defence mechanism of host cells against viruses.

Host cell apoptosis induced by ER stress caused by viral overload

Another way cells can sense and react to virus infection is by mounting an ER stress response (Li et al., 2013a, 2013b). This is particularly the case for RNA viruses such as alphaviruses, whose envelope proteins are co-translationally inserted into the ER membrane and travel through the exocytotic system to the cell surface for virus budding (Li and Stollar, 2004; He, 2006; Barry et al., 2010; Jheng et al., 2014). In this case the high amount of ER-localized viral proteins induces a classical unfolded protein response (UPR). While UPR responses are often cell-protective, triggering the transcriptional upregulation of chaperones such as BiP/grp78 which assist in folding the massive amount of proteins, a persistent, prolonged UPR response, for example via the IRE1α/XBP1 pathway, can also lead to apoptosis of the infected host cells (Lin et al., 2007; Sano and Reed, 2013; Lu et al., 2014). In this case, the transcription factor CHOP is activated, which upregulates the BH3-only proteins Bim and Puma, leading to Bax/Bak activation and MOMP via the intrinsic mitochondrial pathway (Reimertz et al., 2003; Puthalakath et al., 2007; Ghosh et al., 2012). Viruses would then counteract host cell death by the expression of Bcl-2 orthologs (vBcl-2s).

Viral components that directly impinge on the host cell apoptotic machinery

Some viruses have developed strategies to actively kill their infected hosts, probably to avoid the presentation of viral antigens to the adaptive immune system. By the time this happens, the viruses have replicated and assembled into new virions, so that host cell apoptosis does not eliminate the virus as well. Both the activation of the extrinsic and of the intrinsic apoptosis pathways have been observed. Targeting the extrinsic pathway, the Nef protein of HIV downregulates CD4 and MHC-I from the cell surface by crosslinking these proteins to the endocytic compartment (Piguet et al., 1999). As a consequence, membrane-bound TNFα and LIGHT are constitutively expressed on the surface of T cells, potentially contributing to the cytotoxic effects of HIV on infected T cells and uninfected bystander lymphocytes (Lama and Ware, 2000).
Moreover, HCMV and reoviruses were shown to actively induce the expression of TRAIL (Sedger et al., 1999; Vidalain et al., 2000), and HSV and HCMV can enhance the expression of FasL (Raftery et al., 1999, 2001). This mechanism is thought to constitute another viral immune evasion tactic, eliminating infiltrating immune cells such as CTLs and DCs by counterattack. To activate the intrinsic mitochondrial pathway, the Vpr protein of HIV-1 induces swelling of mitochondria and MOM permeabilization in lymphoid and transformed cells (Stewart et al., 1999; Jacotot et al., 2000; Muthumani et al., 2002; Deniaud et al., 2004). The severe acute respiratory syndrome coronavirus (SARS-CoV) 7a protein can directly inhibit Bcl-xL and other survival factors (Tan et al., 2007). Although to date viruses have not been found to express BH3-only orthologs, there is increasing evidence that they somehow activate cellular BH3-only proteins to engage the Bax/Bak pore machinery on the MOM. However, the exact molecular mechanism of how this is done has remained enigmatic. The Tat protein of HIV-1 was shown to release Bim from its inhibitory constraints on the cytoskeleton so that it can directly activate Bax/Bak (Puthalakath et al., 1999; Chen et al., 2002). Whether Bim is indeed held in check at the cytoskeleton is, however, much debated, and the mechanism of release has not been investigated. IFNα/β production in response to vesicular stomatitis virus infection has been shown to induce expression of Noxa (Galluzzi et al., 2008), but the transcription factors involved were not identified. Based on the finding that viruses can activate p53 and its homologs such as p73 (Kaelin, 1999), it is possible that Noxa, and maybe also Puma, are transcriptionally induced by these factors in response to certain infections (Cruz et al., 2006; Liu et al., 2014). Alternatively, virus-specific protein kinases could phosphorylate BH3-only proteins, as was shown for the US3 protein kinase of HSV-1 acting on Bad (Munger and Roizman, 2001). We have recently shown that Bim phosphorylation on three sites by JNK1/2 enhances its pro-apoptotic activity toward Bax/Bak activation (Geissler et al., 2013). Moreover, we have obtained evidence that dsRNA produced by SFV does not only trigger a Bax/Bak-independent death signaling pathway involving the formation of a mitochondrial MAVS/caspase-8 complex (see above) (El Maadidi et al., 2014) but also uses a so far unknown BH3-only protein to activate Bax/Bak and MOMP (Papaianni et al., submitted) (Fig. 3). How dsRNA links to such a BH3-only protein is currently under investigation in our lab. Finally, it is possible that virus-encoded proteases cleave BH3-only proteins such as Bid. These proteases are required for the viral life cycle because they process large non-structural and structural polypeptide precursors into mature viral proteins (Strauss and Strauss, 1994). Since they often show degenerate substrate specificity, they could cleave important components of apoptosis signaling and thereby activate them. For example, polioviruses express proteases (2Apro and 3Cpro) which activate caspase-dependent apoptosis (Barco et al., 2000; Goldstaub et al., 2000; Calandria et al., 2004). In addition, the HIV-encoded protease is known to process and activate caspase-8 both in vitro and in T cells, which then leads to Bid cleavage and mitochondria-mediated apoptosis (Nie et al., 2002).
Indeed, Bcl-2 overexpression protects cells from the pro-apoptotic effects of the HIV protease and prevents apoptosis induced by HIV-1 infection of human lymphocytes (Strack et al., 1996).

Concluding remarks and outlook

There is increasing evidence that innate immune signaling via TLRs and intracellular PRRs not only triggers an antiviral interferon response and other mechanisms to limit the replication, assembly and spread of viruses, but also cooperates with the intrinsic Bax/Bak-mediated mitochondrial pathway to induce apoptosis of the infected host cells. As discussed above, some viruses have clearly evolved strategies to counteract both the intrinsic and the innate signaling branch so that they can effectively replicate and reproduce. However, other viruses, such as SFV and other alphaviruses, do not express any cell survival factors and may therefore use these pathways to kill their host cells after their successful reproduction, e.g. to evade detection by the immune system. While we understand quite well how virus-encoded decoy receptors, caspase-8 and other caspase inhibitors, and Bcl-2 orthologs work to inhibit apoptosis of infected cells, we need to further understand how viruses manage to activate BH3-only proteins and/or directly perturb the balance between Bcl-2-like survival factors and pore-forming Bax/Bak proteins on the MOM. Further challenges will be to identify all the components of the TLR3/TRIF/caspase-8 and MAVS/caspase-8 complexes, which seem to constitute novel DISCs enhancing Bax/Bak-mediated MOMP, and to investigate whether other viruses, such as DNA viruses, also use innate signaling strategies to kill their hosts. Irrespective of whether viruses use apoptotic or anti-apoptotic mechanisms to win their competition with the host, a better understanding will continue to provide considerable insight into both viral and cellular biology, from initial virus sensing to viral clearance or persistence. The hope is that this will ultimately lead to the generation of more effective vaccines against viruses.
2018-04-03T06:01:38.452Z
2015-03-01T00:00:00.000
{ "year": 2015, "sha1": "034384006e3925c431e0ded3a13deec6c2ac8f34", "oa_license": null, "oa_url": "https://doi.org/10.1016/j.virusres.2015.02.026", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "450a221c1528cca41fb907e93230576803afb369", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
14129120
pes2o/s2orc
v3-fos-license
QCD radiative and power corrections and Generalized GDH sum rules

We extend the earlier suggested QCD-motivated model for the $Q^2$-dependence of the generalized Gerasimov-Drell-Hearn (GDH) sum rule, which assumes a smooth dependence of the structure function $g_T$, while the sharp dependence is due to the $g_2$ contribution and is described by the elastic part of the Burkhardt-Cottingham sum rule. The model successfully predicts the low crossing point for the proton GDH integral, but is at variance with the recent very accurate JLAB data. We show that, at this level of accuracy, one should include the previously neglected radiative and power QCD corrections as boundary values for the model. We stress that the GDH integral, when measured with the high accuracy achieved by the recent JLAB data, is very sensitive to QCD power corrections. We estimate the value of these power corrections from the JLAB data at $Q^2 \sim 1\,\mathrm{GeV}^2$. The inclusion of all QCD corrections leads to a good description of proton, neutron and deuteron data at all $Q^2$.

The generalized ($Q^2$-dependent) Gerasimov-Drell-Hearn (GDH) sum rules [1,2] are just being tested experimentally for the proton, neutron and deuteron [3,4,5,6]. The characteristic feature of the proton data is the strong dependence on the four-momentum transfer $Q^2$ for $Q^2 < 1\,\mathrm{GeV}^2$, with a zero crossing at $Q^2 \sim 200-250\,\mathrm{MeV}^2$, in complete agreement with our prediction [7,8], published almost 10 years ago. Our approach makes use of the relation to the Burkhardt-Cottingham sum rule for the structure function $g_2$, whose elastic contribution is the main source of the strong $Q^2$-dependence, while the contribution of the other structure function, $g_T = g_1 + g_2$, is smooth. However, the recently published proton JLAB data [6] lie below the prediction, while displaying quite a similar shape. Such a behaviour suggests that the reason for the discrepancy may be the oversimplified treatment of the QCD expressions at the boundary point $Q_0 \sim 1\,\mathrm{GeV}$, defined in the smooth interpolation between large $Q^2$ and $Q^2 = 0$, which serve as an input for our model. For large $Q^2$ we took the asymptotic value for the GDH integral and neglected all the calculable corrections, as well as the contribution of the $g_2$ structure function. This was quite natural 10 years ago, since no data were available at that time. In the present paper we fill this gap and include the radiative (logarithmic) and power QCD corrections. We find that the JLAB data are quite sensitive to power corrections and may be used for the extraction of the relevant phenomenological parameters. We present here the numerical values of these parameters, which naturally depend on the order of QCD perturbation theory used. The resulting theoretical uncertainty should be of the order of the last term of the perturbative series taken into account, and should therefore not exceed a few percent. Moreover, the perturbative series should contain the renormalon ambiguity due to the factorial growth of its coefficients, resulting in a power rather than logarithmic correction with an unspecified coefficient. It is in fact this ambiguity which allows one to interpret the dependence of the extracted power correction on the perturbative order, noted earlier for the case of the $F_3$ structure function [9], as an ambiguity in the separation of logarithmic and power corrections.
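For orientation, the structure being referred to is the standard OPE form of the first moment, combining logarithmic (radiative) and power corrections; the following is a generic sketch with placeholder coefficients, not the paper's fitted expression:

$$
\Gamma_1(Q^2) \;=\; \Gamma_1^{\infty}\Big(1 - \frac{\alpha_s(Q^2)}{\pi} + O(\alpha_s^2)\Big) \;+\; \frac{\mu_4}{Q^2} \;+\; O\!\Big(\frac{1}{Q^4}\Big).
$$

The renormalon ambiguity mentioned above means that the split between the truncated $\alpha_s$ series and the power term $\mu_4/Q^2$ is not unique, so the fitted value of $\mu_4$ depends on the perturbative order retained.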
We use the values of the power corrections as an input for our model at $Q_0^2 \sim 1\,\mathrm{GeV}^2$ and achieve a rather good description of the proton data at lower $Q^2$. We also present an improved description of the neutron and deuteron data and of the behaviour of the Bjorken sum rule at low $Q^2$. The starting point of our approach is the analysis of the general tensor structure of $W^{\mu\nu}_A$, the spin-dependent part of the hadronic tensor $W^{\mu\nu}$. It is a linear combination of all possible Lorentz-covariant tensors which are orthogonal to the virtual photon momentum $q$, as required by gauge invariance, and linear in the nucleon covariant polarization $s$, as follows from a general property of the density matrix. If the nucleon has momentum $p$, we have, as usual, $s \cdot p = 0$ and $s^2 = -1$. There are only two such tensors: the first one arises already in the Born diagram, while the second one is distinguished by the extra factor $(s \cdot q)$. The scalar coefficients of these tensors are specified in a well-known way in terms of the structure functions $g_1$ and $g_2$. Therefore, due to the factor $(s \cdot q)$, $g_2$ makes the difference between longitudinal and transverse polarizations, while $g_T = g_1 + g_2$ contributes equally in both cases. Let us now consider the $Q^2$-dependent integral $I_1(Q^2) = \frac{2M^2}{Q^2}\int_0^1 g_1(x, Q^2)\,dx$. It is defined for all $Q^2$, and $g_1(x, Q^2)$ is the obvious generalization to all $Q^2$ of the standard scale-invariant structure function $g_1(x)$. Note that the elastic contribution at $x = 1$ is not included in the above integral. Then, by changing the integration variable $x \to Q^2/2M\nu$, one recovers at $Q^2 = 0$ the integral over all energies of the spin-dependent photon-nucleon cross-section, whose value is defined by the GDH sum rule [1,2], Eq. (5), where $\mu_A$ is the nucleon anomalous magnetic moment in nuclear magnetons. While $I_1(0)$ is always negative, its value at large $Q^2$ is determined by the $Q^2$-independent integral $\int_0^1 g_1(x)\,dx$, which is positive for the proton and negative for the neutron. The separation of the contributions of $g_T$ and $g_2$ leads to the decomposition $I_1(Q^2) = I_T(Q^2) - I_2(Q^2)$, where $I_T$ and $I_2$ are defined by the analogous integrals of $g_T$ and $g_2$, respectively. There are solid theoretical arguments to expect a strong $Q^2$-dependence of $I_2(Q^2)$: the well-known Burkhardt-Cottingham (BC) sum rule [10]. It states that $I_2(Q^2)$ is given by its elastic contribution, Eq. (8), where $\mu$ is the nucleon magnetic moment and $G_M(Q^2)$ and $G_E(Q^2)$ denote the familiar Sachs form factors, which are dimensionless and normalized to unity at $Q^2 = 0$, $G_M(0) = G_E(0) = 1$. For large $Q^2$, as a consequence of the $Q^2$ behavior of the r.h.s. of Eq. (8), one gets the power-suppressed behaviour of Eq. (9); in particular, from Eq. (9) there follows the relation of Eq. (10), $e$ being the nucleon charge in elementary units. To reproduce the GDH value (see Eq. (5)) one should have the condition of Eq. (11), which was indeed proved by Schwinger [11]. The importance of the $g_2$ contribution can already be seen here, since the entire $\mu_A$-term of the GDH sum rule is provided by $I_2$. Note that $I_T$ does not differ from $I_1$ at large $Q^2$, due to the BC sum rule, but it is positive in the proton case. It is possible to obtain a smooth interpolation for $I_T^p(Q^2)$ between large $Q^2$ and $Q^2 = 0$ [7]. The continuity of the function and of its derivative is guaranteed with a suitable choice of parameters, in which the relevant integral is given by the world-average proton data. This smooth interpolation seems very reasonable in the framework of the QCD sum rules method [8,12], as the low-energy theorem for the quantity linear in $\mu_A$ may, in principle, be obtained by making use of Ward identities.
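The display equations of this passage were lost in the extraction; for orientation, the standard $Q^2 = 0$ values consistent with the statements above (the GDH value, Schwinger's result for the elastic $g_2$ contribution, and their combination), written in the normalization used here with $\mu = e + \mu_A$, are

$$
I_1(0) = -\frac{\mu_A^2}{4}, \qquad I_2(0) = \frac{\mu\,\mu_A}{4}, \qquad I_T(0) = I_1(0) + I_2(0) = \frac{e\,\mu_A}{4},
$$

so that the entire anomalous-moment term of the GDH value at $Q^2 = 0$ is indeed carried by $I_2$, as stated in the text.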
The smooth interpolation is also compatible with resonance approaches [13], as we observed earlier [8] that the magnetic transition to the $\Delta(1232)$, the main origin of the sharp dependence in that approach, contributes only to $g_2$. However, such an interpolation neglects the QCD perturbative and power corrections and, on the other hand, assumes that at the boundary point $Q_0$ the contribution of $g_2$ is already extremely small, so that $I_T(Q_0^2) \approx I_1(Q_0^2)$. Both types of corrections are easily taken into account, although this no longer allows a simple analytic parametrization. The starting point of the upgraded model is the corrected asymptotic expression for $I_1^i$ ($i = p, n$), Eq. (15), where we take into account the one-loop perturbative correction (the inclusion of higher-order ones is discussed later) as well as the twist-4 contribution [14]. Here $c_i$ is the charge factor, equal to $2/9$ for the proton and $1/18$ for the neutron, while the matrix elements of the combinations of reduced twist-3 and twist-4 operators (denoted $\langle\langle O_i \rangle\rangle$ below) happen to be equal [14] for proton and neutron. Note that the kinematical target-mass corrections happen to be numerically small, and we neglect their contribution. In any case, they may be combined with the genuine twist corrections, and the resulting change of the latter is within the experimental and theoretical errors. As the expression for $I_2$ stays unchanged, the expression for $I_T$ above the matching point $Q_0$ changes accordingly. Let us start with the proton case. The smooth interpolation to the GDH value at $Q^2 = 0$ is now more difficult and can no longer be performed with simple analytic formulae. Instead, we expand Eq. (15) in a power series at the point $Q_0$ and define the expression at low $Q^2$ by Eq. (16). Here $N$ is the number of continuous derivatives shared by the two expansions, which turns out to be a free parameter of the model, together with the matching value $Q_0$. They should be chosen in such a way that the condition for real photons, Eq. (17), is satisfied. The procedure implemented in this way may be considered as a matching of the "twist-like" expansion in negative powers of $Q^2$ and the "chiral-like" expansion in positive powers of $Q^2$, similar to the matching of expansions in the direct and inverse coupling constants. In its simplest present version we take only the value $I(0)$ as an input, although the slope and other derivatives calculated within chiral perturbation theory may be added in future work. Since the low-$Q^2$ region exhibits an important contribution of the resonances [13], the suggested procedure may also be considered as a version of quark-hadron duality. It is worth noting here that Bloom-Gilman duality in the spin-dependent case is strongly violated by the contribution of the $\Delta(1232)$ resonance [15]. As this resonance does not contribute to the $g_T$ structure function [8], it is this function which may be a good candidate for duality studies. We have studied Eq. (17) numerically, changing the following inputs: i) the order of the perturbative correction (1, 2, 3 loops) [16]; ii) the degree $N$ of the approximating polynomial in Eq. (16) (interestingly, taking $N = 1$ does not allow a solution of Eq. (17)); iii) the values of the non-perturbative corrections, which we chose so as to be close to the JLAB data at their highest $Q^2 \sim 1\,\mathrm{GeV}^2$.
We observed that increasing the order of the perturbative corrections leads to a systematic decrease of the required non-perturbative one, which is similar to the case of the $F_3$ structure function [9] and may be considered as a manifestation of the ambiguity in separating logarithmic and power corrections; iv) we varied the matching point $Q_0$ until Eq. (17) is satisfied. We found that $Q_0$ is systematically (but not strongly) increasing with $N$. The $Q^2$-dependent integral $\Gamma_1^p(Q^2) = I^p_{1,\mathrm{non-pert}}(Q^2)\,Q^2/2M^2$ resulting from the 3-loop perturbative correction with $N = 3$, $\langle\langle O_p \rangle\rangle = 0.11\,\mathrm{GeV}^2$ and $Q_0^2 = 0.97\,\mathrm{GeV}^2$ is shown in Fig. 1. It is reasonably close to the JLAB data [6]. In what follows, the thick lines correspond to our new approach, and we also present the results of the old approach for comparison. Here we took the asymptotic value $\Gamma_1^p = 0.147$, providing a good description of $\Gamma_1^p(Q^2)$ for $Q^2$ of the order of several GeV$^2$ when the 3-loop radiative correction is included. This procedure may be considered as a sort of preliminary estimate, since the full 3-loop analysis is not available. To generalize our approach to the neutron case, we use the difference between proton and neutron instead of the neutron itself. Although it is possible, in principle, to construct a smooth interpolation for the functions $g_1$ themselves [17], this does not fit the suggested general argument on the linearity in $\mu_A$, since $I_1^{p-n}(0)$ is proportional to $\mu_{A,n}^2 - \mu_{A,p}^2$, which is quadratic and, moreover, has an additional suppression due to the smallness of the isoscalar anomalous magnetic moment. So we suggest for the isovector contribution of $I_T(Q^2)$, namely $I_T^{p-n}(Q^2)$, a parametrization above the matching point in which, again, only the 1-loop term is presented explicitly. Here the transition value $Q_1^2$ may be determined by the continuity conditions in a similar way. We get the value $Q_1^2 \sim 1.04\,\mathrm{GeV}^2$, which is not too far from that of the proton case. Concerning $I_2^n(Q^2)$, which is given by Eq. (8), we have not neglected $G_E^n(Q^2)$ and have used its very recent determination [19]. The result of this calculation is only very slightly modified compared to the case where one assumes $G_E^n(Q^2) = 0$, and all the subsequent results involving the neutron were obtained with a non-zero $G_E^n(Q^2)$. The asymptotic value $\Gamma_1^{p-n} = 0.21$ is dictated by the Bjorken sum rule (at leading order it is simply $g_A/6 \approx 0.21$). We also took the same value $\langle\langle O_n \rangle\rangle = 0.11\,\mathrm{GeV}^2$ as in the proton case. The plot representing $\Gamma_1^{p-n}(Q^2)$ is displayed in Fig. 2 and agrees well with the very recent experimental data [20]. This may be considered as an argument in favour of the general picture of power corrections obtained in QCD sum rules calculations [14], where the neutron correction is small. However, a quantitative comparison with the calculations in the framework of the chiral soliton model [18] would require a more detailed analysis. Now we have all the ingredients to turn to the behavior of the neutron integral, which is simply obtained from the difference $\Gamma_1^n = \Gamma_1^p - \Gamma_1^{p-n}$. It is shown in Fig. 3, and we notice that the strong oscillation around $Q^2 = 1\,\mathrm{GeV}^2$, present in the previous analysis, is no longer there. We now have all the ingredients to investigate the deuteron integral. Note that in this case the generalization of the GDH sum rule may be naturally decomposed into two distinct regions.
The first one is the region of large $Q^2$, where nuclear binding effects can be disregarded, so that the deuteron structure function is the simple additive sum of the proton and neutron ones. As a result, the $Q^2 \to 0$ limit of this intermediate asymptotics is defined by the sum of the squares of the proton and neutron anomalous magnetic moments. When nuclear binding effects are taken into account, one should instead get the square of the sum of these anomalous magnetic moments. As they are known to be rather close in magnitude and of opposite sign, the result should therefore be very small. The difference between these two regimes should be attributed [21] to the deuteron photodisintegration channel, which is supported by existing explicit calculations [22] in the case of real photons. For virtual photons, this allows one to estimate the $Q^2$ value where binding effects start to play a role to be of the order of $m_\pi^2$. The simplest way to implement this reasoning [21] is to use an expression in which a nuclear scale $Q_{d0} \sim m_\pi$ is introduced and the square of the deuteron anomalous magnetic moment is neglected. The prediction is shown in Fig. 4 and seems to be in good agreement with the preliminary JLAB data [23]. Let us finally discuss the role of the elastic contribution. It must definitely be included [24] if one uses the operator product expansion (OPE), which is the essential tool in determining the power corrections. However, we use the OPE only above the matching point, where the elastic contribution is small. At the same time, below the matching point the object whose $Q^2$ behaviour is studied may be considered as a sort of fracture function, where only a partial summation over the final states, excluding the elastic one, is implied. It is this function which may reach the GDH value at $Q^2 = 0$.
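The deuteron expression itself is not reproduced in this extraction. Purely as an illustration, one simple interpolation consistent with the two limits just described (additivity of proton and neutron at large $Q^2$; the square of the summed anomalous moments, here neglected, at $Q^2 = 0$) would be

$$
I_1^d(Q^2) \;\approx\; I_1^p(Q^2) + I_1^n(Q^2) \;-\; \frac{\mu_{A,p}\,\mu_{A,n}}{2}\,\frac{Q_{d0}^2}{Q^2 + Q_{d0}^2}, \qquad Q_{d0} \sim m_\pi,
$$

where at $Q^2 = 0$ the last term converts $-(\mu_{A,p}^2 + \mu_{A,n}^2)/4$ into $-(\mu_{A,p} + \mu_{A,n})^2/4 \approx 0$, while for $Q^2 \gg m_\pi^2$ it switches off; the actual form used in [21] may differ.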
2014-10-01T00:00:00.000Z
2004-10-15T00:00:00.000
{ "year": 2004, "sha1": "e211515282b10fc41def657b1f4eb3e4398f7085", "oa_license": null, "oa_url": "http://arxiv.org/pdf/hep-ph/0410228", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "b983bf69b53235c9865717cf2f13cb4fffe14e96", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
248874450
pes2o/s2orc
v3-fos-license
Sensory guided selection criteria for breeding consumer-preferred sweetpotatoes in Uganda

Prioritizing sensory attributes and consumer evaluation early in breeding trials to screen for end-user preferred traits could improve adoption rates of released genotypes. In this study, a lexicon and protocol for descriptive sensory analysis (DSA) was established for sweetpotato and used to validate an instrumental texture method, for which critical values for consumer preference were set. The study comprised several phases: lexicon development during a 4-day workshop; 3-day intensive panel training; follow-up virtual training; evaluation of 12 advanced genotypes and 101 additional samples from two trials in 2021 by DSA and instrumental texture analysis using TPA double compression; and DSA, instrumental texture analysis and consumer acceptability tests on 7 genotypes in on-farm trials. The established sweetpotato lexicon, comprising 27 sensory attributes, enabled characterization and differentiation of genotypes by sensory profiles. Significant correlations were found between sensory firmness by hand and by mouth and the TPA peak positive force (r = 0.695 and r = 0.648, respectively) and positive area (r = 0.748 and r = 0.715, respectively). D20, NAROSPOT 1, NASPOT 8, and Umbrella were the most liked genotypes in on-farm trials (overall liking = 7). An average peak positive force of 3700 gf was proposed as a minimum texture value for screening sweetpotato genotypes, since it corresponded with at least 46 % of consumers perceiving sweetpotatoes as just-about-right in firmness and a minimum overall liking of 6 on average. Combining DSA with instrumental texture analysis facilitates efficient screening of genotypes in sweetpotato breeding programs.

Introduction

Roots, tubers, and bananas are the main crops for nutrition and food security in many low-income African countries, including Uganda. Cassava was affected by cassava mosaic disease in the 1990s, while bananas succumbed to banana wilt disease at the beginning of the 21st century (Kagezi et al., 2006), which led to the emergence of sweetpotato (Ipomoea batatas L.) as an important food security crop for sustaining the population against imminent hunger. Breeding programs have enhanced the crop by developing varieties with superior agronomic traits, resistance to several environmental stress factors, pests and diseases (Mwanga et al., 2011; Mwanga et al., 2016), and optimized nutritional composition (Gurmu, Hussein & Laing, 2017; Low, 2017). Specifically, several orange-fleshed sweetpotato varieties that are high in carotenoids have been released (Ssemakula et al., 2014). Some varieties are being biofortified with minerals such as zinc and iron. Unfortunately, the full benefit of these efforts is often challenged by poor adoption among the population due to low consumer acceptance. Sensory characteristics of food are critical to consumer food choice and acceptability (Maina, 2018). Consumers of boiled and steamed sweetpotato in Uganda have expressed a preference for varieties that are sweet, dry, and mealy but not fibrous. Nevertheless, the process of breeding sweetpotato proceeds with minimal consideration for sensory attributes until shortly before variety release, when hedonic evaluation is conducted (Ssemakula et al., 2014). Making sensory attributes a main selection criterion earlier in the process could support the breeding of sweetpotatoes with consumer-preferred traits.
To achieve this, steps should be taken to holistically understand the sensory characteristics of sweetpotato using descriptive sensory analysis, and then to identify the genetic factors that influence them. Descriptive sensory analysis gives detailed and reliable information about the qualitative attributes of a product (Joanna et al., 2019), thus providing a basis for understanding acceptability. There are currently no standard protocols in place to guide descriptive sensory analysis for sweetpotato breeding in Uganda. There are several critical aspects to conducting descriptive sensory analysis correctly. Usually, descriptive sensory analysis starts with a group of candidate panelists being recruited and screened for sensory acuity and availability (de Kock and Magano, 2020; ISO, 2005). Afterwards, the selected panel generates a lexicon for the products to be evaluated. The panelists are then trained on using the lexicon, to ensure reliable results. Besides the analytical ability of the panelists, the environment in which samples are evaluated is an important aspect of sensory analysis. The recommendation for classic descriptive sensory analysis is to use sensory booths in a controlled laboratory environment (ISO, 2005). However, such facilities are not always available and are expensive to construct, and they are therefore not an option in case of economic limitations. Furthermore, the recent COVID-19 pandemic restrictions complicated panelists' access to laboratories, requiring innovative alternatives. While the alternative of home-use tests has been explored for consumer sensory analysis of sweetpotato, home-use test protocols for descriptive sensory analysis were lacking. Breeding programs are tasked with screening hundreds of genotypes per season. While sensory panels and consumer acceptability tests are the most accurate measures of human sensory perception and liking (Meilgaard et al., 2007), time and other resources remain a challenge. High-throughput instrumental methods, which can be used to assess a large number of samples in a short time, could facilitate the screening process if these measurements are correlated with human assessment of the sensory attributes linked with end-user preferences. Texture (specifically, perceived firmness) is a key sensory attribute that influences consumer acceptance of steamed or boiled sweetpotato. Earlier studies showed a significant relationship between instrumental and sensory panel assessment of steamed sweetpotato (Truong et al., 1997). More recently, an instrumental method for evaluating the firmness of boiled sweetpotato was applied to determine texture differences among genetic variants (Banda et al., 2021a, 2021b). However, the methods used in the latter study were not specifically related to consumer preferences for texture. The objectives of this study were 1) to develop a lexicon and protocol for evaluation of sweetpotato by a trained descriptive sensory analysis panel and 2) to validate a high-throughput instrumental texture method and establish critical values related to consumer preference, for application in sensory-guided sweetpotato breeding programs.
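As a minimal sketch of the second objective, validating the instrumental method amounts to correlating panel firmness scores with texture analyzer readings across genotypes; the arrays below are invented placeholder data (not study results), and the standard Pearson routine from scipy does the rest:

```python
from scipy.stats import pearsonr

# Placeholder per-genotype means (invented numbers, not study data):
# trained-panel firmness scores and TPA peak positive force (gf).
sensory_firmness = [4.2, 6.8, 5.1, 7.4, 3.9, 6.0]
peak_force_gf = [2900, 4800, 3400, 5200, 2600, 4100]

r, p = pearsonr(sensory_firmness, peak_force_gf)
print(f"Pearson r = {r:.3f} (p = {p:.3g})")
```

In the study itself, the analogous computation over genotype means is what yields the reported correlations between sensory firmness and the TPA parameters.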
Material and methods

Ethical approval for the involvement of human subjects in this study was granted by the Makerere University School of Social Sciences Research Ethics Committee (MAKSSREC 12.19.364), the Centre de Coopération Internationale en Recherche Agronomique pour le Développement (French Agricultural Research Centre for International Development, CIRAD) ethics committee, and retrospectively by the Faculty of Natural and Agricultural Sciences, University of Pretoria (NAS 236/2021).

Sweetpotato samples

Sweetpotato roots of various genotypes were obtained from the ongoing International Potato Center (CIP) breeding trials in various locations in Uganda, while some were obtained from farmers in the same areas. Roots sourced from the open market usually show high within-lot variation and were therefore avoided. Many genotypes were used for lexicon development and sensory panel training in phases 1-3 (Table S.1 in Supplementary Materials), while roots of 12 diverse sweetpotato genotypes of the Development and Delivery of Biofortified Crops at Scale (DDBIO) multi-location advanced field trial planted in 2020, under a collaboration between the National Agricultural Research Organization (NARO) and CIP, were used for the sensory and instrumental assessments in phase 4. Five of these 12 advanced-trial genotypes were assessed by untrained consumers for acceptability as part of a pilot study to develop correlations between consumer liking, descriptive sensory analysis and instrumental texture. This subset of samples was selected based on the number of roots available and the variation in flesh colors. Moderately sized sweetpotato roots, representative of the size distribution of the harvest and with no visible damage, were used in all cases (Porras et al., 2014). In phase 5, sweetpotato roots were obtained from the DDBIO multi-location advanced field trial planted in 2021 in five locations. The trial comprised 15 genotypes, including 10 test clones (1.44, D11, D15, D20, D26, NKB3, NKB105, S36, S47 and S97) and 5 checks (Ejumula, New Kawogo, NASPOT 8, NASPOT 10O, NASPOT 11). Test clones are genotypes being studied for selection and potential release as new varieties, while checks are released or local varieties of known agronomic performance. This multi-location trial included clones of the 12 genotypes harvested in 2020 under the same trial. In total, 61 unique samples were studied. Another set of sweetpotato roots representing 40 clones, 7 of which were among the raw material for making a new generation of genetic improvements, referred to as parents (Beauregard, CEMSA 74-228, Ejumula, NAROSPOT 1, NASPOT 8, Silk Omuyaka and Tanzania), was obtained from the Mwanga Diversity Panel (MDP) population planted in 2021. Descriptive sensory profiles by the trained panel and instrumental texture measures collected from these materials were used to establish relationships between sensory texture and instrumental texture parameters. Consumer sensory analysis was conducted with 106 consumers on 7 genotypes planted as part of the on-farm trials in phase 6. The genotypes included two checks, NASPOT 8 and NAROSPOT 1; three test clones, D20, NKB3 and NKB105; and two local clones, Muwulu Aduduma and Umbrella. On-farm trials are part of the final steps in a breeding cycle and were designed as participatory plant breeding, where farmers are involved in breeding selections.
The trained sensory panel

Twenty-one individuals (11 men and 10 women; researchers and technicians) working at the National Agricultural Research Organization (NARO) in Uganda, who had completed 50 h of prior sensory training with other root, tuber and banana crops, were recruited by the Food Biosciences and Agribusiness Program for the sensory panel in 2019. Participants were informed about the objectives and provided consent during the first training session. Candidates participated in phase 1, a first lexicon development workshop (August 2019), with 10 continuing with the training that followed in phase 2 (October 2019). Nine panelists attended phase 3, a virtual training, and analyzed sweetpotato samples in an office setting (July 2020).

Phases 1 and 2: Lexicon development and initial panel training sessions

A draft lexicon was developed with 21 participants during a 4-day training workshop. The panel tasted 7 genotypes of contrasting sensory characteristics (Table S.1 in Supplementary Materials). A follow-up session, facilitated by the sensory research leader, was held to discuss descriptive terms and to reach consensus among participants (Swegarden et al., 2019). Once the list of descriptive terms was drafted, reference products were identified and presented to the panel. This step helped to reduce the number of attributes in the draft lexicon and to create descriptions for the attributes. Definitions for selected attributes were developed with reference to the literature (Meilgaard, Civille and Carr, 2007; ISO, 2008) and modified as needed. Since the panelists came from various professional backgrounds, an easy-to-understand, simplified version of each definition was created. Intensity scales anchored with verbal expressions were selected for each attribute, and a workflow to guide sensory assessment was developed. The lexicon was further refined and used to train 10 panelists during a 3-day workshop in phase 2. Here, participants evaluated 8 sweetpotato genotypes (Table S.1 in Supplementary Materials), and the results were used to monitor panel performance and facilitate further training. Follow-up discussions at this stage, facilitated by the panel leader, led to finalizing the lexicon. During the workshops, sensory analysis was conducted in a sensory laboratory with individual evaluation booths separate from the preparation area. Panelists evaluated samples and recorded their scores on a paper ballot. They were also provided with a document summarizing the workflow, attributes, definitions, and scales for reference. Drinking water and slices of fresh cucumber were provided as palate cleansers for use before and between samples.

Phase 3: Virtual panel training during the COVID-19 pandemic and sample evaluations in office settings

The panel received additional training, evaluating various sweetpotato genotypes for 6 days in February 2020. Shortly thereafter, work with the sensory panel was challenged by the COVID-19 pandemic. Under these circumstances, home and office settings were identified as alternative sites for conducting the sensory tests. The panel was divided into two groups, each of which participated in one 3-hour virtual training session held via Microsoft Teams. Members who would be evaluating samples from home attended a special training session on how to prepare the samples. Data were collected virtually using Compusense Cloud software (Academic Consortium, Compusense Cloud, Compusense Inc., Guelph, ON, Canada).
Four members of the panel were trained to prepare and evaluate the samples at their individual homes using cookware available in their kitchens. Raw sweetpotato roots were delivered to the panelists' homes in labelled paper bags. However, the samples prepared at home were variably cooked (often undercooked), perhaps because the panelists did not have the steaming pots which the research team uses in the laboratory. As a result, there were many cases where samples could not be evaluated, and data from this group were therefore excluded.

Phases 4, 5 and 6: Exploring and developing relationships between sensory firmness and instrumental texture parameters, and sensory and instrumental texture analysis of on-farm trials

In April and May 2021, 13 participants attended a 7-day training workshop where they were trained on the terms in the lexicon and on how to conduct sensory evaluation of steamed sweetpotato. Among them, 12 trained panelists participated in descriptive sensory profiling of 12 advanced genotypes from the DDBIO advanced field trial planted in 2020. From October to early November 2021, they participated in sensory profiling of 40 genotypes from the MDP trial population, and then, from late November to December, all 13 panelists evaluated 61 genotypes from the DDBIO multi-location advanced field trial planted in 2021. In February 2022, a 4-day training, attended by the trained panelists and 3 new participants, was held, and 12 panelists evaluated the genotypes used in the on-farm trials.

Preparation and presentation of samples for descriptive sensory analysis by the trained panel

Sample preparation evolved throughout the panel training phases. The roots were prepared in a central kitchen. During lexicon development (phase 1), sweetpotatoes were prepared following the commonly used local method, where roots are peeled, wrapped in banana leaves, and steamed over gas for 1 h in a saucepan, as detailed in Mwanga et al. (2021). Several roots of the same genotype, constituting a sample, were wrapped separately and assigned a unique colored identification string. Despite the advantage of cultural compatibility, it was difficult to control variation in the cooked texture of the roots resulting from differences in their shape and size. As a result, the cooking method was revised for the next phases of training and for the advanced trial assessment. From phase 2 onwards, sweetpotato roots were prepared by cutting out a 7 cm portion (Figure S.1 in Supplementary Materials) weighing 160-240 g before peeling. The number of 7 cm portions prepared was equal to the number of respondents. The portions were peeled and placed upright to steam in single-tiered steaming pots (Korkmaz Perla Cous Cous Cookware Set A152, Korkmaz Dis Ticaret Ltd, Turkey) with 2000 ml of water in the bottom pan for 1 h. The steamer layer was lined with a banana leaf on which the portions were placed, covered with another banana leaf and then the lid. At service, the ends of the cooked 7 cm portions were cut off before wrapping each piece in aluminum foil. Wrapped samples were labelled with randomly assigned 3-digit codes and presented monadically to panelists. Samples were cooked in the same order as they were served, at 20-30 min intervals. Once ready, a sample was served as soon as possible, such that panelists tasted the samples at a temperature of 50 to 60 °C. From phase 2 onwards, panelists evaluated samples in several sessions, each consisting of three or four samples.
Selected genotypes were served in duplicate within each sample tasting phase (Table S.1 in Supplementary Materials).

Instrumental texture analysis of sweetpotato

During preliminary experiments, various sample preparation methods, probes and program settings were compared to identify the method which produced the most repeatable and discriminative results. Following the selected method, 3 representative roots were selected from each genotype for instrumental texture analysis. Three pieces of 3 x 3 x 2.5 cm were cut from each root and placed on the steamer layer of a steaming pot (see 2.2.3) matted with a layer of banana leaf. The pieces were steamed for 35 min (from when the pot was placed on the gas fire). Afterwards, the pieces were carefully trimmed on either end to remove a slippery layer resulting from amylose leaching, leaving a 3 x 3 x 2 cm piece. The texture of each piece at room temperature (20-25 °C) was analysed using a TA-XT texture analyzer (Stable Micro Systems, Godalming, UK) with a 10 kg load cell, following a texture profile analysis (TPA) procedure adapted from Truong et al. (1997) and Banda et al. (2021a). In our method, a 60 mm diameter compression plate probe moving at a speed of 100 mm/min compressed the sample (2 cm vertical height) for 5 s (8.35 mm distance), resting for 5 s in between compressions (see Figure S.2 in Supplementary Materials). Truong had previously used the same crosshead speed but a higher (75 %) compression. In our preliminary experiments, a lower compression of 25 % was found most suitable for the wide range of sweetpotato textures in the Ugandan breeding program. The parameters recorded from the resulting curves were the peak positive forces and the positive areas of the two compression curves (Figure S.2 in Supplementary Materials). Based on the experiments conducted when establishing the method, these parameters were reliable and discriminative. During these experiments it was also observed that there was high variation in other instrumental texture parameters, such as negative area. Regarding secondary parameters, adhesiveness was not calculated, since there was high variation in the readings of negative area. Cohesiveness was calculated as the ratio of the area under the second curve to the area under the first curve, and gumminess was calculated as the product of the peak force of the first compression and cohesiveness.

Evaluation of dry matter of raw sweetpotato samples

To evaluate the dry matter of the sweetpotato genotypes, 2 g of thinly sliced raw sweetpotato roots from each genotype were accurately weighed into a moisture dish, in triplicate, and the dry matter was evaluated following the method described by Adesokan, Alamu and Maziya-Dixon (2020).

Consumer evaluation of sweetpotato

2.4.1. Consumer respondents and questionnaire

During phase 4, 23 consumers of sweetpotato were invited to evaluate five (NASPOT 8, NASPOT 10O, NASPOT 11, NKB3 and S47) of the 12 genotypes from the DDBIO advanced field trial planted in 2020 (Table S.1 in Supplementary Materials), as part of a pilot study to design a consumer sensory analysis study. This subset of samples was selected based on the number of damage-free roots available from the harvest and the variation in flesh colors. The participants were recruited by the local leaders and included adult men and women consumers of sweetpotato. The number of participants was limited by the number of roots available for evaluation and the number that the interviewers could manage given the curfew restrictions of the COVID-19 period.
Trained interviewers obtained consent from the participants in English or a local language where necessary. The interviewers then administered a questionnaire and entered responses via Compusense software. All samples were prepared as described in 2.2.4 and presented monadically to respondents, who evaluated each sample once. All personal information that was collected was stored discreetly by the research team. The full questionnaire is presented as Appendix S.1 in Supplementary Materials. Most of the respondents were men (56.5 %) (Table S.2 in Supplementary Materials), and the largest group worked in the informal sector (43.5 %). The age range of the participants was 19 to 48 years, with an average of 31 years. Most of the respondents ate sweetpotato for lunch (78.3 %) several times a week (52.2 %). Following this pilot study, the questionnaire was modified (Appendix S.2 in Supplementary Materials) and used for a consumer study in phase 6, where 106 men and women who were regular consumers of sweetpotato (Table S.3 in Supplementary Materials) were interviewed to identify their attitudes towards 7 sweetpotato genotypes. Due to poor internet connection, data were entered on printed ballots and then transferred to Compusense. Respondents rated overall liking, color liking and aroma liking of the samples on a 9-point hedonic scale ranging from 1 (dislike extremely) to 9 (like extremely). They also rated sweetness, mealiness and firmness on just-about-right scales ranging from 1 to 5, centered at 3 on just-about-right (JAR). These attributes were previously identified as the most important drivers of consumer preference for boiled or steamed sweetpotato in Uganda.

Sample preparation and presentation for consumer sensory analysis

During the pilot study, samples for consumer sensory analysis were prepared according to the method outlined in section 2.2.4. However, to align with the participatory plant breeding design, women identified from the local community prepared the sweetpotato in a culturally appropriate manner. Each woman was assigned a single variety. The women peeled the raw roots and placed them in water immediately to prevent browning, wrapped the roots in banana leaves and placed the bundle in saucepans matted with grass. They covered the wrapped sweetpotato roots with more banana leaves and added water to the saucepans to steam the sweetpotatoes over a three-stone fireplace with burning wood. Roots were deemed ready when the loud bubbling sound of boiling water became quieter, indicating reduced water levels. The sweetpotatoes were removed from the fire, unwrapped slightly and lightly pressed with the fingers to confirm that they were cooked before serving. To overcome the limited number of roots, roots of different sizes were cooked. Ready-to-eat sweetpotato roots were divided depending on size: medium-sized roots were quartered, while large roots were divided into 8 portions. The portions were wrapped in aluminum foil and labelled with random codes. Respondents were presented with the cooked samples monadically, to evaluate in random order.

Data and statistical analysis

2.5.1. Descriptive sensory analysis (Phase 1 to 6)

All sensory data were first organized in Microsoft Excel (Microsoft 365 Apps for enterprise, version 2102). For descriptive sensory analysis, data from genotypes served in duplicate (sensory replicates) were used to evaluate panel reliability in SPSS version 22
(IBM Corp., Armonk, NY, 2013) by Fisher's test with genotypes and panelists as fixed factors (Canul et al., 2011). Data for descriptive sensory analysis at phases 3, 4 and 6 were analyzed in SPSS with analysis of variance (ANOVA) models with genotypes as a fixed factor and panelists as a random factor. To visualize relationships between the sensory attributes and the genotypes, principal component analysis (PCA) using mean scores for each replicate by the trained panel was run in XLSTAT (2020.5.1, Addinsoft, 2021) using the covariance option. Attributes that were not found by ANOVA to differ significantly among the varieties were excluded from the PCA. During phase 3, genotypes were not found to differ (p > 0.05) in caramel aroma, off-odor, floral flavor, bitter taste, and adhesiveness, hence the exclusion of these attributes from the PCA. Attributes excluded from the PCA at phase 4 were: caramel aroma, off-odor, floral flavor, bitter taste, fibrous appearance, cooked carrot flavor, crunchiness, and adhesiveness. Caramel aroma, off-odor, degree of translucency, fibrous appearance, cooked carrot flavor, floral flavor, sweet taste, bitter taste, adhesiveness, and fibrousness were excluded at phase 6.

Instrumental texture analysis (Phase 4 to 6)

With all texture data, means for each texture parameter, specifically average peak positive force and average positive area for the first and second compressions, were calculated. When analyzing data from the descriptive sensory panel and instrumental texture analysis, a panel mean for each sensory attribute evaluated by the panel was calculated per genotype, including those served in duplicate. In phase 4, instrumental texture parameters were correlated with sensory firmness (descriptive sensory analysis) using Pearson correlation analysis. In phase 5, a linear regression model predicting sensory firmness was developed in XLSTAT with the best model procedure, using 59 clones with complete data from the DDBIO multi-location advanced field trial planted in 2021 as the training set and the 12 genotypes of the DDBIO advanced field trial from phase 4 as the validation set. To address multicollinearity due to the strong correlation between instrumental texture parameters, only the peak positive force (firmness) of the first compression was included in the model. Natural logarithms of the values of the predictors (dry matter and peak force) were entered in the model to ensure homoscedasticity of the output residuals, and RMSE values of both the calibration and validation sets were calculated. The model was validated using descriptive sensory data from the panel and instrumental texture analysis of 39 genotypes from another population, MDP, of the same season.

Penalty analysis with consumer acceptability tests (Phase 4 and phase 6)

To conduct penalty analysis, the 5-point just-about-right scale was collapsed into 3 categories by combining the lower scale points (1 and 2) and the upper scale points (4 and 5) (Ortega-Heras et al., 2019). Penalty analysis was conducted separately for each genotype using these categories and the overall liking data.

Establishing proposed minimum and maximum values of instrumental texture parameters using consumer acceptability tests in on-farm trials (Phase 6)

Several measures from the firmness JAR question were plotted against instrumental texture measures to propose minimum and maximum values corresponding to consumer liking.
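A minimal sketch of the penalty analysis just described, in plain Python with hypothetical per-respondent JAR and liking data (the study's actual computations were run per genotype on the consumer data):

def penalty_analysis(jar_scores, liking_scores):
    # Collapse the 5-point JAR scale into 3 categories and compute, for each
    # non-JAR category, its respondent frequency and the mean drop in overall
    # liking relative to respondents who answered 'just-about-right' (3).
    groups = {"too little": [], "JAR": [], "too much": []}
    for jar, liking in zip(jar_scores, liking_scores):
        key = "too little" if jar <= 2 else ("JAR" if jar == 3 else "too much")
        groups[key].append(liking)
    n = len(jar_scores)
    jar_mean = sum(groups["JAR"]) / len(groups["JAR"])
    out = {}
    for key in ("too little", "too much"):
        if groups[key]:
            freq = 100.0 * len(groups[key]) / n
            drop = jar_mean - sum(groups[key]) / len(groups[key])
            out[key] = {"freq_%": round(freq, 1), "mean_drop": round(drop, 2)}
    return out

# Hypothetical firmness JAR (1-5) and overall liking (1-9) for one genotype;
# drops >= 2 with >= 30 % frequency would be flagged as penalties.
jar = [1, 2, 2, 3, 3, 3, 4, 2, 1, 3, 2, 5]
liking = [4, 5, 5, 7, 8, 7, 6, 4, 3, 8, 5, 6]
print(penalty_analysis(jar, liking))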
Linear regression was applied to establish relationships between the instrumental texture measurements of the genotypes and the frequencies of being rated 'too soft', 'too hard', or 'just-about-right' in firmness. The frequencies of these responses were plotted against instrumental texture measures of firmness (peak force 1) and toughness (positive area 1) to produce linear prediction equations in Excel. To select a suitable threshold, overall liking was also plotted against the frequency of responses to the just-about-right question on firmness, to identify the frequency associated with a minimum overall liking of 6, which corresponds to 'like slightly'. These were used as cut-off frequencies to calculate proposed values of the instrumental texture measures. The frequencies of respondents who found the sweetpotato samples 'too soft' or 'just-about-right' in firmness were entered into their respective prediction equations to calculate the values of texture firmness (peak force) and toughness (positive area). These output values were proposed as minimum values for instrumental texture parameters that would indicate sweetpotatoes of suitable firmness for Ugandan consumers.

Initial lexicon draft and changes

The initial lexicon comprised 36 terms describing the aroma, appearance, flavor, and texture of steamed sweetpotato (Table S.4 in Supplementary Materials). The lexicon was reduced to the final 27 terms, of which four each described aroma and appearance, six described flavor, and 13 described texture attributes. Several aroma terms were combined under the general category "off-odor". Crumbliness was introduced in this iteration of the lexicon. Initially all texture terms were assessed in the mouth. However, during lexicon refinement, the panel indicated that attributes such as hardness, cohesiveness, crumbliness/mealiness and moisture release were more easily assessed by hand. Ugandans usually eat sweetpotato by hand.

Reference products

The list of reference products, their preparation methods and associated attributes are shown in Table S.5 in Supplementary Materials. These items were all obtained from local produce markets and supermarkets and as such are readily available. They include farm produce, a wide variety of vegetables and legumes, teas, and common snacks which are consumed in Ugandan households.

Final lexicon: definitions, scale anchors, and methods of assessment

The list of attributes in the final lexicon with definitions, scale anchors, and method of assessment are shown in Table 1; an excerpt covering three of the hand-assessed texture attributes is reproduced below. The definitions included here are not technical but rather simplified versions with local examples to make it easy for the panelists to understand.

Table 1 (excerpt). Method of assessment | Attribute | Definition | Scale anchors
Take a portion of sample and press between forefinger and thumb to assess moisture release. | Moisture release | Attribute of food products to release moisture when pressure is applied, such as cooked cucumber and French beans | 0 = none to 10 = extremely moist
Attempt to make a ball from the sample to evaluate cohesiveness (moldability). | Cohesiveness (moldability) | Ease with which a ball-like shape can be molded from the sample | 0 = falls apart to 10 = moldable
Rub a portion of sample between fingers to evaluate mealiness. | Crumbliness (mealiness) | Ease with which the sample breaks into small particles upon rubbing | 0 = not mealy to 10 = extremely mealy
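Returning to the threshold-derivation procedure described in the data analysis section, the following Python sketch reproduces the same steps with NumPy; the per-genotype frequencies and forces below are hypothetical, not the study's data:

import numpy as np

# Hypothetical per-genotype data: instrumental peak force (gf) and the
# percentage of consumers rating each genotype 'too soft' in firmness
peak_force = np.array([2400.0, 2800.0, 3200.0, 3600.0, 4000.0, 4400.0, 4800.0])
pct_too_soft = np.array([83.0, 71.0, 60.0, 47.0, 38.0, 29.0, 22.0])

# Fit the linear trendline pct = a * force + b (as done in Excel)
a, b = np.polyfit(peak_force, pct_too_soft, 1)

# Cut-off frequency associated with a minimum overall liking of 6
# ('like slightly'); 50 % is used here for illustration
cutoff_pct = 50.0
min_force = (cutoff_pct - b) / a  # invert the trendline at the cut-off
print(f"trendline: pct_too_soft = {a:.4f} * peak_force + {b:.1f}")
print(f"proposed minimum peak force: {min_force:.0f} gf")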
Sensory profiles of sweetpotato genotypes and performance of trained panel after virtual training and sample evaluation outside the laboratory (office setting)

Generally, the results from the trained panel suggested that there was variation in the sensory profiles of the genotypes evaluated. Fig. 1 shows the PCA map drawn with the 2 main components, F1 and F2, explaining 61 % and 26 % of the total variance among samples using 21 attributes. The first component separated sweetpotato genotypes with more moisture and cohesiveness on the right side of the plot (Resisto and MDP 510) from those that were crumblier on the left (Huarmeyano and SPK004). The replicates of NASPOT 11 and MDP 452 are located closely on the map in Fig. 1, indicating good panel consistency. There was no difference between replicates of these genotypes (p > 0.05), except that the mean score for smoothness differed between the replicates of MDP 452 (mean ± standard deviation: 4 ± 2 for replicate 1 vs 6 ± 1 for replicate 2) (Table S.6 in Supplementary Materials).

Sensory profiles of 12 genotypes from the DDBIO advanced field trial planted in 2020

The PCA of the 12 genotypes from the advanced trial is presented in Fig. 2. The plot shows the first three main components explaining 87 % of the variation among the genotypes according to 19 sensory attributes. Most texture attributes loaded on the first component. The first component, F1, separated genotypes by texture attributes, with firm and crumbly genotypes on the left (S36, New Kawogo) and soft and moist genotypes on the right (NKB3, NKB105). S36, an orange-fleshed genotype, was particularly firm, while NKB3 and NKB105 were moist and soft. The second component, F2, separated varieties by orange color intensity and sweetpotato flavor. Deeply orange genotypes such as Ejumula, S36, NKB3 and NKB105 appeared towards the top of the plot, and NASPOT 11 and New Kawogo, which are white- and cream-fleshed, towards the lower side of Fig. 2. NASPOT 10O and D26 did not have significant loadings on the first two components (A) and had higher loadings on the third component (B). Fig. 3 shows a PCA map with only sensory texture attributes. The two main components explain 88 % of the total variation. F1 separates moist genotypes (D15, NKB105, NKB3) on the left from S36 and New Kawogo on the right, which are mealy and firm. F2 separated genotypes by cohesiveness, with highly cohesive genotypes (D26, NASPOT 11, NASPOT 8, and D15) appearing in the top half of the plot and the less cohesive genotypes (S36, NKB3 and NKB105) in the bottom half.

Correlation between sensory firmness and instrumental texture analysis of 12 sweetpotato genotypes from DDBIO advanced field trial planted in 2020

The dry matter and instrumental texture parameters for the 12 genotypes in the advanced trial are shown in Table S.7 in Supplementary Materials. The dry matter ranged from 28 % (NKB3) to 38 % (NASPOT 11). S36 and D26 had the highest peak force (firmness) and positive area (toughness), while NKB3 and NKB105 had the lowest. There was good correlation between sensory firmness and the parameters of instrumental texture, especially the peak positive force (firmness) and positive area (toughness) of the first curve (Table 2). Peak positive force was positively correlated with sensory firmness in mouth (r = 0.695) and sensory hardness by hand (r = 0.648).
The correlation coefficients between positive area and sensory firmness in mouth (r = 0.748) and sensory hardness by hand (r = 0.715) were higher than those with peak positive force, indicating a slightly stronger relationship. There was no correlation between dry matter and sensory firmness in this study.

Table 2. Correlation between sensory hardness, dry matter and various parameters of instrumental texture using 12 genotypes from DDBIO advanced field trial planted in 2020.

Linear regression model describing the relationship between sensory firmness and instrumental texture parameters

A multiple linear regression model (S.1 in Supplementary Materials) was developed to explain variation in sensory firmness in mouth among sweetpotato genotypes by dry matter and the peak positive force of the first compression. Fig. 4 shows a plot of sensory firmness versus predicted firmness (A) for the selected model. The model explained 65 % of the variation in the calibration set and 67 % in the validation set. The RMSE values for the calibration and validation sets were 0.9 and 0.7, respectively. A plot of sensory firmness versus predicted sensory firmness from the model when validated using results from an additional 39 genotypes is shown in Fig. 4, panel B. In this case, the RMSE was 1.0, indicating that the model could predict sensory firmness with an accuracy of plus or minus one unit on the 11-point intensity scale used by the trained panel.

Fig. 4. Plots of (A) sensory firmness versus predicted sensory firmness from the developed multiple linear regression model using material from the DDBIO population and (B) sensory firmness versus predicted sensory firmness from the developed linear regression model using the MDP population.

Sensory profiles of 7 genotypes evaluated in on-farm trials during phase 6

The sensory profiles developed by the trained panel for the 7 genotypes evaluated in on-farm trials are shown on the PCA, which explains 93 % of the total variation among the genotypes (Fig. 5). Orange-fleshed genotypes (D20, NASPOT 8, NKB3 and NKB105) appear on the right of the first component, while white-, cream- and yellow-fleshed genotypes (NAROSPOT 1, Umbrella and Muwulu Aduduma) are on the left. Umbrella and the replicates of NAROSPOT 1 were also firm and crumbly compared to NKB3, which was moist. Consistent with observations from previous phases, sweetpotatoes with pumpkin aromas and flavors appeared on the map separately from those with higher intensities of sweetpotato aroma and flavor.

Overall liking and penalty analysis of 7 genotypes evaluated in on-farm trials

The mean overall liking for the different sweetpotato genotypes ranged from 5 (NKB3) to 7 (D20, NASPOT 8, NAROSPOT 1 and Umbrella) (Table 3). Using the threshold of a 2-unit penalty value (mean drop = 2) and a 30 % minimum just-about-right response frequency, all the genotypes were penalized for not being mealy enough. Only NKB3 and NKB105 were penalized for not being sweet enough. Regarding firmness, only D20, NAROSPOT 1 and Umbrella were considered firm enough.

Proposed minimum levels for instrumental texture parameters

Figs. 6 to 8 show the frequency plots for the responses 'too soft', 'too hard' and 'just-about-right', respectively, in relation to instrumental texture measurements, together with the equations of the trendlines. The percentage of respondents who perceived any of the seven samples to be 'too soft' (Table 3) spanned a wide range (22 % to 83 %) compared to those who perceived any sample to be 'too hard' (0 % to 9 %).
The associations between the instrumentally measured firmness (peak force) and the percentage of consumers who found the sweetpotatoes 'too soft' or 'just-about-right' in firmness were strong (R² = 0.96 and R² = 0.85, respectively). Similarly, there was a strong positive relationship between instrumental firmness and overall liking of the samples (Fig. 9, R² = 0.92), whereas the relationship between instrumentally measured firmness and the percentage of respondents who found any sample 'too hard' was weak (R² = 0.48). Due to the low frequency of samples perceived as 'too hard', this response was not used for establishing maximum levels for instrumental texture firmness. Figure S.4 shows the relationship between overall liking and responses to the just-about-right question on firmness. Using these relationships, the cut-off values for the frequency of respondents perceiving samples to be 'too soft', 'just-about-right' in firmness, and 'too hard' corresponding to a minimum overall liking of 6 were 50 %, 46 % and 4 %, respectively. Using the prediction models, a 50 % response of 'too soft' corresponded to an average peak force of 3435 gf (~3400 gf) and a positive area of 6426 gf⋅s (~6400 gf⋅s). About 46 % of consumers would perceive sweetpotato firmness to be 'just-about-right' when the peak force is 3682 gf (~3700 gf) and the positive area is 6323 gf⋅s (~6300 gf⋅s). Thus, the minimum values of peak force and positive area were set at 3700 gf and 6400 gf⋅s, respectively, for large-scale screening of sweetpotato genotypes for consumer-preferred firmness. Even though the relationship between the proportion of consumers who perceived sweetpotato as being too firm and the instrumental texture parameters was linear, the small range of responses made it difficult to determine the relationship at higher proportions. It was therefore difficult to draw conclusions on the maximum values of the instrumental texture parameters in the current study.

Discussion

In the early stages of lexicon development, panelists used a Luganda term, "kiwutta", to describe the texture, taste, and flavor of some sweetpotatoes. Upon further discussion with the panel, it was identified that "kiwutta" referred to poor quality characterized by translucent appearance, extreme hardness, moisture release, crunchiness, high sweetness and intense floral flavors. Its derivative term, "muwutta", was used by consumers of boiled potato to refer to a glassy texture (Mudege et al., 2021), observed when potato does not go through glass transition upon heat treatment (boiling), thus maintaining the uncooked hard texture associated with its glassy state. Another Luganda hedonic term, "kukumuuka", was associated with sweetpotato that was not translucent and whose texture was dry and powdery, akin to a crumbly/mealy texture, matching the meaning of the term as used by the potato consumers reported by Mudege and colleagues. Two iterations of the lexicon were developed for steamed sweetpotato, with the second refining and reducing the number of descriptive terms. Comprehensive sensory characterization of sweetpotato using many descriptors is ideal but would be challenging for routine use with breeding trials because there are many samples to analyze. Fewer attributes reduce the response burden on the panelist and could thus also contribute to improved panel performance (ISO, 2005).
Since there was no significant variation among sweetpotato genotypes regarding caramel aroma, off-odor, floral flavor, bitter taste, adhesiveness, fibrous appearance, and cooked carrot flavor, the exclusion of these attributes from routine analysis could be considered, especially when the objective is to differentiate between genotypes in advanced stages of breeding such as those used for the current study. It is possible that important negative characteristics such as fibrousness, bitter taste and off-flavor are effectively screened against earlier in the breeding pipeline. However, this may be limited to CIP's sweetpotato breeding strategy, and contextual consideration is needed when deciding to adapt this strategy. Another approach would be to include only one of each pair of attributes that appear closely related according to the PCA (Fig. 1 and Fig. 2), such as hardness by hand or firmness/hardness in mouth, crumbliness by hand or crumbliness in mouth, and fracturability or firmness/hardness in mouth. However, the close association between such sensory attributes in sweetpotato needs to be confirmed by studies which include more genotypes. The lexicon was modified progressively as panelists were trained and exposed to the sensory characteristics of different sweetpotato genotypes, in line with the concept of a 'living lexicon' as shown by Chambers et al. (2016) for coffee. Although the genotypes used for this lexicon may not cover the entire product space of sweetpotato (for example, no purple-fleshed sweetpotatoes were included), the selection was sufficient for our study in Uganda, since genotypes outside this range are currently rare. Therefore, this lexicon can be used as the basis for sensory profiling of white-, cream-, yellow-, and orange-fleshed sweetpotatoes. It can be modified for use with purple sweetpotato and closely related commodities in contexts different from that of the current study. The lexicon shares common terms with others, such as the lexicon for boiled sweetpotato by Leighton et al. (2010), with some slight variations in nomenclature. Texture attributes such as firmness, fibrousness, cohesiveness, and moistness are common to those identified for fried sweetpotato (Sato et al., 2018; Dery et al., 2021) and baked sweetpotato (Leksrisompong et al., 2012), demonstrating a degree of similarity in sweetpotato across agronomic environments and preparation methods. The methods of assessment and references were modified further to suit the evaluation of steamed sweetpotato and to include items familiar to the panelists' backgrounds, such as local food products and vegetables. While assessing the sensory quality of potato, Bough, Holm and Jayanty (2019) observed more variation between genotypes than between preparation methods, and this may be the reason why the lexicons have similar terms. This implies that our lexicon is quite versatile and could be adapted for use in other regions and for other sweetpotato products, which is especially important given the ongoing diversification of the sweetpotato product range in sub-Saharan Africa, including increased promotion of puree and puree-based products. There is potential for this lexicon to be used to evaluate sweetpotato genotypes and determine their suitability for various uses. Moreover, we include reference products in our definitions, which makes them easy to interpret if used with another panel, even in a different location (Suwonsichon, 2019).
This sweetpotato lexicon includes terms that have also been used to describe other roots and tuber crops, such as dessert bananas (Bugaud et al., 2011), owing to the similarity in structure and composition of starch and other carbohydrates of these foods, which determine their texture and taste (Joanna et al., 2019). The sensory panel was able to discriminate samples after virtual training and evaluation of samples outside the laboratory when they were prepared in a centralized cooking area. The different genotypes (Huarmeyano, SPK004, Resisto, MDP 452 and MDP 510) were differentiated based on their overall sensory profiles, and the duplicates of NASPOT 11 and MDP 452 were close together, indicating good panel performance in this setting. Nonetheless, the validity of conducting descriptive sensory analysis outside the laboratory cannot be concluded from the current study, since panel performance was not compared with the conventional method. Future studies seeking to recommend alternative spaces in which to conduct descriptive sensory analysis should compare them with laboratory evaluations. One important way of integrating descriptive sensory analysis with breeding is by developing rapid instrumental methods that can help in screening potential genotypes (Bough et al., 2019). The peak positive force and positive area of the first cycle of the TPA method for evaluating instrumental texture showed a positive linear correlation with sensory firmness in this study. A previously established robust method for evaluating the firmness of boiled sweetpotato using a wedge fracture test correlated sweetpotato texture with optimal cooking time (Banda et al., 2021a). In that study, the compression test did not discriminate between sweetpotato genotypes. In contrast, by applying a modified version of the compression test and sample preparation method, the current study found that the peak positive force and positive area of the first compression were useful for discriminating sweetpotato genotypes by their instrumental firmness. The linear regression model developed to predict sensory firmness from instrumental texture consistently confirmed that there was a good relationship between the two measures of sweetpotato firmness. The model had good precision but only explained 65 % of the total variation among genotypes in sensory texture. Nonetheless, with this prediction, the instrumental texture method developed in this study will still be useful in helping breeders to objectively screen sweetpotato genotypes by sensory firmness earlier in the breeding process.

Table 3. Penalty (mean drop) and corresponding respondent frequencies from 5-point just-about-right (JAR) questions, and mean overall liking rating (9-point hedonic scale) of seven genotypes evaluated by a consumer panel (n = 106) in on-farm trials.

It is possible that there are other instrumental texture or biochemical measures that explain sensory firmness, which could be added to the model to increase the variation explained and improve its predictions. The current method correlated with sensory firmness; future studies should develop alternative methods to predict aspects of sweetpotato texture related to other human perceptions, such as mealiness and fibrousness. Measures that investigate the visco-elastic properties of the sweetpotato, such as stress-relaxation, could be useful.
Instrumental texture analysis such as the method developed in this study is simple and practical for breeding programs, whereas complexity can be a limitation of other texture measures. The established instrumental texture analysis method should be used to complement, but not replace, sensory profiling by trained descriptive sensory panels in breeding programs. Its application would be particularly useful in earlier stages of breeding, when there are many genotypes to screen. It may still be necessary to use descriptive sensory panels to profile genotypes later in the breeding cycle, when there are fewer genotypes and enough roots. According to the consumer evaluation and penalty analysis, D20, NASPOT 8, NAROSPOT 1 and Umbrella were the most preferred varieties and NKB3 was the least preferred. NASPOT 8 has previously been indicated as one of the most preferred varieties based on sensory quality. D20, NKB105 and NKB3 were test clones in the current study, and the results showed that D20, an orange-fleshed variety, was well liked by consumers, comparing well with the leading market varieties and one local variety (Umbrella) while performing better than another local variety (Muwulu Aduduma). In addition to validating an instrumental method for evaluating the firmness of steamed sweetpotato, the study proposes a minimum (3700 gf) average positive peak force of the first cycle for use as a selection criterion when screening sweetpotato genotypes in breeding trials for consumer acceptance. Following this criterion, among the 12 genotypes of the DDBIO advanced field trial planted in 2020, only three test clones (D26, S36 and S47) would qualify to proceed to the next breeding stage, while NKB3 and NKB105 could have been excluded from the on-farm trial that followed. Use of the minimum peak force screening criterion proposed in this study requires validation in a study with genotypes covering a wider range of firmness, especially firmer genotypes.

Limitations of the study

Breeding trials typically produce few roots, which limits the availability of sample material for the various quality analyses conducted to inform the screening of sweetpotato genotypes. Consumer testing of food products typically requires each consumer to be presented with a similar size and shape of the product. Here, roots of varying sizes and shapes were cooked, and the medium and large roots were divided differently in order to serve the large number of consumers. Consumer acceptability tests were also conducted with residents of one community, and the preferred ranges of sweetpotato firmness could vary by region. Nonetheless, the study establishes protocols for efficient breeding based on consumer-preferred texture parameters. Further work is required to develop screening criteria for appearance and flavor characteristics.

Conclusion

This study established a trained descriptive sensory analysis panel for sweetpotato in Uganda. A complete sensory lexicon, preparation and evaluation protocol for sweetpotato evaluation was developed. A method for instrumental texture analysis of steamed sweetpotato using TPA was validated and used to establish a lower critical value of instrumental firmness for screening genotypes for consumer-preferred firmness.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
2022-05-19T15:15:05.782Z
2022-05-01T00:00:00.000
{ "year": 2022, "sha1": "49bd2fe645d826e71cb7a7fe9f92d8d448c5d053", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.foodqual.2022.104628", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "52a9be12dc25ab14f9669bb6166704b0ff1e7c87", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
235278407
pes2o/s2orc
v3-fos-license
Warehouse management and informatization in industrial application under the context of "Internet +"

Manufacturing is the cornerstone of comprehensive national power, and warehousing is an important link in manufacturing production activities and a key node of the logistics process. Under the guidance of the Made in China 2025 strategy, the construction of efficient and rational informatized warehouses has become an important way to transform and upgrade the manufacturing industry. Achieving this goal requires a scientific and rational management model, of which accounting is an integral part. As the "Internet+" era deepens, promoting the level of accounting informatization is of great practical significance. This paper focuses on small and micro manufacturing enterprises, with the goal of enhancing their accounting informatization level, and optimizes the business process design of warehousing, hoping to provide some reference for manufacturing enterprises building intelligent warehousing systems.

Background

Manufacturing is the cornerstone of comprehensive national power, and the strategy of "Made in China 2025" proposes to make China step into the ranks of manufacturing powers through 10 years of effort. With the development of China's logistics industry, the role of warehousing in the logistics system has been increasingly emphasized. Excellent warehousing helps enterprises improve the efficiency of physical flows and achieve effective management and use of resources, and is one of the effective ways to reduce costs and enhance competitiveness. In the context of Industry 4.0 and "Internet+", building efficient and rational informatized warehouses has become the pursuit of many manufacturing enterprises, which has placed higher requirements on enterprise management structures, including the increasingly important process of accounting informatization. Currently, many small and micro enterprises still use traditional manual warehouse management; even where an ERP (Enterprise Resource Planning) suite is installed, it is used only to enter manually recorded data into the system, which is not true informatization. The automated warehouse systems common in today's manufacturing industry have huge hardware costs and complex operation that many small and micro enterprises cannot afford, so there is an urgent need for a solution that achieves the effect of informatized warehousing while remaining affordable for most small and micro manufacturers. This paper studies specific enterprise cases to explore the current state of warehouse management in small and micro manufacturing, puts forward corresponding solution ideas, and combines them with theoretical results to design a new warehousing solution, in an effort to raise the level of warehouse informatization.

Difficulties of accounting informatization in small and micro manufacturing industry

2.1. Poor data and information management skills

Accurate and timely data ensures the full effectiveness of accounting management, but the current information supply capacity of small and micro enterprises is not satisfactory.
In terms of data quality, the information technology of small and micro manufacturing enterprises in China is mainly focused on the administrative level, while the manufacturing process, which is very important to the industry, is not fully covered; data collection methods are backward and even rely on manual entry, which has a strong negative impact on the decision usefulness of accounting information. In terms of the effectiveness of data and information, the manufacturing industry attaches more importance to production and sales, often ignoring the construction of the enterprise's own management system, so many enterprises have unreasonably designed information processes. Information must pass through convoluted or even repeated intermediate links, which greatly reduces the rate of information transfer and increases the risk of error. Figure 1 shows a common warehousing business flow chart. The degree of informatization of warehouse management in small and micro manufacturing enterprises is low: information collection and transmission generally rely on manual work, material identification is not standardised, goods stacking is disorganised, and product management lacks precision, among other issues.

Outdated information systems

For cost-saving reasons, small and micro manufacturing enterprises are not willing to spend large amounts of money on management accounting information software systems. Although some enterprises have purchased dedicated ERP software, it only meets part of the basic accounting functions of bookkeeping, accounting and report preparation, and many small and micro enterprises even remain at the stage of Excel records. The low investment means the enterprise's management accounting information system cannot achieve integrated real-time data processing and analysis. The ideal informatized warehousing system for small and micro manufacturers does not require huge hardware investment; instead, software, combined with a certain amount of manual operation, is used to achieve management effects similar to those of a fully automated three-dimensional warehouse. It needs to cover finished products entering and leaving the warehouse, goods management, material distribution, stocktaking, quality tracing, warehouse returns and other links, replacing the existing rough management with a precise management mode. Warehouse data should be correlated with the ERP system in real time, with intelligent information scanning, data analysis and report generation, freeing up manpower as far as possible and improving data accuracy and timeliness.

The overall strategy of accounting informatization in small and micro manufacturing industry

3.1. Using new technology to improve information production capacity

In the era of big data, enterprises face higher requirements on the accuracy and effectiveness of data and information. Modern data processing methods, represented by cloud computing and data mining, have greatly enhanced the data supply capability of enterprises.
These technical methods, which integrate artificial neural networks, Internet mining, management operations research and other new knowledge, mean enterprises no longer limit their data supply and information production to past and internal accounting information; they can quickly and comprehensively process the massive amount of data from inside and outside the enterprise, and dig deeper into its value in order to make forecasting and decision-making more scientific.

Leveraging "Internet+" to promote process reorganisation

At present, most manufacturing enterprises have not built a complete management accounting information system and rely only on single tools; with the help of Internet cloud computing technology, however, they can enhance system integration and realise platform-based accounting operation. The platform is the integration of the manufacturing accounting information system with other systems of the enterprise and even remote systems outside the enterprise, such as the human resources management system, the manufacturing DNS platform, the customer relationship management system, etc. It can promote the deep integration of management accounting and ERP (enterprise resource planning), and truly achieve advanced accounting objectives such as comprehensive budget management, supply chain cost management and risk control.

3.3. Transform the accounting objectives of manufacturing industry with "core competence" as the centre

Cost advantage is one of the most important winning strategies of China's manufacturing industry in international competition, but with the gradual loss of that advantage in recent years, the way to improve the competitiveness of manufacturing is no longer limited to cost reduction. However, most manufacturing enterprises still regard "cost" as their central objective and only use accounting for basic profitability analysis, operational analysis and cost control, without bringing the management function of accounting into play. Only by transforming "cost reduction" into "innovation and efficiency" can economic growth be promoted to realise the "Made in China 2025" strategy; this requires the manufacturing industry to shift from a "cost"-centric orientation to a "core competence"-centric one. The cultivation of core competencies requires accounting to play a role in strategic cost management, supply chain cost optimization management, budget control, performance evaluation, enterprise competitiveness analysis, etc., so as to explore new competitive advantages in the manufacturing industry and enhance the core competitiveness of enterprises.

Ideas for the optimization of warehouse informatization in small and micro manufacturing

To achieve accounting informatization, an information management system is needed that builds on the automated warehouse and combines effective upfront information collection with later intelligent decision-making, so as to control all aspects of warehousing. Based on the investigation of the current state of small and micro manufacturing warehouse management, the following optimization ideas are proposed.

Establishment of barcode information collection system

RFID technology and barcode identification technology are the two current mainstream information acquisition systems.
RFID technology can identify and read information fully automatically, but its hardware requirements and high cost, among other drawbacks, put it out of reach of small and micro manufacturers; the relatively low-cost barcode identification technology was therefore chosen to build the warehouse information acquisition system. Different enterprises can bind barcodes to materials according to their actual situation. After the barcodes are affixed one by one, the materials can be scanned and recorded; not only is management accuracy refined, but the whole process of material storage can also be controlled.

Warehouse data fully interfaced with ERP

To achieve the change of manufacturing from weak to strong proposed by the "Made in China 2025" strategy, an intelligent warehousing system in which warehouse information is docked with the ERP in real time needs to be created on an Internet platform. Material information is collected through barcodes and entered into the system, and the process is controlled so that the actual quantity and status of materials in the warehouse match the data recorded in the intelligent warehousing system. Dynamically updated data allows general staff and management to grasp the situation in real time, providing the right guidance for production operations and data support for financial stocktaking.

Human-assisted automation systems

Small and micro manufacturing enterprises cannot afford a fully automated warehouse system; the second-best approach is to build a semi-automated system that pairs a manual link with each automated link. For example, even with a barcode information collection system established, staff are still needed to handle and scan goods manually as materials enter and leave the warehouse and are transferred; in this way, the effect of a fully automated three-dimensional warehouse can be approximated as far as possible without installing huge hardware facilities. People, however, are an unstable factor. To achieve the intended goal of intelligent warehousing, a high degree of staff execution is necessary to implement the strategic intent, which requires the development of appropriate staff assessment and reward-and-punishment measures for supervision and incentive, so that human implementation and the automation system are integrated.

System construction to allow room for growth

Considering that small and micro enterprises have substantial room for growth, the above system should be built not only for the current state of the enterprise but also for its future development, leaving enough room to grow. The system needs to be designed and built to be flexible, keeping business parameters easy to change and upgrade rather than too 'rigid', and allowing the business to move in a more automatic and intelligent direction. Figure 2 shows a diagram of the improved warehousing business process.
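As an illustration of the barcode-driven, ERP-linked inventory flow proposed above (a minimal Python sketch with a hypothetical record structure and a placeholder ERP hook, not a real ERP API):

from datetime import datetime

class WarehouseLedger:
    # Minimal barcode-keyed inventory ledger kept in sync with an ERP record.
    def __init__(self):
        self.stock = {}  # barcode -> {"name": str, "qty": int, "updated": str}

    def _touch(self, barcode):
        self.stock[barcode]["updated"] = datetime.now().isoformat(timespec="seconds")
        self._push_to_erp(barcode)

    def scan_in(self, barcode, name, qty):
        rec = self.stock.setdefault(barcode, {"name": name, "qty": 0, "updated": None})
        rec["qty"] += qty
        self._touch(barcode)

    def scan_out(self, barcode, qty):
        rec = self.stock[barcode]
        if rec["qty"] < qty:
            raise ValueError(f"insufficient stock for {barcode}")
        rec["qty"] -= qty
        self._touch(barcode)

    def _push_to_erp(self, barcode):
        # Placeholder for the real-time ERP interface; a real system would
        # call the ERP's API or write to a shared database here.
        print(f"ERP sync: {barcode} -> {self.stock[barcode]}")

ledger = WarehouseLedger()
ledger.scan_in("6901234567892", "bearing 6204", 100)  # hypothetical barcode/item
ledger.scan_out("6901234567892", 30)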
2021-06-03T00:51:20.174Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "ebe575ad1bbbf2fe9ee20bd84b25dfd5024961b4", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/769/4/042073", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "ebe575ad1bbbf2fe9ee20bd84b25dfd5024961b4", "s2fieldsofstudy": [ "Computer Science", "Business", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
237101927
pes2o/s2orc
v3-fos-license
Iliosacral Bone Tumor Resection Using Cannulated Screw-Guided Gigli Saw - A Novel Technique

Background Adequate margins are technically difficult to achieve for malignant tumors involving the sacroiliac joint due to limited accessibility and a narrow viewing window. In order to address the technical difficulties faced in iliosacral tumor resection, we proposed a technique for precise osteotomy, involving the use of cannulated screws and a Gigli saw (CSGS), that facilitated directional control, anteroposterior linkage of resection points and adequate surgical margins. The purpose of the current study was to evaluate whether the CSGS technique facilitated sagittal osteotomy on the sacral side and whether adequate surgical margins were achieved. Functional and oncological outcomes were also determined, along with noteworthy complications. Methods From April 2018 to November 2019, we retrospectively reviewed 15 patients who underwent resections for primary tumors of the pelvis or sacrum necessitating iliosacral joint removal using the proposed CSGS technique. Chondrosarcoma was the most common diagnosis. The osteotomy site within the sacrum was at the ipsilateral ventral sacral foramina in 8 cases, the midline of the sacrum in 5 cases, and the contralateral ventral sacral foramina and the sacral ala in 1 case each. The average intraoperative blood loss was 3640 mL (range, 1200 to 6000 mL) with a mean operative duration of 7.4 hours (range, 5 to 12 hours). The mean follow-up was 23.0 months (range, 18 to 39 months) for living patients. Results Surgical margins were wide in 12 patients (80%), wide-contaminated in 1 patient (6.7%), and marginal in 2 patients (13.3%). R0 resection was achieved in 12 (80%) patients and R1 resection in 3 patients. There were three local recurrences (20%), occurring at a mean time of 11 months postoperatively. No local recurrence was observed at the sacral osteotomy. The overall one-year and three-year survival rates were 86.7% and 72.7%, respectively. Complications occurred in three patients. Conclusions The current study demonstrated that the CSGS technique for tumor resection within the sacrum and pelvis was feasible and can achieve ideal resection accuracy. The use of CSGS was associated with a high likelihood of negative-margin resections in the current series. Intraoperative use of CSGS appeared to be technically straightforward and allowed achievement of the planned surgical margins. It is worthwhile to consider the use of the CSGS technique in the resection of pelvic tumors with sacral invasion and of iliosacral tumors; however, further mid- to long-term follow-up is warranted to observe the local recurrence rate.

Supplementary Information The online version contains supplementary material available at 10.1186/s12957-021-02349-5.

Introduction

Malignant bone tumors surrounding the sacroiliac joint often have poor prognoses due to late diagnosis and significant challenges in surgical management [1-3]. About 25-32% of malignant pelvic tumors have sacral infiltration [1,3,4]. In a bid to decrease rates of local recurrence and achieve wide resection margins, tumors extending to the sacrum were initially managed via hindquarter amputation, at severe cost to patient functionality and quality of life [5]. Encouragingly, limb-preserving procedures, primarily in the form of en-bloc tumor resection and reconstruction, emerged as viable surgical alternatives for pelvic sarcomas with sacral infiltration.
However, en-bloc resection of iliosacral tumors poses significant challenges due to complex local anatomy, difficulty in nerve root preservation, control of intraoperative bleeding and functional reconstruction [6,7]. Resections of pelvic tumors with overly generous margins are generally avoided in limb salvage due to potential anatomic and functional disruptions, thus risking inadequate resection margins and local recurrence [1,2,8]. Inadequate margins occur more often on the sacral side because of the sacral anatomy and surgical exposure [9]. Local recurrence was reported to be 21-47% when iliosacral tumors infiltrated the sacrum [1,10,11]. As such, tumor resections involving the sacroiliac joint require meticulous preoperative planning, effective and precise osteotomy of the affected bone ensuring maximal preservation of neurovascular structures, and holistic postoperative care with regular follow-up. While multiple reports provide insight into the oncological outcomes of malignant tumors spanning the iliosacral joint, there is a paucity of technical articles describing effective methods for precise sacral osteotomy, a key component of successful iliosacral tumor resection. Hitherto, case reports and case series detailing classifications of sacral tumor locations and their respective positions for sacral osteotomy have not sufficiently portrayed its technical difficulties [12,13], which are often related to limited accessibility and observation. In clinical practice, resection of sacral tumors (of varying locations) demands precise osteotomy through areas such as the sacral ala, medial to the sacral foramina, and the sagittal midline of the sacrum [2]. However, techniques involving osteotomes, bone saws, and burrs often prove challenging in the resection of iliosacral tumors via a combined antero-posterior approach, in part due to poor linkage between the two osteotomy sites, thus risking high intraoperative blood loss and inaccurate resection margins even with the adjuvant of intraoperative navigation [14]. In the present study, we propose a technique for precise sacral osteotomy in iliosacral resections performed on 15 patients, with early oncological and functional results. This technique involves the use of cannulated screws and a Gigli saw (CSGS), which provides the benefits of directional control during resection, anteroposterior linkage of resection points, and safe resection margins. The purpose of the current study was to evaluate whether the CSGS technique facilitated sagittal osteotomy on the sacral side and whether adequate surgical margins were achieved. Functional and oncological outcomes were also determined, along with noteworthy complications.

Clinical Series

From April 2018 to November 2019, patients with iliosacral bone tumors were retrospectively reviewed at a single tertiary centre. Criteria for inclusion were: (1) histopathologic diagnosis of a primary malignant bone tumor involving the SIJ, (2) surgical resection necessitating one sagittal osteotomy in the sacrum, performed using the CSGS technique, and (3) follow-up of at least one year for living patients. Additionally, we recorded the locations of the tumor epicentre, AJCC tumor staging, and the anatomical extent of tumor mass invasion. Institutional review board approval and patient consent were obtained prior to initiation of the study. Demographics of the included patients are shown in Table 1. Patients with osteosarcomas and Ewing's sarcomas underwent neoadjuvant and postoperative adjuvant chemotherapy, whereas those with chondrosarcomas underwent surgical resection of the tumors only.
MRI after neoadjuvant chemotherapy was performed to determine the soft-tissue margins.

Surgical Technique

Resections were classified according to the Peking University classification [1] of surgical approaches for pelvic tumors with sacral invasion (Fig. 1): briefly, pelvisacral (Ps) I, II, and III resections refer to sagittal osteotomies through the ipsilateral wing of the sacrum, through the sacral midline, or lateral to the contralateral sacral foramina, respectively. A Ps a resection describes a pelvic osteotomy through the ilium, whereas a Ps b resection describes a concurrent resection of the acetabulum with osteotomies performed through the pubis and ischium or the pubic symphysis. All procedures began with the patient in a prone position (posterior approach). A midline posterior reverse-Y-shaped incision was first performed, followed by L4/5 pedicle screw and contralateral S2-alar-iliac screw fixation. Exposure of the posterior cortex of the sacrum and the sacroiliac joint was achieved via gluteal flap dissection. At most, the posterior third of the ilium can be exposed (sagittally guided by the greater sciatic notch). If the tumor margins do not exceed the aforementioned anatomical landmark, a Gigli saw can be placed at the planned iliac osteotomy site with confirmation of safety margins (Figs. 2 and 3); otherwise, an anterior approach would be needed to expose and dissect structures anterior to the acetabulum, up to the symphysis pubis (Fig. 4).

[Caption of Figs. 2 and 3: Penetration of the cannulated screw through the anterior cortex was confirmed by finger palpation with a pedicle sound (C). A Gigli saw was introduced through the cannulated screw to the planned osteotomy site, marked by the dashed line in E (D and E). The iliac osteotomy was then carried out through the posterior approach (F and G). The tumor was removed after resection of the ipsilateral half of the L5/S1 disk (H). The resected specimen is shown (I and J). Postoperative X-ray showing the osteotomy sites and reconstruction around the tumor.]

Sacral laminectomy was performed for identification and ligation of the affected nerve roots (S1, S2) in all included patients, owing to sagittal invasion of the tumor into the ipsilateral sacral foramina. If the ipsilateral S3 and lower nerve roots can be preserved, piezoelectric osteotomy can be performed from the middle of S3/4 to the lower edge of the sacroiliac joint to mobilize the S3 nerve root. The bottom of the sacral canal at the L5/S1 level can be exposed with a dural retractor. Next, a pedicle probe was used to locate the sacral midline at the L5/S1 level for Ps II resections, with subsequent insertion of a cannulated screw (without penetration of the anterior cortex) under fluoroscopic guidance (Fig. 2C). Penetration of the cannulated screw through the anterior cortex was confirmed by finger palpation with a pedicle sound. According to the preoperative plan, the cannulated screw can instead be placed at the lateral recess for resection medial to the ipsilateral sacral foramina (between Ps I and Ps II resections). Thereafter, a Gigli saw was introduced through the cannulated screw, allowing either a sagittal osteotomy of S1-3 to be performed on the ipsilateral side (thus preserving the ipsilateral S3-5 nerves and the bony structures of S4-5 with the coccyx) or a sagittal osteotomy of S1-5, depending on the extent of tumor invasion, as shown in the Supplemental Digital Content. Then, the iliac osteotomy was carried out as previously reported for total sacrectomy through a single posterior approach only [15].
The ipsilateral half of the L5/S1 disk can then be removed, with the specimen mobilized and distracted, thus achieving a standard SIJ (type IV) resection through the posterior approach only. A modified cannulated screw was also developed for the purpose of directional control of the Gigli saw (Fig. 5). An animation of the CSGS technique for sacral osteotomy is available as supplementary material. For patients with sacral tumors extending anteriorly beyond the posterior third of the iliac bone, combined anterior and posterior approaches were necessary. The cannulated screw was usually placed at the L5/S1 level, and the posterior wound was then closed temporarily prior to commencement of the anterior approach, performed with the patient lying in the lateral position. During the anterior approach, the tumor was exposed via soft tissue dissection, along with dissection of the iliacus and gluteus muscles. Commonly, the iliac vessels and the lumbosacral trunk are dissected and protected. The anterior aspect of the sacral promontory was then exposed. For patients with tumor involvement of the ilium without acetabular involvement (Ps-a), a distal cut was made via supra-acetabular osteotomy. Patients with acetabular involvement (Ps-b) underwent osteotomies at the ischium and pubis, or through the pubic symphysis. At this step, the posterior wound was reopened. The cannulated screw was then slowly advanced through the anterior sacrum under direct observation (Fig. 4-C). Once through, a Gigli saw was introduced through the cannulated screw in preparation for the osteotomy. The osteotomy was performed once confirmation of the osteotomy site and protection of the dura and nerve roots were completed (Fig. 4-D). All resected sections were oriented, landmarked, and then sent for surgical margin evaluation by an experienced histopathologist. Reconstruction was achieved using a titanium mesh cage and pedicle screw-rod system in all patients with an intact acetabulum, whereas patients with acetabular resection underwent fixation comprising interpedicular screw fixation combined with hemipelvis reconstruction, extending up to the L3 vertebra [16].

Postoperative course

At 4 weeks postoperatively, patients were allowed toe-touch weight bearing, with gradually increased weight-bearing from 6 weeks postoperatively onwards. Hip flexion beyond 90° was only allowed from 6 weeks postoperatively for patients with acetabulum reconstruction. Postoperative follow-up of oncological outcomes, complications and function was conducted at regular intervals of 3 months for a minimum of 2 years. Resection margins were categorized by both the Enneking system [17] (wide, wide-contaminated, or marginal) and the TNM R system [18]. Evaluation of functional outcomes was performed using the Musculoskeletal Tumor Society (MSTS) rating scale and the MUD scoring system devised by Huang et al. [19] for comprehensive evaluation.

Statistical Analysis

SPSS v. 19 (IBM, Armonk, New York, USA) was used for analysis. Continuous variables were compared using independent-samples t-tests. Local recurrence-free, disease-free, and overall survival were estimated with Kaplan-Meier survival analysis. A p-value < 0.05 was considered statistically significant.

Oncological Outcomes

Most patients presented with localized disease (n = 10, 66.7%), whereas 5 patients (33.3%) had metastases at the time of diagnosis, three of whom had pulmonary metastases.
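To illustrate the Kaplan-Meier estimation named in the Statistical Analysis section above (a hand-rolled product-limit sketch in Python with hypothetical follow-up data, not the study's SPSS output):

def kaplan_meier(times, events):
    # Product-limit estimator: times in months; events 1 = death, 0 = censored.
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival, curve = 1.0, [(0.0, 1.0)]
    i = 0
    while i < len(data):
        t = data[i][0]
        ties = sum(1 for tt, _ in data[i:] if tt == t)
        deaths = sum(1 for _, e in data[i:i + ties] if e == 1)
        if deaths:
            survival *= 1.0 - deaths / n_at_risk
            curve.append((float(t), survival))
        n_at_risk -= ties
        i += ties
    return curve

# Hypothetical follow-up (months) and event indicators for 15 patients
times  = [16, 18, 19, 20, 21, 22, 23, 24, 25, 26, 28, 30, 33, 36, 39]
events = [1,  0,  1,  0,  0,  1,  0,  0,  0,  1,  0,  0,  0,  0,  0]
for t, s in kaplan_meier(times, events):
    print(f"t = {t:5.1f} mo  S(t) = {s:.3f}")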
Surgical resection margins were wide in 12 patients (80%), wide-contaminated in 1 patient (6.7%), and marginal in 2 patients (13.3%). R0 resection was achieved in 12 patients (80%) and R1 resection in the other three. In patient no. 3, the R1 margin was found at the sacral osteotomy; R0 resection was achieved at the sacral side in all other patients. At a mean follow-up of 23.0 months (range, 18-39), nine patients (60%) were alive with no evidence of disease, one (6.7%) was alive with pulmonary metastasis, one (6.7%) was alive with local recurrence, and four patients (26.6%) had died of disease. Three local recurrences (20%) occurred, at a mean of 11 months after surgery, although R0 resection had been achieved in two of the three patients. All three of these patients had received limb-salvage procedures; no local recurrences were found among the four patients who underwent amputations. No local recurrence was observed at the sacral osteotomy sites. Patient no. 1, diagnosed with chondroblastic osteosarcoma, developed extensive tumor recurrence at 10 months postoperatively, soon after adjuvant chemotherapy, and eventually died of disease at 16 months owing to rapid systemic progression. In patient no. 2, a 1.5 cm soft-tissue local recurrence adjacent to the bladder was found at a routine follow-up 17 months after surgery; he received stereotactic radiotherapy and remained alive without disease at the most recent follow-up. Patient no. 10 was found to have local recurrence at 9 months postoperatively, with tumor invasion of the iliac lymph nodes and pubic bone and extension of tumor thrombus into the iliac vein and inferior vena cava, despite prior removal of tumor thrombus from the iliac vein during the primary surgery. Targeted therapy was started, and the disease remained stable up to the most recent follow-up.

Functional Outcomes

Functional outcomes were collected for all patients at a minimum of 12 months of follow-up, with no patients lost to follow-up.

Complications

A total of 3 patients (20%) had postoperative complications (Table 2). One patient suffered wound dehiscence, which was managed with a combination of intravenous antibiotics, operative washout, and wound debridement with subsequent wound closure. Another patient with wound dehiscence was managed similarly, with an additional rotational flap performed for adequate closure; this patient received postoperative radiotherapy at 60 Gy because of inadequate margins in the periacetabular region. Both patients underwent wound closure only upon confirmation of negative bacterial and wound cultures. One rod breakage of the screw-rod system (SRS) was observed in a patient with SRS-and-mesh reconstruction.

Discussion

Despite advancements in surgical techniques and medical technology, precise surgical resection of iliosacral bone tumors still proves daunting even for experienced orthopaedic surgeons. In our previous experience, sacral osteotomies using osteotomes, burrs, and piezoelectric devices proved challenging because of limited accessibility to the pelvisacral space and poor directional control. This often resulted in unsatisfactory resection margins with excessive damage to neighbouring bone and anatomical structures, with the sequelae of increased local recurrence and greater patient morbidity. To circumvent these recurring issues, a novel technique for osteotomy of sacral tumors was trialled using cannulated screws and Gigli saws for iliosacral tumor resections. (Fig. 6: Kaplan-Meier curves showing overall survival, local recurrence-free survival, and disease-free survival.)
The proposed technique is based mainly on the principle of a man-made bony canal that can accommodate a Gigli saw. Placement of the cannulated screw can be guided by intraoperative navigation or patient-specific instrumentation (PSI). The advantage of the Gigli saw is the completeness of its cut, which is extremely helpful in sacral resection, particularly when sagittal hemisacrectomy is required. Intraoperatively, placement of the cannulated screw through the sacrum provided adequate access to the pelvisacral space, and subsequent resection with the Gigli saw granted directional control as we navigated around the tumor margins. The effectiveness of this technique was assessed via analyses of the surgical margins and the oncological and functional outcomes of 15 patients with varying extents of pelvisacral resection, ranging from Ps IIa (tumor involvement of the ipsilateral sacral foramina with sparing of the acetabulum) to Ps IIIb (tumor involvement lateral to the contralateral sacral foramina with involvement of the acetabulum), including osteotomies between Ps II and Ps III; such resections demand particular accuracy in executing the osteotomy.

In our patient cohort, chondrosarcoma was the predominant pathology (46.7%), a finding consistent with multiple publications [2, 6, 7, 20, 21]. Despite the complexity of iliosacral sarcoma resections, satisfactory surgical margins were achieved, with an 80% negative-margin (R0) rate, which compares favourably with rates reported in the literature for iliosacral resections (Table 3). The authors attribute this to the use of cannulated screws and the Gigli saw, which allows better linkage of the anterior and posterior aspects of the sacrum, easy access to the pelvisacral space, and precise direction of the osteotomy within the affected sacrum. Intraoperative navigation and PSI have gradually been adopted for complex osteotomies in pelvic tumor surgery. Positive-margin rates of 11.1-18.2% have been reported with navigation or PSI, lower than the 25.0-34.4% reported for free-hand techniques [7, 14, 22, 23]. Our results showed a positive-margin rate of 20%, lower than that of series not using intraoperative adjuvant techniques, which can be attributed to the use of the CSGS technique. The positive-margin rate might be decreased further if the proposed CSGS technique were used in conjunction with computer-assisted techniques, thereby improving both identification of the planned osteotomy site and execution of the bony cut.

The overall local recurrence rate in the current study was 20%, within the range reported in the recent literature [1, 14, 21]. One patient (no. 10) with wide-contaminated margins suffered local recurrence at 6 months of follow-up. The recurrent tumor was found at the pubic osteotomy site, concomitant with an iliac tumor thrombus; a tumor thrombectomy had been performed concurrently with resection of the primary tumor. Patient no. 1, with wide resection margins, experienced extensive tumor recurrence at 10 months of follow-up. For this patient, the authors believe the cause of extensive recurrence (despite wide resection margins) to be the aggressive clinical behaviour of chondroblastic osteosarcoma, leading to widespread tumor extension within the entire primary operative field and rapid systemic progression.

Functional outcomes for our patients who underwent limb-salvage procedures are promising despite the significant morbidity associated with iliosacral resections and subsequent reconstruction. In our series, the mean MSTS93 score of 45.1% is comparable with other studies in the literature [1, 6, 21, 24], whereas the mean MUD score is 63%.
Patients who underwent acetabular resection had lower mean MSTS93 and MUD scores than patients with acetabular-sparing procedures. This can be attributed to significant gait abnormality caused by loss of the hip abductor muscles after resection of the ilium. Additionally, patients also suffer partial loss of lower-limb function owing to the sacrifice of sacral nerve roots contributing to the lumbosacral trunk. In terms of bowel and bladder function, sacrifice of the unilateral sacral nerves resulted in loss of sensation on the ipsilateral side, akin to the results of sagittal sacrectomy previously reported [19, 25].
Pseudoscalar or vector meson production in non-leptonic decays of heavy hadrons

We have addressed the study of non-leptonic weak decays of heavy hadrons ($\Lambda_b$, $\Lambda_c$, $B$ and $D$), with external and internal emission giving two final hadrons, taking into account the spin-angular momentum structure of the mesons and baryons produced. A detailed angular momentum formulation is developed which leads to easy final formulas. By means of them we have made predictions for a large number of reactions, up to a global factor common to many of them, which we take from some particular datum. Comparing the theoretical predictions with the experimental data, the agreement found is quite good in general, and the discrepancies should give valuable information on intrinsic form factors, independent of the spin structure studied here. The formulas obtained are also useful for evaluating meson-meson or meson-baryon loops, for instance in $B$ decays, in which one has $PP$, $PV$, $VP$ or $VV$ intermediate states, with $P$ standing for pseudoscalar mesons and $V$ for vector mesons, and they lay the grounds for studies of decays into three final particles.

In external emission, a $q\bar q$ state from the $W$ decay vertex can lead to a pseudoscalar meson ($P$) or a vector meson ($V$), and then the other two final $q$ and $\bar q$ states can again produce a pseudoscalar or a vector meson. We thus have four possibilities, $PP$, $PV$, $VP$ and $VV$, for production. In internal emission, a $q$ state from the first decay vertex and a $\bar q$ from the second decay vertex merge to produce either a pseudoscalar or a vector, and the remaining $q\bar q$ pair can again produce a pseudoscalar or a vector. Once again we have four possibilities, $PP$, $PV$, $VP$ and $VV$, for production. Certainly there are many other decay modes, and most of them originate from these basic structures after hadronization, including an extra $q\bar q$ pair with the quantum numbers of the vacuum. Final state interaction of the pair of emerging mesons can give rise to dynamically generated resonances, and the process provides rich information on the nature of such resonances [19]. The primary production of the $PP$, $PV$, $VP$, $VV$ pairs is thus important for the study of many other processes stemming from hadronization of these primary meson-meson states.

There are other issues where this is important. One of them has to do with the possible violation of universality in $e^+e^-$, $\mu^+\mu^-$ production in $\bar B^0 \to \gamma^* \bar K^{*0}$ decay [24], which is stimulating much work [25]. Loops involving mesons, $\bar B^0 \to D_s^- D^+$, followed by $D_s^- \to \gamma^* D_s^-$, $D^+ D_s^- \to \bar K^{*0}$, have come to be relevant for this issue [26], and one can have them with the primary productions $\bar B^0 \to D_s^- D^+$, $D_s^{*-} D^+$, $D_s^- D^{*+}$, $D_s^{*-} D^{*+}$, with all the loops interfering among themselves. One needs these primary amplitudes, including their relative phases. On the other hand, there is a very large number of decays of this type measured and tabulated in the PDG [27], including $B$, $B_s$, $D$, $D_s$ decays, in both the internal and external emission modes. The problem arises equally in baryon decays, of $\Lambda_b$ or $\Lambda_c$, both in internal and external emission. A correlation of all these data from a theoretical perspective is worthwhile in itself, and this is the purpose of the present work.
One common feature of these approaches is that different structures appearing in different reactions are identified and conveniently parameterized in terms of parameters (most popularly the Wilson coefficients) that are finally obtained from experimental data. In the present approach, we do not evaluate these matrix elements from QCD-motivated models or elaborate quark models. Our aim is different: we identify reactions that have the same quarks in the initial and final states and the same decay topology, and that differ only by the spin rearrangements in the mesons. We then assume the radial matrix elements to be similar in these reactions and carry out the nontrivial Racah algebra on the weak Hamiltonian to describe the reactions and relate them. Another aspect of our approach is that it allows us to establish a relationship with approaches based on heavy quark symmetry [48,49] and improve upon them, in particular in the $B \to VP$ reactions, where strict heavy quark symmetry gives zero for the matrix element. Our approach leads to predictions in fair agreement with experimental data, in particular for final states in the charm sector, which is not so well studied. The approach, however, leads to large discrepancies when one has pions in the final state, which we associate with the failure of the basic assumption of equal radial matrix elements for the same flavour quarks, since the small pion mass leads to large momentum transfers in the reactions, with the corresponding reduction of these matrix elements. An added value of the present work is the prediction of decay rates of $\Lambda_b$ and $\Lambda_c$ baryons into one baryon and a meson, which, unlike $B$ decays, have not been much studied theoretically.

Formalism

We shall apply the formalism to both external and internal emission, and to both baryon and meson decays. We shall concentrate on the decays that are most favoured by the Cabibbo rules. Noting that transitions within the same column of the quark doublets are Cabibbo favored, the $b \to c$ transition is needed for the $\Lambda_b$ decay, and then the $W^-$ couples to $\bar c s \equiv D_s^-$. We shall also discuss $D_s^{*-}$ production, and the formalism is equally valid for $\pi^-$ or $\rho^-$ production. The next step is to realize that in $\Lambda_b^0$ the $ud$ quarks are in isospin $I = 0$ and spin $S = 0$, and they are spectators in the reaction. The final quarks produced are then $c$ and $ud$ ($I = 0$, $S = 0$), which form the $\Lambda_c^+$. Since the $ud$ quarks are spectators, we look at the matrix elements of the weak transition for the diagram of Fig. 3. We shall use the fact that the $D_s^-$ and $D_s^{*-}$ have the same spatial wave functions and differ only by the spin rearrangement, an essential input of heavy quark spin symmetry (HQSS) [50,51]. We also do not attempt to calculate absolute rates, which are sensitive to details of the wave functions and form factors, but only ratios. Given the proximity of the masses of $D_s^-$ and $D_s^{*-}$, we use again arguments of heavy quark symmetry to justify that the spatial matrix elements will be the same for $D_s^-$ or $D_s^{*-}$ production, and that only the spin arrangements make them differ. We advance, however, that this is the only element of heavy quark symmetry that we use. We shall see later that there are terms of the type $p/m_Q$ ($m_Q$ being the mass of the heavy quark) that are relevant in the transitions, and they are kept here, while they would be neglected in calculations making extreme use of heavy quark symmetry.
The weak Hamiltonian is of the type $\gamma^\mu(1-\gamma_5)$ at each of the weak vertices, and then we have an amplitude in which $\bar c$ corresponds to the $\bar c$ state that forms the $D_s^-$. In order to evaluate these matrix elements, we choose for convenience a reference frame where the $D_s^-$ is produced at rest. In this frame the $\Lambda_b$ and $\Lambda_c$ have the same momentum, $p$, given by Eq. (2). Furthermore, neglecting the internal momentum of the quarks versus $p$, which is of the order of 5000 MeV, we can write the quark momenta as in Eq. (3), since these ratios are related just to the velocity of $\Lambda_c$ or $\Lambda_b$. In that frame the quarks of the $D_s^-$ will be at rest, and we take the usual spinors of Eq. (4), where $p$, $m$ and $E_p$ are the momentum, mass and energy of the quark. The spinors $v_r$ for the antiparticles are taken in the Itzykson-Zuber convention [52], and we take the Dirac representation for the $\gamma^\mu$ matrices (Eq. (7)). By using Eq. (3), we can rewrite the spinors in terms of $M$, $E$, the mass and energy of the $\Lambda_b$ or $\Lambda_c$ in the $D_s^-$ rest frame, and $B_Q$, $p_Q$, the $B$ factor of Eq. (5) and the $b$ or $c$ quark momentum, respectively. We also note that in the spinor and $\gamma^\mu$ convention that we use, $\gamma_5 u_r = v_r$, so that we can use spinors corresponding to particles instead of antiparticles when the latter occur, just changing a global sign, since we have only one antiparticle, $\bar c$.

The next consideration is that we must combine the spins $S_1$, $S_2$ to form a pseudoscalar or a vector, and then we must implement the particle-hole conjugation. For this, let us recall that a state with angular momentum $j$ and third component $m$ behaves as a hole (an antiparticle in this case) of $j, -m$, with the phase of Eq. (10). Since $(-1)^{2m} = -1$ for quarks, we incorporate the sign of Eq. (9) and the phase of Eq. (10) by considering the spinor of spin $S_2$ in Fig. 3 as a state that combines with $S_1$ with spin third component $-S_2$ and phase $(-1)^{1/2+S_2}$. Then $\langle 1/2, S_1|\,\langle 1/2, -S_2|\,(-1)^{1/2+S_2}$ will combine to give total spin $j = 0, 1$ for pseudoscalar or vector production. The next step is to realize that the state $|1/2, S_1\rangle\,|1/2, -S_2\rangle$, which will form the $D_s^-$, is at rest, and the $\gamma^\mu$, $\gamma^\mu\gamma_5$ matrices reduce to $\gamma^0 \equiv 1$, $\gamma^i\gamma_5 \equiv \sigma^i$ in the bispinor $\chi_r$ space, so that we are led to evaluate the matrix element of Eq. (12), where the $\langle M'| \cdots |M\rangle$ matrix elements are evaluated in the $D_s^-$ rest frame. This is done in Appendix A. The width is then given by Eq. (13), where, as shown in Appendix A, $|t|^2$ is given by Eq. (14), with $j = 0$ for $D_s^-$ production and $j = 1$ for $D_s^{*-}$ production. The momentum $P_{D_s^{(*)-}}$ is the $D_s^-$ or $D_s^{*-}$ momentum in the decay of the $\Lambda_b$ in its rest frame, and $A$ and $B$ in Eq. (14) are given by Eq. (8), the primed magnitudes for $\Lambda_c$ and the unprimed ones for $\Lambda_b$. We recall that $p$ in Eq. (14) is the momentum of $\Lambda_b$ or $\Lambda_c$ in the rest frame of the $D_s^{(*)-}$, given in Eq. (2). We observe that in the absence of the $p$ terms, the production of the vector state has a strength a factor of three bigger than that of the pseudoscalar one. However, the $B$ terms are relevant and are responsible for deviations from this ratio, as we shall see in Sect. 4.

External emission in B decays

We will be looking at the decays $\bar B^0 \to D_s^{(*)-} D^{(*)+}$. The quark diagram for the transitions is shown in Fig. 4, where $jm$ denotes the spin and its third component for the meson $D_s^{(*)-}$ that comes from the $W^-$ conversion into $\bar c s$, and $j'm'$ denotes the spin and its third component for the meson $D^{(*)+}$ that comes from the combination of $c\bar d$. We must couple the $b\bar d$ quarks to spin zero, $\bar c s$ to $jm$, and $c\bar d$ to $j'm'$. This is done explicitly in Appendix B. The results that we obtain there are summarized as follows.
According to Appendix B, we obtain the following cases: (A) $j = 0$, $j' = 0$; (B) $j = 0$, $j' = 1$; (C) $j = 1$, $j' = 0$; with the corresponding expressions for $|t|^2$ (Eqs. (16)-(19)) written in terms of $A$, $B$, $A'$, $B'$ and $p$, and we must change $D^+$ to $D^{*+}$ in the case of $D^{*+}$ production. The momentum $p$, as discussed before, is the momentum of the $\bar B$ or $D^+$ ($D^{*+}$) in the rest frame of the $D_s^-$ ($D_s^{*-}$), given by Eq. (20), where by $D_s^{(*)}$ we indicate either $D_s^-$ or $D_s^{*-}$, and the same for $D^{(*)+}$. The energies in Eq. (20) are $E_i = \sqrt{M_i^2 + p^2}$. In this case, following the normalization convention of the meson fields in Mandl and Shaw [53], the width is given by the two-body formula with $P_{D_s^{(*)}}$ the $D_s^{(*)-}$ momentum in the $\bar B^0$ rest frame. We can see that cases (B) and (C), corresponding to $D_s^- D^{*+}$ and $D_s^{*-} D^+$ production, have the same strength, which is in very good agreement with experiment [27], as we shall see in Sect. 4.

Internal emission

We study now another topology of the weak decay process, internal emission, and again we differentiate the case of baryon decay from that of meson decay.

$\Lambda_b$ decay in internal emission

We look now at the process depicted in Fig. 5 for the decay $\Lambda_b \to \eta_c (J/\psi)\, \Lambda$. Once again we look at the most favored Cabibbo-allowed process. The $b$ quark converts to a $c$ quark, and $A$, $A'$, $B$, $B'$ are the coefficients of Eq. (8) for the $\Lambda_b$ and $\Lambda$, respectively. The matrix element of Eq. (24) can then be rewritten compactly. Note that in the absence of the $p$-dependent terms, the ratio of production of $j = 1$ to $j = 0$ is a factor 3, apart from phase space. This is what was obtained in Ref. [54] with the strict application of heavy quark spin symmetry. The $p$-dependent terms, however, change this ratio, as we shall see. It is also remarkable that this result is the same as the one obtained for external emission (see Eqs. (14) and (27)), even though the original matrix elements are different in the two cases.

Internal emission for meson decays

Now we look at the diagram of Fig. 7. In the former section we coupled the $c\bar c$ pair to $jm$. Here we must couple in addition $b\bar d$ to $00$ and $s\bar d$ to $j'm'$. Once again we take the different terms and project over $00$ for the $\bar B^0$ and $j'm'$ for the final $\bar K^0$. Details of the calculations are shown in Appendix D. The resulting $|t|^2$ expressions are the same as those obtained for external emission in Eqs. (16)-(19), even though the original matrix elements are quite different.

$\Lambda_b$, $\Lambda_c$ decays in external emission

We apply the above formulas to $\Lambda_b$ and $\Lambda_c$ decays, with both the Cabibbo most-favored and the Cabibbo-suppressed modes. The decay width in the Mandl-Shaw normalization is given by Eq. (13), which is generic for all the decays studied here, where $\Lambda_i$, $\Lambda_f$ refer to the initial ($\Lambda_b$ or $\Lambda_c$) and final ($\Lambda_c$ or $\Lambda$) baryons in $\Lambda_b \to D_s^{(*)-}\Lambda_c$ or $\Lambda_c \to \pi(\rho)\Lambda$, and $P_f$ is the momentum of the final meson $m_f$. The Cabibbo-suppressed rates can be calculated using the same formulas, multiplied by the corresponding Cabibbo factor. We can then obtain the widths for all these decays using the same global constant, related to the spatial matrix elements of the quark wave functions, which we do not evaluate but assume equal in all cases. While this is very good when dealing with $D_s^-$ or $D_s^{*-}$ production, it is less so for the other cases, and we should keep this in mind when comparing with data. In Table 1 we give the results. We separate the cases of $\Lambda_b$ and $\Lambda_c$ decays, since they involve different Cabibbo-Kobayashi-Maskawa matrix elements and the spatial wave functions can also be different.
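The kinematic ingredients of the width formulas just quoted (the momentum $p$ of Eq. (2) in the rest frame of the emitted meson, and the final momentum $P_f$ entering Eq. (13)) follow from standard two-body kinematics. The following minimal Python sketch is our own illustration, not the authors' code; the masses are PDG values in MeV, and it reproduces the quark momentum of about 5670 MeV/c quoted later in the discussion of Fig. 8.

import math

def kallen(x, y, z):
    # Kallen triangle function lambda(x, y, z).
    return x*x + y*y + z*z - 2.0*(x*y + y*z + z*x)

def p_in_rest_frame_of(m_ref, M, m_other):
    # Common momentum of the other two particles in the rest frame of the
    # particle of mass m_ref, for the decay M -> m_other + m_ref.
    return math.sqrt(kallen(M*M, m_other*m_other, m_ref*m_ref)) / (2.0 * m_ref)

def p_final(M, m1, m2):
    # Momentum of either final particle in the rest frame of the decaying M.
    return math.sqrt(kallen(M*M, m1*m1, m2*m2)) / (2.0 * M)

M_Lb, M_Lc, M_D = 5619.6, 2286.5, 1869.7      # Lambda_b, Lambda_c, D masses
print(p_in_rest_frame_of(M_D, M_Lb, M_Lc))    # ~5670 MeV/c (Eq. (2) analogue)
print(p_final(M_Lb, M_D, M_Lc))               # P_f for Lambda_b -> D- Lambda_c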
We also make a separate block for the decays of $\Lambda_b$ in the light sector, for the same reasons. The experimental data in this table and the following ones are taken from the averages of the PDG [27]. In the first block we show the results for the $\Lambda_b \to D_s^-\Lambda_c$, $D_s^{*-}\Lambda_c$, $D^-\Lambda_c$, $D^{*-}\Lambda_c$ modes, where the last two modes are Cabibbo-suppressed (we do not count for this purpose the $b \to c$ transition, which is common to all these decay modes). The theoretical errors contain only the relative errors of the experimental datum used for the fit (in some cases later on, where the experimental numbers have different $+$ and $-$ errors, we take the bigger relative error for simplicity). We fit the $\Lambda_b \to D_s^-\Lambda_c$ decay rate and make predictions for the other decay modes; we can only compare with $\Lambda_b \to D^-\Lambda_c$, which is Cabibbo-suppressed. We find results which are barely compatible within uncertainties. We should stress that, on top of the factors found by us, one should implement extra form factors stemming from the spatial matrix elements involving the wave functions of the quarks. These depend on the momentum transfers and hence on the masses. Our position is that, given the small mass differences between $D_s^-$, $D_s^{*-}$, $D^-$ and $D^{*-}$, these intrinsic form factors should not differ much. In any case, the differences found between our theory and the experimental results could serve to quantify ratios of form factors in those decays. Yet, in the present case, we can only conclude that they are very similar for these decays. Another comment worth making is that Eq. (14), in a strict use of heavy quark symmetry, neglecting terms of the type $p/m_Q$, would give a rate three times bigger for the decay into a vector than for the related pseudoscalar. Yet, the theoretical ratio obtained is 1.23, indicating the relevant role played by the $p/m_Q$ terms (the $Bp$, $B'p$ terms) in the decay rates.

In the second block we fit the rate for $\Lambda_b \to K^-\Lambda_c$ to the experimental datum and observe one result which is common to all the results that follow: the rate for $\Lambda_b \to \pi^-\Lambda_c$ is grossly overestimated. Two reasons can be given for this. First, the $\pi^-$ has been considered as a $q\bar q$ state, but being a light Goldstone boson, its structure should be more complex. Second, the light $\pi^-$ mass has the consequence that the intrinsic momentum transfers will be much larger and, consequently, the form factors much smaller. This is one of the cases where the discrepancies found here can be used to determine empirically the intrinsic form factors of the reaction. In the last block, where we fit $\Lambda_c \to K^+\Lambda$, we observe again the overestimation of the $\Lambda_c \to \pi^+\Lambda$ mode. The prediction for $\Lambda_c \to \rho^+\Lambda$ is consistent with the experimental upper bound.

At this point it is mandatory to make one more observation concerning some of the decays in Table 1. We take the $\Lambda_b \to D^-\Lambda_c$ reaction for the discussion, and the arguments can also be applied to the related modes. For these decays there is an alternative decay topology to the external emission, involving transfer diagrams, which for the $\Lambda_b \to D^-\Lambda_c$ case we depict in Fig. 8. The new mechanism is also Cabibbo-suppressed at the upper $W^-$ vertex, like the external emission mechanism of Table 1 (see Fig. 2, replacing $\bar c s$ by $\bar c d$). But there is a very distinct feature in the new topology: a $d$ quark from the $\Lambda_b$ is transferred to the $D^-$ meson, and the $d$ quark originating from $W^- \to \bar c d$ is trapped by the final $\Lambda_c$ state. Such transfer reactions occur in nuclear reactions and also in reactions using quark degrees of freedom. Normally these mechanisms involve large momentum transfers and are highly penalized.
Reduction factors of three orders of magnitude or more are common in nuclear reactions involving transfers of nucleons from the projectile to the target [55]. In quark models of hadron interactions [56], such mechanisms are taken into account by means of "rearrangement" diagrams, which are also drastically reduced compared to the direct diagrams [57,58]. In our case, to have an idea of the momentum transfers involved, let us go to the frame where the meson produced is at rest; there the momentum of $\Lambda_b$ and $\Lambda_c$ is 5670 MeV/c (see Eq. (2)). Then in the mechanism of Fig. 8 we have to make a large momentum transfer to bring the $d$ quark of the $\Lambda_b$ to the $D^-$ at rest, and similarly a large momentum transfer to bring the $d$ quark produced at rest in the $W^- \to \bar c d$ vertex to the fast-moving $\Lambda_c$ in that frame. The matrix elements accounting for such mechanisms involve form factors with large momentum transfers that make these mechanisms extremely small.

It is interesting to compare our results with those of Ref. [18], where, using a quark-diquark picture and light-front dynamics, the baryonic decay rates of Table 1 have also been evaluated. It is not possible to compare absolute values, because they have been fitted to different observables, but some of the ratios can be compared. Our emphasis has been on relating the vector and pseudoscalar decay modes. In this sense, the ratio between the branching ratios for $\Lambda_c \to K^+\Lambda$ and $\Lambda_c \to K^{*+}\Lambda$ is 3.39 in our case versus 0.53 in Ref. [18]. The ratios between two vector channels are more similar; the ratio between the branching ratios for $\Lambda_c \to \rho^+\Lambda$ and $\Lambda_c \to K^{*+}\Lambda$ is 27.8 in our case versus 21.5 in Ref. [18]. In the case of $\Lambda_b$ decays, the ratio of rates between $\Lambda_b \to K^-\Lambda_c$ and $\Lambda_b \to K^{*-}\Lambda_c$ is 3.21 in our case versus 0.55 in Ref. [18]. However, the corresponding vector-channel ratios are more similar, 0.81 in our case versus 0.67 in Ref. [18]. It is clear that the models provide different results, and this makes the measurement of the missing rates more urgent in order to keep learning about the theoretical aspects of these reactions.

$\Lambda_b$, $\Lambda_c$ decays in internal emission

In this case we have the $\Lambda_b \to \eta_c(J/\psi)\,\Lambda$ decays; there are no decays of this type for the $\Lambda_c^+$ with $\Lambda$ in the final state. In Table 2 we make predictions for three decay modes, fitting $\Lambda_b \to J/\psi\,\Lambda$, for which there are experimental data. Once again we see that the ratio of the rates for $\Lambda_b \to J/\psi\,\Lambda$ and $\Lambda_b \to \eta_c\Lambda$ is only 1.49, instead of the factor three that one would obtain with strict heavy quark symmetry. Once again, the $Bp$, $B'p$ terms are responsible for this difference.

B decays in external emission

Because here we have only mesons, the width in the Mandl-Shaw normalization is given by the two-body meson formula, where $M_1$, $M_2$ are the masses of the final mesons. The results for $B$ decays in external emission are shown in Tables 3, 4, 5 and 6. In Table 3 we distinguish again the heavy sector from the light sector. In the heavy sector, fitting $\bar B^0 \to D_s^{*-}D^{*+}$ to experiment, we obtain results in good agreement with experiment. In the case of $\bar B^0 \to D_s^- D^+$, there is an overestimate of about 50%, counting the extremes of the error bands, which indicates again the reduction effect that the form factor has in going from the masses of $D_s^{*-}D^{*+}$ to the lighter ones of $D_s^- D^+$. It is interesting to remark that, within the same masses, the rates for the PV and VP decay modes ($D_s^{*-}D^+$ and $D_s^- D^{*+}$) are the same. We see this in experiment within errors. Even more, the ratios of the central values are 1.08 for experiment and 1.06 for the theory.
More interesting is to realize that these two modes are proportional to $(Bp + B'p)^2$ (see Eqs. (17), (18)) and would be strictly zero in the heavy quark counting. Let us also stress that in this counting the rate for the VV decay relative to the PP decay would be a factor of three; experimentally it is 2.45, indicating the more moderate role of the $Bp$, $B'p$ terms in this case. In the light sector we see again the gross overestimate of the rates for the modes with a pion in the final state. More surprising is the discrepancy of a factor 1.7, counting the extremes of the errors, for the $\bar B^0 \to \rho^- D^{*+}$ decay, although given the large experimental errors speculation is not appropriate at present. Concerning the two pionic modes, it is still rewarding to see that the ratio of the rates of the two pionic decay modes is in good agreement with experiment. This should be the case if the discrepancies in the absolute rates are due to the intrinsic form factor, because this should be very similar for $\pi^- D^+$ and $\pi^- D^{*+}$.

The results in Table 4 are related to those in Table 3; only the spectator $\bar d$ quark is substituted by a $\bar u$. Once again, in the heavy sector we find good agreement with experiment. There is still a small overestimate of the rate for $B^- \to D_s^- D^0$, as in the related former case of $\bar B^0 \to D_s^- D^+$, but counting the extremes of the errors the discrepancy is only about 10%. In the light sector we find again the discrepancy in the modes with a final pion. Interesting is the rate for $B^- \to \rho^- D^{*0}$, where, counting the extremes of the errors, the discrepancy is only about 10%, unlike the larger discrepancy in $\bar B^0 \to \rho^- D^{*+}$ that we discussed before. The ratio of the experimental rates for $B^- \to D_s^- D^{*0}$ and $B^- \to D_s^{*-} D^0$ is 1.08 versus 1.06 for the theory, and the ratio of the experimental rates of $B^- \to D_s^{*-} D^{*0}$ to $B^- \to D_s^- D^0$ is 1.9, while in the counterpart $\bar B^0$ decay it is 2.45. However, the ratios can be made compatible playing with the uncertainties. In the light sector we find again the large rate for the pionic decay modes, and the $B^- \to \rho^- D^{*0}$ rates are compatible within uncertainties. The ratio of the two pionic decay rates is roughly compatible with experiment within errors.

At this point it is important to look at the results of Tables 3 and 4 from a different perspective. As we have mentioned, the difference between Tables 3 and 4 is that we have replaced the spectator $\bar d$ quark by a $\bar u$. In this sense, within the pure mechanism of external emission, we should expect the same rates, up to a minor effect from the difference of masses in the phase space. This is actually the case in the first block of Tables 3 and 4, both theoretically and experimentally. However, in the second block the experimental numbers are about double in Table 4 compared with Table 3. This requires an explanation. Indeed, while the first block of decays proceeds through external emission, the second block can also proceed via internal emission, as shown in Fig. 9 for $B^- \to \pi^- D^0$. Internal emission is color suppressed and is expected to be reduced by about a factor of three relative to external emission, and thus by about one order of magnitude in the rate. This is the general rule experimentally, but in processes where the two mechanisms are possible we expect an interference, and assuming constructive interference we would have a factor $(1 + \frac{1}{3})^2 \simeq 1.8$ in the rates of the second block of Table 4 versus the counterpart in Table 3. This is actually the case, and it has been studied in Refs. [2,59-63].
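As a quick numerical check of this color-counting estimate (our own illustration), normalizing the external-emission amplitude to 1 and taking the color-suppressed internal-emission amplitude as 1/3:

a_external = 1.0
a_internal = 1.0 / 3.0                 # color-suppressed amplitude
print((a_external + a_internal)**2)    # -> 1.78, the factor ~1.8 quoted above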
Two amplitudes, $a_1$, $a_2$, are considered to account for the effective charged current (i.e., external emission in our case) and the effective neutral current (i.e., internal emission in our case). The amplitudes $a_1$ and $a_2$ are fitted to experiment for $\bar B^0 \to \pi^- D^+$, $\rho^- D^+$, $\pi^- D^{*+}$ and $\rho^- D^{*+}$, and both the relative magnitude and the sign are obtained for $a_1$, $a_2$, with $a_2/a_1 = 0.25 \pm 0.07 \pm 0.06$ [2,62]. (Fig. 9 caption: internal emission mechanism for the $B^- \to \pi^- D^0$ decay. Table footnote: this datum is obtained by averaging $D^0 \to K^{*-}\pi^+$; $K^{*-} \to K^-\pi^0$ and $D^0 \to K^{*-}\pi^+$; $K^{*-} \to K^0_S\pi^-$.) Up to some coefficient close to 1 in the $a_2/a_1$ term, this reproduces the experiment in a picture qualitatively similar to the one exposed above, based on color counting and constructive interference. Different values for $|a_2/a_1|$ are obtained in Ref. [36], with a relatively small phase in the ratio $a_2/a_1$, but qualitatively similar. Assuming similar relative contributions of the internal emission mechanism in all cases, the ratios of rates in the second block of Table 4 still make sense, except for the pionic production mode, for the reasons exposed throughout this work. Yet, the ratio between the related $\pi^- D^0$ and $\pi^- D^{*0}$ modes should also be meaningful, and we see that it is in good agreement with experiment within errors.

The rates in Table 5 are also related to those in the former two tables. Fixing the rate for $\bar B^0_s \to D_s^{*-}D_s^{*+}$, the rates for $\bar B^0_s \to D_s^{*-}D_s^+ + D_s^- D_s^{*+}$ are compatible within errors, and the one for $\bar B^0_s \to D_s^- D_s^+$ is a bit overestimated even counting errors. In the light sector we find again a gross overestimate for the pionic decay modes, but the results for $\bar B^0_s \to \rho^- D_s^{*+}$, obtained by fixing the rate of $\bar B^0_s \to \rho^- D_s^+$, are compatible with experiment. Once again, the ratio of the two pionic decay modes is compatible with experiment. Actually, the fact that the ratios of decays to $\pi P$ and $\pi V$ are compatible with experiment, in spite of the very different expressions for $|t|^2$, reinforces our statement that it is the intrinsic form factor (independent of whether we have P or V in the final state) which is responsible for the overcounting of the rates in the theory. In Table 6 we show predicted ratios for several decay modes of $B_c^-$. Since we expect the rates for decay modes with pions in the final state to be grossly overcounted, it is not surprising that the ratios of heavy decay modes to those with pions in the final state come out rather small compared with experiment. Yet, the only ratio that we can compare involving heavy decay modes (line three of the table) is in very good agreement with experiment.

D decays in external emission

We look now at the corresponding $D$ decay modes. To evaluate the rates with $\pi^0$, $\rho^0$, $\eta$, $\eta'$ production, we must look at the $d\bar d$ component for $\pi^0$, $\rho^0$, and the $s\bar s$ component for $\eta$, $\eta'$. With the $\eta$, $\eta'$ mixing of Ref. [64], in the case of $\pi^0$, $\rho^0$ production we must multiply the standard formula by $\frac{1}{2}$, in the case of $\eta$ production by $\frac{1}{3}$, and in the case of $\eta'$ production by $\frac{2}{3}$. The results for $D$ decays in external emission are shown in Tables 7, 8 and 9. In Table 7 we show results for $D^0$ decays. We separate the sectors with $\pi$ or $\rho$ in the final state because of the different masses. In the pionic sector we find that the $D^0 \to \pi^+\pi^-$ mode, with two pions in the final state, is overcounted, but the $D^0 \to \pi^+\rho^-$ mode comes out fine when $D^0 \to \pi^+ K^{*-}$ is fitted to experiment. The $D^0 \to \pi^+ K^-$ mode is also overcounted.
In the $\rho$ sector, once again the $\rho^+\pi^-$ mode is overcounted, following the general trend. (Detached table entry: $\bar B^0 \to D^0\pi^0$, $(2.11 \pm 0.14) \times 10^{-3}$ theory versus $(2.63 \pm 0.14) \times 10^{-4}$ experiment.) The $D^+$ decay modes of Table 8 are closely related to those of the $D^0$ decays discussed before. In the pionic decay sector we fit $D^+ \to \pi^+\bar K^{*0}$, and then the $D^+ \to \pi^+\rho^0$ rate is basically compatible with experiment within errors, while the $D^+ \to \pi^+\bar K^0$ mode is a bit overcounted. The $D^+ \to \pi^+\pi^0$ mode, with two pions in the final state, is also overcounted, following the general trend. In the $\rho$ sector we do not have experimental rates to compare with, and we leave the predictions there. In Table 9 we see results for $D_s^+$ decays. There we have fitted the $D_s^+ \to \pi^+\eta$ mode, and the rates for the $D_s^+ \to \pi^+\eta'$ and $D_s^+ \to \pi^+\phi$ modes come out a bit smaller than those of experiment. Following the general trend, it is better to assume that the $D_s^+ \to \pi^+\eta$ rate, with smaller masses, would be a bit overcounted, and the rates for the bigger-mass modes would then be in better agreement with experiment. The Cabibbo-suppressed modes $D_s^+ \to \pi^+ K^0$ and $D_s^+ \to \pi^+ K^{*0}$ are in fair agreement with experiment within errors. In the $\rho$ sector, if we fit $D_s^+ \to \rho^+\phi$, the $D_s^+ \to \rho^+\eta$ rate is a bit small compared with experiment, and the $D_s^+ \to \rho^+\eta'$ rate is smaller by more than a factor of two, counting errors.

B decays in internal emission

We look at the corresponding cases. Note again that in the case of $\pi^0$, $\rho^0$ production we must multiply the standard formula by $\frac{1}{2}$, and in the case of $\eta$, $\eta'$ production by $\frac{1}{3}$ and $\frac{2}{3}$, respectively, considering the $d\bar d$ and $s\bar s$ content of these mesons. Note also that case 2) is unrelated to the others, because we have a different radial wave function, but we can calculate the ratio of the two and relate it to the rates of case 5). Tables 10, 11 and 12 show the branching ratios for $\bar B^0$, $B^-$ and $\bar B_s$ decays in internal emission, and Table 13 shows the ratios of branching ratios for $B_c^-$ decays in internal emission. In Table 10 we show results for $\bar B^0$ decays with internal emission. In the $\eta_c$, $J/\psi$ decay sector, the rates obtained are in quite good agreement with experiment, with the $\bar B^0 \to \eta_c\bar K^0$ rate a bit overcounted, following the trend for all PP decays, due to the smaller masses involved, which produce larger momentum transfers and, thus, reduced intrinsic form factors. The modes with $\psi(2S)$ in the final state can be considered in just rough agreement. In the $D^0$, $D^{*0}$ decay sector, once again the pionic modes are grossly overcounted, and the predictions for $\bar B^0 \to D^{*0}\rho^0$ are compatible with the experimental upper bound. In Table 11 we show results for $B^-$ decays in internal emission. The results are related to the former ones, since we have just changed a $\bar d$ spectator quark into a $\bar u$ quark. The results are similar to those of the $\bar B^0$ decays, with a bit of overcounting for the $B^- \to \eta_c K^-$ rate. The other rates involving $\eta_c$ or $J/\psi$ are basically compatible with experiment within errors. We omit results in Table 11 for the $B^- \to D^0\pi^-$, $D^{*0}\pi^-$, $D^0\rho^-$, $D^{*0}\rho^-$ decays. Indeed, these modes proceed via both internal and external emission, and there is a constructive interference between them. We discussed this issue when referring to Table 4 in Sect. 4.3. The external emission mode, color favored, is dominant, but the color-suppressed internal emission mode, through interference, increases the decay width of these modes by about a factor of two. In Table 12 we show results for $\bar B_s$ decays in internal emission.
In the $J/\psi$, $\eta_c$ sector the results obtained are fair compared with experiment. Those in the $D^0$, $D^{*0}$ decay sector are also fair. In Table 13 we show ratios of rates involving $B_c^-$ decays. The only one measured is in fair agreement with experiment.

At this point we would like to discuss the momentum transfers involved in the reactions and their repercussion on the form factors. In Table 14 we show some reactions involving pions in the final state and the reaction used to normalize the data, together with the overcounting factor of the $\pi$ decay mode. (Table 14 caption: the momentum transfer ($q$) in reactions involving pions in the final state and the approximate $\pi$ overcounting factor (OCF).) The momentum transfer from one hadron to another is calculated in the rest frame of the decaying particle. In the second and third blocks, where the $\pi$ overcounting factor is of the order of 100, we see that the momentum transfer is very large, and the difference between the momenta in the $\pi^- D^+$ and $\rho^- D^+$ decay modes is about 70 MeV/c. This seems to indicate that we are at the tail of the form factor, where it decreases rapidly, so that a difference of 70 MeV/c can make such a difference. On the contrary, in the first block, where the overcounting factor is of the order of 15, the difference of momenta between $\Lambda_b \to \pi^-\Lambda_c$ and $\Lambda_b \to K^-\Lambda_c$ is only about 30 MeV/c, which makes the changes in the form factor less drastic. For the case of $\Lambda_c \to \pi^+\Lambda$ and $\Lambda_c \to K^+\Lambda$, the difference of momenta is of the order of 83 MeV/c, larger than before, but the total momentum transfers are substantially smaller, so we are in a region where the form factor does not fall so fast. In the fourth block the momenta are similar to those in the $\Lambda_c$ decay modes. The difference of momenta between $D^0 \to \pi^+ K^-$ and $D^0 \to \pi^+ K^{*-}$ is 150 MeV/c, and the total momentum transfer is much smaller than in the $\bar B$ decay modes, so we obtain an overcounting factor of about 5, much smaller than in the $\bar B$ decays. The overcounting factor is about a factor of two for $D^+ \to \pi^+\bar K^0$ and $D^+ \to \pi^+\bar K^{*0}$, but counting the errors in the rates, the difference between these two cases is not very large, and the important thing is that we can qualitatively understand the reason for these different overcounting factors. In the fifth block, for the $\bar B^0 \to \pi^0 D^0$ and $\bar B^0 \to \rho^0 D^0$ decay modes, we find a surprise, since the difference of momenta between them is of the order of 70 MeV/c, like that in the second block, so we should also expect an overcounting factor of the order of 100. Yet, the overcounting factor is only of the order of 8. The difference between these decays is that $\bar B^0 \to \pi^- D^+$ proceeds via external emission, while $\bar B^0 \to D^0\pi^0$ proceeds via internal emission. We find a plausible explanation for this: in external emission the momentum transfer, $q$, is carried by a single $W$ (see Fig. 4), while in internal emission this momentum can be shared between two $W q\bar q$ transitions (see Fig. 7). It is well known in nuclear physics, applying Glauber theory, that in such cases the optimal rate appears when the momentum transferred is equally shared between the two scattering points [65,66]. Then, assuming a simple form factor $e^{-\alpha^2 q^2}$, typical of quark models, we would have $e^{-\alpha^2 (q/2)^2}\, e^{-\alpha^2 (q/2)^2} = e^{-\alpha^2 q^2/2}$ in internal emission versus $e^{-\alpha^2 q^2}$ in external emission. So the effect of the form factors should be more drastic in external emission. Finally, in the sixth block we show for reference the momentum transfers in $B^- \to D_s^- D^0$ and $B^- \to D_s^{*-} D^0$.
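Before commenting on that last block, the Glauber-type sharing argument can be illustrated numerically. Assuming the Gaussian form factor $e^{-\alpha^2 q^2}$ quoted above, with an arbitrary illustrative scale $\alpha$ (not a value from the paper), the following sketch compares the suppression in external emission (one vertex carries all of $q$) with internal emission (two vertices each carry $q/2$):

import math

alpha = 1.0 / 1500.0                 # hypothetical scale in (MeV/c)^-1
for q in (500.0, 1000.0, 2000.0):    # momentum transfer in MeV/c
    external = math.exp(-alpha**2 * q**2)             # single W carries q
    internal = math.exp(-alpha**2 * (q/2.0)**2)**2    # = exp(-alpha^2 q^2 / 2)
    print(q, external, internal, internal / external)

The ratio internal/external equals $e^{\alpha^2 q^2/2}$ and grows with $q$, mirroring the much milder overcounting found for the internal emission mode.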
Returning to the sixth block of Table 14: we see that the momenta are smaller than in the pionic modes studied before, and this should make the predictions in that sector more reliable.

D decays in internal emission

We have the following cases:
1. $D^0 \to \bar K^0\pi^0$, $\bar K^{*0}\pi^0$, $\bar K^0\rho^0$, $\bar K^{*0}\rho^0$;
2. $D^0 \to \pi^0\pi^0$, $\rho^0\pi^0$, $\pi^0\rho^0$, $\rho^0\rho^0$ (Cabibbo suppressed);
3. $D^+ \to \bar K^0\pi^+$, $\bar K^{*0}\pi^+$, $\bar K^0\rho^+$, $\bar K^{*0}\rho^+$;
4. $D^+ \to \pi^0\pi^+$, $\rho^0\pi^+$, $\pi^0\rho^+$, $\rho^0\rho^+$ (Cabibbo suppressed).

We should note that the decay modes for $D^+$ are the same as those in external emission in Table 8, and there can be a mixing. The amplitudes with external emission are bigger, since that mode is color favored, and it will dominate. However, as we saw in the discussion concerning the $\bar B^0 \to \pi^- D^+$ and $B^- \to \pi^- D^0$ decays, the interference of the two mechanisms can lead to larger decay rates than with external emission alone. However, in the present case, comparing $D^+ \to \pi^+\bar K^0$ and $D^0 \to \pi^+ K^-$, if the pattern of interference were like that in $B \to \pi D$ decays we should expect a bigger rate for $D^+ \to \pi^+\bar K^0$, which proceeds via the two mechanisms. Yet, experimentally the rate for $D^0 \to \pi^+ K^-$ is bigger than that for $D^+ \to \pi^+\bar K^0$, and the same happens with $D^+ \to \pi^+\rho^0$ versus $D^0 \to \pi^+\rho^-$, where the second rate is bigger than the first, even if we multiply the $D^+ \to \pi^+\rho^0$ rate by a factor 2 to account for the reduction factor of 1/2 mentioned above. It is clear that the pattern of interference is different for $D$ mesons and $B$ mesons. Yet, as in the case of Tables 3 and 4, the ratios of rates in Table 8 should be fair.

For $D^0$ and $D_s$ decays, the modes obtained here are different from those of external emission, but they can be reached by strong-interaction rescattering. Looking at Tables 3 and 10 for $\bar B^0$ decay in external and internal emission, we see that the former has rates one order of magnitude bigger than the latter. It is then quite likely that the internal emission modes are more easily reached by external emission followed by strong rescattering, and thus predictions made from the internal emission formulas would be misleading. We therefore refrain from showing results for these modes. One might argue that the $B$ decay modes from internal emission could also be obtained by external emission followed by strong-interaction rescattering. Yet, this is far more unlikely than in $D$ decays. Indeed, take $\bar B^0 \to D_s^- D^+$ in external emission and $\bar B^0 \to \eta_c\bar K^0$ in internal emission. $D_s^- D^+$ and $\eta_c\bar K^0$ are coupled channels, but the transition from one to the other requires the exchange of a $D_s^*$ vector meson in the extended hidden-gauge approach and is penalized by the large mass in the $D_s^*$ propagator [67-69]. On the contrary, if we take $D^0 \to \pi^+ K^-$ from external emission and $D^0 \to \pi^0\bar K^0$ from internal emission, the $\pi^+ K^- \to \pi^0\bar K^0$ transition requires the exchange of a $\rho$ and gives rise to the standard chiral potential. The $B$ decay modes by internal emission are thus genuine modes, with an expected small interference from external emission followed by rescattering.

Conclusions

We have made a study of the properties of internal and external emission in the weak decays of heavy hadrons from the point of view of the spin-angular momentum structure, differentiating between the vector and pseudoscalar decay modes. The rest of the structure is given by intrinsic form factors related to the spatial wave functions of the quark states, which do not differentiate the spins of the mesons formed.
In this sense, for similar masses of the decay products, like $\eta_c$, $J/\psi$ or $D$, $D^*$, we can obtain decay rates up to a global factor. Yet, we are not using heavy quark symmetry; actually, we show that the $B \to PV, VP$ decay modes are proportional to $(Bp)^2$ or $(B'p)^2$, which are terms of the type $(p/m_Q)^2$, with $m_Q$ the heavy quark mass, and would be neglected in a strict heavy-quark-symmetry counting. We show that these modes have a strength similar to that of the $B \to PP, VV$ modes, which survive in the heavy quark limit, and these predictions are corroborated by experiment. The derivation of the final formulas requires a good deal of angular momentum algebra, which we have written out in the appendices. Yet, the final formulas are rather easy, and we can show that $|t|^2$ is formally the same for internal and external emission.

We applied the formulas to correlate a large amount of data on $\Lambda_b$, $\Lambda_c$ decays and $B$ or $D$ decays, involving more than 100 reactions. We have taken a given datum for a certain decay rate and then made predictions for the related reactions. The agreement is in general quite good, and the discrepancies are systematic. The most remarkable one is that decay modes involving pions in the final state are overcounted in our approach. We gave an explanation for this: the small mass of the pion leads to larger momentum transfers, which reduce the intrinsic form factors related to the spatial wave functions of the quarks involved; these are independent of the spin rearrangements, since all the quarks are in their ground state and only the spin rearrangement differentiates a pseudoscalar from a vector, say $\eta_c$, $J/\psi$ or $D$, $D^*$.

The results obtained go beyond the evaluation of ratios and the predictions made for decay rates. We have evaluated the amplitudes for $B \to PP, PV, VP, VV$ with their momentum and spin structure and proper relative phases, and this information is thus valuable if one wishes to evaluate loops containing these intermediate channels, as one would like to do in studies related to the possible lack of universality. The discrepancies in the pion production modes can be used to extract information on the intrinsic form factors involved in the reactions, beyond the spin-angular momentum structure that we have studied in detail. The formulas obtained are ready to be compared with future measurements involving the polarizations of the vector mesons produced. In most cases we have made predictions for rates that have not yet been measured; the results obtained here can be compared with future measurements. The rates obtained can also be used in analyses that require estimates of some rates to induce other rates. Finally, the formalism deduced here also lays the grounds for further studies in which one can have internal or external emission, as we have done, with hadronization in the final state, creating a $q\bar q$ pair with the quantum numbers of the vacuum, which, together with the primary $q\bar q$ pair formed, leads to two mesons. In this case one would address decay processes with three particles in the final state and help correlate a larger number of decay modes already observed.

Appendix A

As was shown in Eq. (12), we must evaluate the matrix element given there. Using the spinors of Eqs. (4) and (8) and the $\gamma^\mu$ matrices of Eq. (7), we obtain a result expressed in terms of $A$, $A'$ and $B$, $B'$, coming from Eq. (8) for $\Lambda_b$ and $\Lambda_c$, respectively. We proceed now to evaluate the terms $t_i$ of this equation. We use angular momentum algebra following the Rose convention and formalism [70].
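The Clebsch-Gordan coefficients entering the projections below can be cross-checked symbolically. The short Python sketch that follows is our own verification aid, not part of the original derivation; it uses sympy to couple two spin-1/2 states to the spin-0 and spin-1 combinations relevant for pseudoscalar and vector production.

from sympy import Rational
from sympy.physics.quantum.cg import CG

half = Rational(1, 2)
# <1/2 1/2; 1/2 -1/2 | 0 0>: singlet (pseudoscalar-like) coupling.
print(CG(half, half, half, -half, 0, 0).doit())   # -> sqrt(2)/2
# <1/2 1/2; 1/2 1/2 | 1 1>: triplet (vector-like) coupling.
print(CG(half, half, half, half, 1, 1).doit())    # -> 1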
Term $t_1$: Using the Clebsch-Gordan coefficients (CGC) explicitly, we find the sum in this last equation to be zero both for $j = 1$ and for $j = 0$.

Term $t_2$: We write $\sigma^\mu$ in the spherical basis, in terms of the spherical harmonics. Then, using the Wigner-Eckart theorem, which implies $M - \mu = M'$, and combining to angular momentum $jm$, we find that $m = 0$; summing explicitly over $S_1$, as in the case of $t_1$, we obtain the corresponding result.

Term $t_3$: The expression can also be written, using Eqs. (A8), (A9), in a form where combining to $jm$ implies $m = S_1 - S_2$ and $\mu = S_2 - S_1 = -m$. Permuting the order of the arguments in the second CGC, and fixing $S_1 - S_2 = m$, the result can be rewritten in terms of the polarization vector $\epsilon$ of the $j = 1$ $D_s^*$ state, since for a vector of polarization $m$ in the spherical basis one gets the expression of Eq. (A18).

Term $t_4$: This term can likewise be written in the spherical basis. Combining the spins to produce the $jm$ state implies $S_1 - S_2 = m$ and $S_2 - \mu = S_1$, hence $\mu = -m$. We can then permute the arguments in the second CGC and, keeping $m$ fixed, rewrite the result in terms of the $D_s^{*-}$ polarization $\epsilon$; for later use in meson decay we can also write it starting from Eq. (A21).

Term $t_5$: We can use the results of $t_3$ and immediately write the corresponding expression; for later use in meson decay we write it with $\epsilon_m$ the polarization of the vector meson.

Term $t_6$: This term can be written in the spherical basis, as one can see explicitly by writing $\sigma^i$ in terms of $\sigma^\mu$ (see also Ref. [70]). Combining to $jm$, following the steps of $t_3$, implies $S_1 - S_2 = m$ and $S_2 + \mu = S_1$, hence $\mu = m$. Permuting the arguments of the second CGC and keeping $m$ fixed, one finds a relation implying $\nu = M' - M$, or, equivalently, an expression that can be written in terms of the polarization vector by analogy to Eq. (A26). This term does not interfere with any of the other terms, and one easily finds its polarization sum.

$|t|^2$ with all terms: Next we perform the sum and average of $|t|^2$ that appears in the decay width of the $\Lambda_b$ state. We have two cases. (A) $j = 0$: we have contributions from $t_1$, $t_2$ (Eq. (A33)). (B) $j = 1$: we get contributions from $t_3$, $t_4$, $t_5$, $t_6$; $t_3$ and $t_4$ interfere between themselves, and there is no further interference. We can see that, up to the $p$ terms, the strength for $D_s^{*-}$ production is three times bigger than that for $D_s^-$ production.

Appendix B: External emission in B decays

We evaluate the matrix elements for the case of $\bar B^0$ decay. In this case, in addition to coupling the $\bar c s$ pair from the $W$ vertex to $jm$, we must couple the quarks forming the $\bar B^0$ to $|00\rangle$ and the final $c\bar d$ pair to $j'm'$. We have the diagram of Fig. 4, and we take the terms evaluated in the former section.

Term $t_1$: We project over spin zero for the $\bar B^0$ and $j'm'$ for the final $c\bar d$ state. Since the $b\bar d$ state couples to zero spin, the third component of the $\bar d$ spin must be opposite to that of the $b$ quark, $M$; hence $\bar d$ has third component $-M$. The phase $(-1)^{1/2+M}$ from particle-hole conjugation appears twice and can be ignored. Furthermore, we will fix $m'$, which is $M - M'$, and sum over the other spin components. Then, taking $t_1$ from Eq. (A7) and projecting over spin, we obtain the corresponding term.

Term $t_3$: Taking the term $t_3$ from Eq. (A19) and proceeding as for term $t_1$, we obtain the analogous result.

Term $t_4$: We start from $t_4$ of Eq. (A23) and project over spins (Eq. (B13)). Hence, altogether we find the full expression. In order to get the interference with $t_5$, it is convenient to write it with the explicit polarization vectors of $j$ and $j'$.
For this we start from Eq. (B12) and realize that $(-1)^{-m}\,\delta_{m,-m'}$ corresponds to the product of the polarization vectors $\epsilon \cdot \epsilon'$ written in the spherical basis. Hence we can write the term in vector form. Note that $2\sum_{\rm pol} \epsilon_i \epsilon'_i \epsilon_j \epsilon'_j = 2\,\delta_{ij}\delta_{ij} = 2\,\delta_{ii} = 6$, which is the same result that we get when we take $|t_4|^2$ from Eq. (B14) (there the polarizations have already been combined in the amplitude, with $j$, $j'$ coupled to zero spin).

Term $t_5$: We start from the term $t_5$ of Eq. (A25) and project over spins. We cannot now combine $j$, $j'$ to give zero, because we have the two vectors $p_{-m}\, p_{M-M'}$, which can combine to orbital angular momentum $L = 0$ or $L = 2$, and it is $(j \otimes L) \otimes j'$ that must couple to spin zero. Because of this, it is better to write Eq. (B18) in terms of the polarization vectors, which is quite simple because $(-1)^{-m}\, p_{-m}$ is $\epsilon \cdot p$ in the spherical basis and $(-1)^{M-M'}\, p_{M-M'}$ is $\epsilon' \cdot p$. The decomposition of $\epsilon_i p_i\, \epsilon'_j p_j$ into $s$- and $d$-waves shows that the $s$-wave part, $\frac{1}{3}\, p^2\, \delta_{ij}$, has the same $\epsilon \cdot \epsilon'$ structure as $t_4$ in Eq. (B15) and thus interferes with it, while the $d$-wave term sums incoherently.

Term $t_6$: We start from the term $t_6$ of Eq. (A29) and project over spins. It is interesting to see that the case $j = 1$, $j' = 0$ gives the same contribution as that of $j = 0$, $j' = 1$, which is corroborated by experiment.

(D) $j = 1$, $j' = 1$: Here we have contributions from $t_4$, $t_5$, $t_6$. As mentioned before, $t_4$ and $t_5$ interfere partially, but they do not interfere with $t_6$.
Magnetically targeted nanoparticles for imaging-guided photothermal therapy of cancer

Over the past several decades, nanocarriers have constituted a vital research area for accurate tumor therapy. Herein, magnetically targeted nanoparticles (IRFes) for photothermal therapy were generated by integrating IR780, a molecule with strong emission and absorption in the NIR spectrum and the ability to produce heat after laser irradiation, with Fe3O4 nanoparticles (NPs). These IRFes were guided to the tumor site by the application of an external magnetic field. In particular, the strong NIR absorption of IR780 was used for NIRF imaging, and we also demonstrated effective magnetic targeting for the photothermal ablation of tumors. In vitro cell viability and in vivo antitumor experiments showed that these IRFes can ablate 4T1 cells or transplanted 4T1 cell tumors when exposed to 808 nm laser irradiation and a magnetic field. In vivo experiments showed that IRFes act only on tumors, do not damage other organs, and can be used to image tumors. These results demonstrate the enormous potential of local photothermal therapy for cancer under the guidance of external magnetic fields and reveal the prospects for the use of multifunctional nanoparticles in tumor therapy.

Introduction

Cancer seriously threatens all human lives, and the number of cases is expected to increase to approximately 14.6 million worldwide by 2035 [1, 2]. The traditional treatments for cancer patients are chemotherapy, radiotherapy and surgery, used separately or in combination [3]. Conventional cancer therapeutics are usually administered systemically through the circulation in the form of free drug delivery, often with low efficacy and severe side effects [4-7]. Photothermal therapy (PTT) ablation of tumor cells is a new topical treatment that has advanced rapidly in recent years [8-10]. It is based on the principle of directing highly enriched exogenous photothermal therapeutic agents toward tumor sites, where hyperthermia is generated under excitation by near-infrared light (650-900 nm) to induce acute necrosis and apoptosis and an immune response that inhibits the growth of tumors [11-14]. NIR light-induced PTT is a noninvasive local tumor treatment approach that ablates cancer cells by penetrating deeply while rarely damaging normal organs [11, 15-17]. Tumor tissues have very chaotic and sparse vascular structures, which makes dissipating heat difficult and renders them more sensitive to hyperthermia than normal tissues [18]. The key to the efficacy of PTT is the temperature gradient inside the tumor cells and the changes in the surrounding tissues. When the temperature is in the range of 37-41 °C, PTT can accelerate blood flow and increase cell membrane permeability. Cancer cells can be selectively impaired at temperatures between 40 and 48 °C, at which point harmful consequences occur, such as protein unfolding and deformation, disruption of cellular membranes, increased sensitivity to radiotherapy and chemotherapy, and irreversible damage. Within a short time (approximately 5 min), temperatures between 48 and 60 °C can trigger irreversible damage, serious and irreparable protein damage, and DNA deformation and additional damage. Traditional hyperthermia therapy transmits heat to the tumor via radiofrequency radiation [18, 19], ultrasound (US) [20], or microwave.
Laser-triggered tumor ablation has unique advantages over traditional tumor treatment methods, including great controllability, limited damage to surrounding tissues, few systemic side effects, short recovery times and a commensurate reduction in hospitalization times, and visual in-process monitoring. The effectual transmission of nanotheranostic agents to the target region is a necessary condition for cancer therapy. The passive targeting of some nanotherapeutics induced by enhanced permeability and retention (EPR) effects enables specific agent accumulation at the tumor site. In addition, targeting molecules conjugated to the surface of nanoparticles can recognize and bind to specific ligands of cancer cells to further enhance tumor treatment localization. 21 For example, Fe3O4, one of the typical superparamagnetic materials, has been widely used for constructing magnetically targeted therapeutic systems. It has unique strengths: on the one hand, it is more controllable and predictable under the action of an external magnetic field, and on the other hand, it can specifically deliver therapeutic agents, accumulate in high quantities in tumors and minimize the side effects in normal tissues. Additionally, both IR780 and Fe3O4 on NPs can absorb NIR light energy and convert it into heat. Poly(lactide-co-glycolide) (PLGA) has attracted much attention because of its excellent biocompatibility and attractive loading capacity and has been developed as a nanocarrier for multifunctional therapeutics. Several materials that can interact with NIR light, such as graphene oxide, 22 gold nanorods and small molecules including indocyanine green (ICG) 23 and IR780, 24 have been applied to the phototherapy of tumors. IR780, a lipophilic material with strong emission and absorption capacity in the NIR range that can produce heat under laser irradiation, is becoming a focus of cancer treatment and imaging. However, inferior aqueous solubility, rapid clearance and extremely low levels of uptake by tumors have severely restricted the clinical application of IR780. 25,26 Some studies have proposed that IR780 be packaged into various nanomaterials to resolve these difficulties. 27 In our study, we encapsulated IR780 into PLGA NPs, which have a core-shell structure that can enhance antitumor efficacy by simultaneously encapsulating various hydrophobic and/or hydrophilic materials. Herein, we report magnetically targeted photothermal diagnosis and treatment with nanoparticles consisting of superparamagnetic iron oxide (Fe3O4) and IR780 coated with PLGA (IRFes) for imaging and cancer treatment. Subsequently, the PLGA nanoparticles were further demonstrated to have excellent biocompatibility and NIR-based imaging after magnetic targeting in vitro and in vivo. Moreover, effective photothermal ablation of tumors was achieved by utilizing the photothermal effect of IR780 generated by 808 nm NIR laser irradiation and the magnetic targeting induced by Fe3O4. Therefore, IRFes showed not only effective magnetic targeting of PTT to tumors but also NIRF imaging characteristics. Thus, IRFes, as effective therapeutic nanoagents, simultaneously showed great potential for NIR-imaging-guided diagnosis and magnetically targeted PTT under an external magnetic field. Materials PLGA (lactide : glycolide = 50 : 50, Mw = 10 000 Da), IR780 iodide and polyvinyl alcohol (PVA) were purchased from Sigma-Aldrich (USA).
Oleic acid-treated Fe3O4 nanoparticles, with a mean diameter of 10 nm, were purchased from Ocean NanoTech Inc. (USA). Female BALB/c mice (6-8 weeks old and weighing 20 g) were treated on the basis of the guidelines of the Department of Laboratory Animals, Central South University, China, and as approved by the Ethics Committee of the Second Xiangya Hospital, Central South University. Preparation of IRFes Briefly, 100 mg of PLGA and 3 mL of methylene chloride were mixed, and after stirring well, 3 mg of IR780 and 0.2 mL of a Fe3O4 NP suspension (30 mg Fe per mL) were added to the mixture in order; then 15 mL of 4% w/v cold PVA solution was added, and the mixture was emulsified with an ultrasonic processor at a power of 130 W and a frequency of 20 kHz for 2 min. The resulting emulsion was added to 20 mL of deionized water and stirred at room temperature (RT) until the methylene chloride had evaporated. Finally, the resulting NPs were centrifuged at 12 000 rcf for 7 min and washed three times with deionized water. All operations were performed in the dark. Characterization First, transmission electron microscopy (TEM, Hitachi H-7600, Japan) was used to confirm the presence of Fe3O4 particles in the NP shells. The size and surface charge were detected with a Malvern size analyzer (Malvern Nano ZS, UK); the IRFes were dispersed in PBS at a concentration of 1 mg mL−1 (pH 7.4). To explore the magnetic features, we set a magnet next to a glass vial filled with an IRFes solution. The IRFes were injected at one end of a hose (1 mm inner diameter) and were collected at the other end. The injection flow rate was approximately 50 mL min−1 to simulate the intravascular fluid state of the tumor. A magnetic attractor was applied to one side of the hose to observe the response of the nanoparticles to a magnetic field. A Cary 5000 UV-vis-NIR spectrophotometer (USA) was used to detect the absorption spectra of the IRFes and verify the existence of IR780. The IR780 encapsulation and loading efficiencies were calculated according to the following formulas: Encapsulation efficiency (%) = (W_E / W_t) × 100% and Loading efficiency (%) = (W_E / W_T) × 100%, where W_E is the total mass of IR780 encapsulated in the IRFes, W_t is the total mass of input IR780, and W_T is the weight of the PLGA NPs. Colloidal stability of IRFes The nanoparticles were resuspended in PBS or 10% fetal bovine serum (FBS) at a concentration of 5 mg L−1. The same amount of suspension was dispensed into several centrifuge tubes and allowed to stand in a cell culture incubator at 37 °C. A centrifuge tube containing sample was randomly selected for the continuous detection of nanoparticle hydrodynamic size and zeta potential using a Malvern size analyzer. Thermal stability and photostability of IRFes To study the thermal stability of the nanoparticles, 1 mL of 2 mg mL−1 IRFes and indocyanine green (ICG) were subjected to cycles of 808 nm laser irradiation with 4 ON/OFF repeats: laser irradiation (ON, 3 min) and gradual cooling (OFF, 10 min); the temperature of the solution was detected every 30 s, and the IR780 concentration was 0.016 mmol mL−1, the same as that of ICG. The photostability of the IRFes and IR780 was analyzed by NIR fluorescence imaging. Specifically, 0.2 mL of IRFes or IR780 was placed in a 24-well plate at physiological temperature (37 °C), and fluorescence images were captured over 24 h after excitation at a wavelength of 790 nm. A Lumina IVIS Spectrum imaging system (PerkinElmer, USA) was used to analyze the images and accurately quantify the fluorescence signal.
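To make the two efficiency formulas above concrete, a minimal Python sketch is shown below; the encapsulated mass w_e is a hypothetical number chosen for illustration, while the 3 mg IR780 input and 100 mg PLGA come from the preparation protocol above.

def encapsulation_efficiency(w_e, w_t):
    # fraction of the input IR780 that ended up inside the IRFes, in percent
    return w_e / w_t * 100.0

def loading_efficiency(w_e, w_T):
    # encapsulated IR780 mass relative to the PLGA NP mass, in percent
    return w_e / w_T * 100.0

w_t = 3.0      # mg of IR780 added during preparation (from the protocol)
w_T = 100.0    # mg of PLGA (from the protocol)
w_e = 1.45     # mg of IR780 recovered in the NPs (hypothetical value)

print(encapsulation_efficiency(w_e, w_t))  # ~48.3%, close to the reported 48.26%
print(loading_efficiency(w_e, w_T))        # ~1.45%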
In vitro PTT effect One milliliter each of IRFes, IR780, Fe3O4 NPs, a mixture of free IR780 and Fe3O4 NPs, phosphate-buffered saline (PBS), and indocyanine green (ICG) was poured into an Eppendorf tube and irradiated with an 808 nm laser at an intensity of 1.0 W cm−2 for 5 min. ICG and PBS were considered the positive and negative controls, respectively. The concentration of the IRFes was 2 mg mL−1. The concentration of IR780 was 3.54 mg mL−1, and the concentration of Fe was 22.5 mg mL−1. The molar concentration of ICG was 0.016 mmol mL−1, equal to that of IR780 in our experiment. The temperature was determined using an infrared thermal imaging camera (FLIR C2, USA). Then, 1 mL of each IRFes concentration (0, 0.5, 1.0 and 2.0 mg mL−1) was placed under the 808 nm laser, and the temperature was recorded every 30 s. Cell experiments Cell cytotoxicity induced by IRFes exposure was measured by MTT assay. 28 Mouse breast cancer 4T1 cells were plated at 1 × 10^4 cells per well in a 96-well plate for 12 h at 37 °C and 5% CO2 in an incubator. Different concentrations of IRFes (0.2, 0.4, 0.6 and 0.8 mg mL−1), or no IRFes, were added to the cells and cultured for 6 h. At the same time, the magnetic targeting group was treated with four round magnets (diameter = 1.0 cm, maximum magnetic field strength = 6.0 Gs) under the four corners of each 96-well plate for the first 2 h, and the cells were then subjected to laser light at 1.0 W cm−2 for 5 min. The magnetic field strength decreased with distance. The depth of the liquid in each well was 3 mm, and the bottom thickness of the 96-well plate was 1 mm. The magnetic field strength was 5.2 Gs at 1 mm from the magnet and 4.7 Gs at a distance of 4 mm. Finally, the cell viability after each treatment was calculated. The level of IRFes phagocytosed was investigated by confocal laser scanning microscopy (CLSM, Zeiss LSM 510) to further evaluate the magnetic targeting efficacy. Medium (0.1 mL) with 0.2 mg mL−1 IRFes was added to 4T1 cells. The abovementioned magnets were placed under the dish with cells and allowed to stand for 1 h. 4T1 cells not subjected to a magnetic field were used as controls. The dish thickness was approximately 1 mm, and the liquid depth in the dish was 3 mm. All cells were washed thoroughly 3 times with PBS and stained with 4′,6-diamidino-2-phenylindole (DAPI). To directly inspect photothermal efficiency, 4T1 cells from four treatment groups, (I) laser irradiation, (II) IRFes, (III) IRFes with laser irradiation, and (IV) IRFes with laser irradiation and magnetic targeting, were stained with propidium iodide (PI), which marks dead cells, and DAPI. The concentration of IRFes was 0.2 mg mL−1. The cells were subjected to the magnet for 2 h and then irradiated with a laser at 1 W cm−2 for 5 min. The cells were examined using an inverted fluorescence microscope (DMIL, Leica, Japan) to distinguish dead cells. Fluorescence imaging A Lumina IVIS Spectrum imaging system (PerkinElmer, USA) was used to obtain the in vitro fluorescence images (λex = 790 nm and λem = 810 nm). IRFes containing different concentrations of IR780 (10, 20, 40 and 80 µg mL−1) and Fe3O4@PLGA NPs (without IR780) were suspended in 24-well plates.
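The MTT readout described above reduces to a simple background-corrected normalization against untreated controls; the exact formula follows ref. 28 and is an assumption here, shown as a short Python sketch with illustrative absorbance values:

def viability_percent(od_treated, od_control, od_blank):
    # cell viability as background-corrected absorbance relative to controls
    return (od_treated - od_blank) / (od_control - od_blank) * 100.0

print(viability_percent(od_treated=0.62, od_control=0.95, od_blank=0.08))  # ~62%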
Female BALB/c mice (6-8 weeks old and weighing 20 g) were purchased from the Laboratory Animal Center of Central South University (China) and received care in compliance with the rules of the Department of Laboratory Animals, Central South University, China. All procedures were approved by the Ethics Committee of the Second Xiangya Hospital, Central South University, China. When in the logarithmic growth phase, 4T1 cells (1 × 10^6) were digested and injected into the right flank of the mice by subcutaneous injection to establish a breast cancer model; the tumor volume reached 100 mm3 after one week. To obtain in vivo fluorescence images, a Lumina IVIS Spectrum imaging system (PerkinElmer, USA) was used. Then, 200 µL of IRFes (10 mg mL−1) was injected into the tail vein of mice from 2 groups (n = 5): (1) with magnetic targeting and (2) without magnetic targeting. To detect the in vivo magnetic targeting effect of the IRFes, a magnet with a maximum magnetic field strength of 40.6 Gs, whose field gradually attenuated with distance, was applied next to the tumor for 4 h. The maximum magnetic field strength was 34.2 Gs at a distance of 3 mm from the magnet. In each group, the depth of the tumor was approximately 5 mm. Subsequently, important organs and tumors were imaged, and the fluorescence data were analyzed. In vivo antitumor therapy When the tumor volume reached 60 mm3, the tumor-bearing mice were randomly separated into five groups (n = 5 per group): (1) saline, (2) IRFes, (3) laser irradiation, (4) IRFes with laser irradiation, and (5) IRFes with laser irradiation and magnetic targeting; the different treatments were administered intravenously. All NPs were administered at a concentration of 10 mg mL−1, and the IR780 concentration was 0.177 mg mL−1. In group (5), a magnet with a maximum magnetic field strength of 40.6 Gs was placed close to the tumor 24 h after injection, where it remained for 4 h, and the tumor was then exposed to an 808 nm laser at 1.0 W cm−2 for 5 min. Temperature detection was performed with an infrared thermal imaging camera (FLIR C2, USA) every 30 s, and a time-temperature curve was generated. Tumor volume and mouse body weight were measured every other day for 14 days after photothermal treatment, and tumor growth and mouse body weight curves were plotted. On the 14th day, the mice were sacrificed by an overdose of pentobarbital sodium, the tumor tissues were removed, and the tumor volume was measured to evaluate the antitumor effect of the treatments in all groups. Tumor tissues and important organs (heart, liver, spleen, lung, and kidney) were used to create pathological slides and stained with hematoxylin and eosin (H&E). Analyses of tumor tissue apoptosis and proliferation were performed using TUNEL and Ki-67 immunofluorescence staining. Characterization The IRFes emulsion was successfully prepared and appeared uniformly green. A TEM image showed that the IRFes had smooth and uniformly spherical shapes and revealed the presence of iron particles, indicated by many black particles proportionally well embedded into the spherical shell (Fig. 1a). A dynamic light scattering (DLS) system detected that the size distribution of the IRFes was almost symmetrical, and the mean diameter was 334 nm (PDI = 0.042) (Fig. 1b). The surface zeta potential was −1.55 mV (Fig. 1c). To check the magnetization of the IRFes, we applied an external magnetic field.
Aer 3 min, a large number of nanoparticles in PBS had gathered in the direction of the magnet, and almost all of them were aggregated aer 1 h, indicating that these nanoparticles display remarkable magnetic responsiveness (Fig. 1d). Furthermore, as shown in Fig. 1e, the nanoparticles in the owing state of the tube were also attracted by the magnet to the magnetic eld side, and many black nanoparticles could be seen in the magnetic eld side aer 60 s, demonstrating that the nanoparticles would have superior magnetic responsiveness in the uid of the tumor blood vessels and were unchanged in the liquid state. The absorption spectra of IRFes (10 mg mL À1 ) with the various components are shown in Fig. 1f. IR780 with an encapsulation efficiency of approximately 48.26 AE 2.11% and a loading efficiency above 1.77%. The Fe content was 92.28 AE 3.20 mg mL À1 . The spectrum of the PLGA NPs had a straight line at 400-900 nm, indicating that they did not absorb light. The absorbance curves of the Fe 3 O 4 NPs at 400-900 nm were oblique and without an obvious peak, which indicated that they had some ability to absorb light, while the free IR780 had a high and sharp peak at 780-790 nm, which indicated strong absorption and emission capabilities in the near-infrared region. However, the absorption peak of the IRFes, obtained by UV spectrometry, showed a redshi of approximately 795 nm compared with the absorption peak of 780 nm for the free IR780, which was attributed to the introduction of the auxiliary color group, Fe 3 O 4 , affirming the successful loading of IR780 and Fe 3 O 4 onto the IRFes. These ndings indicate that IRFes are excellent photoabsorbing agents in the near-infrared region. Colloidal stability of IRFes The IRFes were resuspended in PBS or 10% FBS for 7 days, and the size and zeta potential were determined every day by a Malvern size analyzer used to study colloidal stability. The size distribution measured by a dynamic light scattering (DLS) system was neither larger nor smaller in PBS compared to the distribution in 10% FBS (Fig. 2a). In addition, the zeta potential changed little over time (Fig. 2b), revealing that the NPs had outstanding colloidal stability. Hence, it was highly anticipated that the stability of the NPs aer intravenous injection would have prolonged circulation in vivo. Thermal stability and photostability of IRFes The thermal stability of the IRFes was studied by comparing the IRFes to the traditional PTT reagent ICG, which has been approved by the US Food and Drug Administration. As shown in Notably, IRFes could increase beyond 42 C aer 4 cycles of laser irradiation, which hinted that the IRFes can be efficient PTT agents. Nevertheless, the effect on the free ICG was diminished, and the temperature decreased below 42 C. Therefore, the IRFes have become promising therapeutic agents for tumors, relying on their preeminent PTT properties and prominent light photostability. In addition, the poor stability in aqueous solutions, inferior photodegradation and high levels of thermal degradation led to the cessation of IR780 iodide use in clinical applications as an NIR tracer. 25,29 Some studies have been dedicated to encapsulating IR780 iodide into a variety of nanomaterials to overcome those shortcomings. 25,26,29 To determine whether PLGA has the ability to defend IR780 from degradation, we measured alterations in the near-infrared uorescence signal of IR780 and the IRFes over time. 
The NIR uorescence signal from IR780 and the IRFes gradually declined with time ( Fig. 2d and e). However, the signals from the IRFes demonstrated a much slower decline than that shown by IR780 at different time points, namely, 0, 4, 12, and 24 hours. Notably, compared to pure IR780 in solution, IRFes encapsulating IR780 into the PLGA shell seemed to effectively prevent the internal IR780 from degrading and enhance its long-term stability. IRFes have been used for all subsequent studies on the basis of their stable physicochemical and photothermal properties. In vitro PTT effect The temperature changes aer irradiation are shown in Fig. 3a and b. The point at which the IRFes maximum temperature reached 55 C was greatly increased as it was for the mixture of IR780 and Fe 3 O 4 NPs, for which the maximum temperature was 53.3 C. Likewise, the maximum temperature increases were observed for IR780, and ICG at 50.5 and 48.2 C, respectively. Moreover, compared to other components, the temperature increase of the Fe 3 O 4 NPs was gradual and steady to 38.2 C. Compared to the ICG response to laser irradiation, IR780 was more sensitive and rapid, with a steep temperature curve that increased more rapidly. Due to the superposition of the heating effect of IR780 and the Fe 3 O 4 NPs, the temperatures of IRFes were slightly higher than that of IR780 individually and showed almost the same PTT efficiency and efficacy as that of IRFes, which further proved that the PLGA shell rarely affected the PTT effect. Notably, the highest temperatures for 0, 0.5, 1.0, and 2.0 mg mL À1 IRFes were 25.4, 35.2, 42.2, and 55.0 C (Fig. 3c), respectively, indicating that the PTT properties of the IRFes were signicantly concentration-dependent. The photothermal conversion efficiency (h) was evaluated by 808 nm laser irradiation (1.0 W cm À2 ) of 1.0 mg mL À1 IRFes, according to our previous study. 30 The linear regression curve between ln(q) and the cooling time could be used to infer s, and the h value of the IRFes was determined to be 37.5% (Fig. 3d). In vitro antitumor activity Low cytotoxicity is the most vital feature of an in vivo tracer. The cell viability that was determined without the presence of IRFes was not signicantly reduced regardless of the exposure to laser irradiation (Fig. 4a, p > 0.05). No apparent toxicity was observed, and the dose-dependent 4T1 cell viability remained above 90%, even at concentrations as high as 0.8 mg mL À1 (p > 0.05). The results showed no explicit relationship between IRFes concentration and cell viability, suggesting rare NP-induced cytotoxicity, signicantly supporting their further use in vivo. At the same IRFes concentrations, the cell viability of the NIRirradiated group was signicantly reduced compared to that of the unirradiated group, indicating a favorable photothermal cell toxicity of the NPs (*p < 0.05). In addition, compared to nonmagnetic targeting, magnet targeting attracted more IRFes to the cells, where they rmly remained and were phagocytized, leading to an observable decrease in cell viability (*p < 0.05) and indicating that the binding of magnetically targeted IRFes for PTT greatly lowered cell viability. Then, the phagocytosis assay of the IRFes labeled with red uorescence DiI dye was performed using CLSM. As shown in Fig. 4b, the DiI uorescence of the magnet targeting group was signicantly brighter than that of the group without magnetic targeting, as magnetic targeting greatly promoted the aggregation and endocytosis ability of the IRFes. 
Next, we investigated the in vitro photothermal effect of the IRFes in 4T1 cells. As shown in Fig. 4c and d, the cytotoxicity of the IRFes-only group and the laser-only group was exceedingly low, indicating that neither had the ability to kill cells alone. However, once the IRFes group was exposed to the 808 nm laser, the number of dead cells increased significantly and the cell viability was reduced to 28% (*p < 0.05), as the IR780 in the IRFes converted the absorbed 808 nm laser light into heat and killed numerous cells. In particular, when an external magnetic field was added, the cell viability of group IV was as low as 5% (*p < 0.05), which again proved the outstanding magnetic targeting of the IRFes and its acceleration of their aggregation to the cells. Fluorescence imaging The NIRF images and fluorescence signal intensities (SI) of the IRFes at different concentrations at room temperature (25 °C) are shown in Fig. 5a and b. Without IR780, the IRFes had no detectable fluorescence signal; however, as the concentration of IR780 in the IRFes was increased, the NIRF signal intensity also gradually increased. These findings indicate that the IRFes possess brilliant and remarkable NIRF properties, which makes them advanced NIR tracers for tumors. Studies have shown that the addition of a magnetic field mediates specific site targeting and is an effective targeting method. 31 Based on DiR-labeled IRFes, we studied the distribution of the IRFes in mice by imaging both in vivo and in ex vivo organs. By observing and detecting the DiR fluorescence, we could compare the differences in the distribution and duration of the IRFes in various organs and, in particular, tumor tissues. The mice were intravenously injected and divided into 2 groups: one with magnetic targeting and the other without magnetic targeting. As shown in Fig. 5c, when magnetic targeting was added, the fluorescence intensity of the tumor was obviously stronger than that of the group without magnetic targeting. Moreover, the enhanced signals were maintained in the tumor for as long as 24 h, which indicates that with the added magnetic mediation, more IRFes were targeted to the tumor site. Then, we imaged and quantified the ex vivo organs and tumors. The NIRF images and the fluorescence SI are shown in Fig. 5d and e; the early high accumulation in the lung and liver met expectations, because the macrophage system was expected to clear foreign matter within 24 h. 32 In addition, a tremendous increase in tumor fluorescence was observed in the magnetic targeting group in contrast to that observed in the group without magnetic targeting; nevertheless, no significant changes were found in other organs. This phenomenon was related to the EPR (enhanced permeability and retention) effect. The microvascular endothelium in normal tissues is dense and structurally intact, such that macromolecules and lipid particles cannot easily penetrate the vessel wall, while solid tumor tissue has wide gaps in the blood vessel wall, incomplete structure, and a lack of lymphatic reflux, resulting in selective high permeability and retention of macromolecular substances and lipid particles. The nanocarrier macromolecules were concentrated in the tumor region through the EPR effect in a passively dispersed manner, where they played a passive targeting role. In addition, although the EPR effect acts relatively slowly, once nanocarriers enter the tumor site it is rigorously maintained over time and regardless of concentration. 17
Under the dual targeting effects of magnetic attraction and the EPR effect, the intratumoral fluorescence intensity of the magnetic targeting group was higher than that of the nonmagnetic targeting group. These results indicate that magnetic targeting is conducive to the accumulation of magnetic NPs in tumors, making them a potential therapeutic reserve force for prospective combination therapies of tumors. In vivo cytotoxicity For the further use of IRFes as magnetically targeted photothermal therapy agents, it is necessary to clearly discern their potential toxicology in vivo. Consequently, a toxicity evaluation of the IRFes in vivo was performed, including biodistribution and histological analyses and body weight measurements. BALB/c tumor-bearing mice were intravenously injected with a dose of IRFes (200 µL, 10 mg mL−1). First, the biodistribution was determined, as described in the previous section, through fluorescence imaging (Fig. 5c-e). Second, histological examination of the major organs (heart, liver, spleen, lung and kidneys) of the mice, stained with hematoxylin and eosin (H&E) 14 days after injection with IRFes, showed no notable inflammatory lesions or organ damage in any major organ compared with the conditions in the control mice. Most importantly, no necrosis was found in any group (Fig. 6a). Third, body weight fluctuation is a reference index for inspecting the toxicity of the IRFes. No significant body weight loss was observed for any of the experimental groups over the 14 day period (Fig. 6b), proving that the given dose of IRFes rarely induced toxicity in vivo and that IRFes can serve as injectable therapeutic agents in vivo. In vivo anticancer efficacy On the basis of the outstanding in vitro magnetically targeted PTT results, we then investigated the in vivo PTT therapeutic effect of the IRFes. When the tumor volume reached 60 mm3, the tumor-bearing mice were randomly divided into 5 groups (n = 5): (1) saline; (2) laser irradiation; (3) IRFes; (4) IRFes with laser irradiation; and (5) IRFes with laser irradiation and magnetic targeting. The IRFes were intravenously injected into the tumor-bearing mice (200 µL, 10 mg mL−1). As shown in Fig. 7a and b, the infrared thermal images of the IRFes show the results of in vivo PTT. Under laser irradiation only, the local temperature of the tumor did not increase significantly, the same as in the group injected with only saline. In group (3), the temperature did not change remarkably, and the maximum temperature was 27.0 ± 0.5 °C. The temperature of the other body parts of the mice not exposed to laser irradiation increased negligibly. With prolonged laser irradiation, the surface temperatures of the tumors in groups (4) and (5) gradually increased, whether or not magnetic attraction was added, and the maximum temperatures reached 50.0 ± 1.5 °C and 56.0 ± 1.1 °C after 5 min. On the basis of the temperature of the thermal effects, the thermotherapy was divided into warm thermotherapy (40-43 °C) and hyperthermia (43-70 °C) categories. Warm thermotherapy was used throughout the body, and hyperthermia was used locally in the tumor. The experimental results indicated that the magnetic targeting of the IRFes could convert the absorbed NIR light into heat energy, raising the tumor surface temperature to over 50 °C, and this high temperature was limited to the tumor; therefore, the therapy was deemed a local hyperthermia treatment.
When the temperature exceeded 43 °C for only 5 min, tumor cell necrosis was induced. Hence, the magnetically targeted IRFes were determined to have the ability to increase the local NP concentration and to be splendid photothermal agents in vivo. Then, we evaluated the antitumor effectiveness of PTT by tumor volume and histological analysis. The tumor sizes were measured every other day after exposure to the treatments. Fig. 7c and d show the changes in mouse tumor volume for the different treatment groups. In the (1) saline-treated group, the tumor volume increased 13-fold by day 14 compared with that on day 0 of injection. In addition, the tumor volumes for groups (2) and (3) increased 12-fold and 11.88-fold, respectively, a negligible difference, suggesting that neither 808 nm NIR laser irradiation nor nanoparticles alone suppressed tumor growth. In contrast, the tumors in the mice of group (4) were distinctly small, having increased only 4-fold, showing that the EPR effect of the IRFes also inhibited tumors under 808 nm laser irradiation. Taken together, these results indicate that IRFes absorb near-infrared light and release a large amount of heat in local tumors, findings that are consistent with the results of the in vitro experiments. Temperatures exceeding 42 °C induced coagulative necrosis of the tumor cells. Most importantly, the tumors in group (5), exposed to IRFes + laser irradiation + magnetic targeting, showed the optimal treatment effects: the tumor volume increased only approximately two-fold, because the magnetic targeting led to the accumulation of more IRFes in local tumors and the EPR effect prolonged the retention of these IRFes, thereby enhancing the photothermal therapy; this explanation corresponds to the results of the fluorescence imaging experiment. The synergistic effect of the aggregation caused by the magnetic targeting and the photothermal therapy meant that low doses of IRFes therapeutic agents could inhibit tumor growth tremendously. Additionally, the tumor sizes in the mice of group (5) on the 14th day were much smaller than those of the other groups, as shown in the tumor photographs in Fig. 7c. The photothermal efficacy was further verified by H&E staining, TUNEL and Ki-67 immunofluorescence assays (Fig. 7e). The H&E staining of the tumor sections further confirmed that the cells in the tumors from the mice injected with the NPs and exposed to 808 nm laser irradiation were severely damaged, with no structures apparent in the homogeneous red staining, whereas the tumor cells in the control groups largely maintained normal morphologies with complete membrane and nuclear structures. The TUNEL assays confirmed that there was more green fluorescence in group (5) cells than was observed for the other groups. The Ki-67 staining of proliferating cells was visualized as green fluorescence, and the results showed that group (5), under 808 nm laser irradiation and magnetically targeted therapy, had the least green fluorescence. The results also showed that magnetically targeted combination therapy could effectively and efficiently suppress the growth of tumor cells and enhance the apoptosis of tumor cells.
The distinct antitumor effect of the IRFes in vivo was due to the following: the magnet led to greater accumulation and treatment application of the IRFes in the tumors, and, even at low concentrations, the IRFes absorb near-infrared light rapidly and transform it into enormous heat energy, thereby increasing the cumulative necrosis of tumor cells. Conclusions In summary, a multifunctional nanocomposite, IRFes, with outstanding biocompatibility in a physiological environment, was successfully prepared by a simple single-emulsion method. Accordingly, these IRFes were used as a novel treatment agent for in vivo fluorescence imaging. Importantly, magnetic targeting of the IRFes more effectively killed cancer cells and enhanced the photothermal therapy that kills cancer cells upon exposure to an 808 nm NIR laser; however, neither the IRFes nor the laser alone significantly influenced cancer cells. This versatile IRFes nanocomposite has tremendous potential for directing magnetically targeted PTT with spatially/temporally controlled NIRF imaging. The construction of intelligent IRFes nanotherapeutic agents will open up a new way to efficiently monitor the cancer treatment response and effectively protect surrounding normal tissues from damage. Conflicts of interest The authors declare that they have no conflicts of interest.
2019-11-28T12:48:51.587Z
2019-11-19T00:00:00.000
{ "year": 2019, "sha1": "73e7d62df68f7fbf10f6d5b224d1f340babdd7f5", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2019/ra/c9ra08281f", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7942574575f5a05955c30ab894b1496136ef95a2", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science" ] }
216515829
pes2o/s2orc
v3-fos-license
Emission Embodied in International Trade and Its Responsibility from the Perspective of Global Value Chain: Progress, Trends, and Challenges : In the context of economic globalization and production fragmentation, the boom in intermediate and processing trade has made EEIT (emission embodied in international trade) accounting and the recognition of its responsibility more and more complicated, and the drawbacks of traditional gross value statistics more and more conspicuous. The rapid development of global value chain theory in recent years has given rise to a decomposition framework for the trade flows in a country's exports, based on the global value chain, which offers new methods to study EEIT and allocate its responsibility. The combination of global value chain accounting and EEIT research can offer new ways to research EEIT transfer and allocate its responsibility. Utilization of this technique can help understand each country's "common but differentiated responsibility" in emission reduction. Finally, aiming at the knowledge gaps in current analysis, this paper attempts to discuss the trends, and possible challenges, in research on EEIT and its responsibility based on global value chain theory. Introduction Owing to the rapid development of the global economy and industrialization, the large quantity of emissions resulting from human activities has become a leading cause of global climate change. In August 2018, the IPCC released the special report "Global Warming of 1.5 °C" [1]. This report encouraged the acceleration of global action to combat climate change and supported the move towards more stringent global emission reduction steps. Combating climate change is a significant issue in the current international political economy, with international trade in goods, services, and capital, and its attendant emission responsibilities, acting as bones of contention among countries. The goods, services, and capital of each country are transferred among countries through international trade. Moreover, the emissions embodied in these trade flows, i.e., EEIT (emissions embodied in international trade), are also transferred among countries, changing the global economy and environment. Any effort to reduce emissions implies designing policies that allocate responsibility to the actors involved in causing these emissions [2], and research on EEIT is indispensable for distinguishing these responsibilities. Therefore, EEIT and emission responsibility have evolved into two important issues in the international trade and carbon emission research areas, attracting huge attention. The measurement of EEIT mainly uses life cycle assessment (LCA) and input-output analysis (IOA). LCA is a bottom-up method based on the product life cycle, providing specific information for policy-makers. LCA requires vast detailed data and high data integrity; however, there is no unified database at present. Moreover, systematic errors arise in LCA as a consequence of insufficient system-boundary definition, making it difficult to calculate emissions using LCA. IOA, by contrast, can completely capture the input-output information of the entire production process and has relatively low data requirements. Therefore, IOA is the main method used in current EEIT research [3,4]. Numerous studies use IOA to calculate EEIT, and almost all of them rely on traditional trade statistics.
These traditional trade statistics use the gross value of goods and services as the statistical caliber, while processing trade means that intermediate goods and services may flow across borders multiple times, resulting in the "double counting" problem [5][6][7]. Current emission responsibility is allocated according to EEIT calculations based on traditional trade statistics and is thus susceptible to the "double counting" problem, so it clearly cannot fairly and truly reflect the real benefits and environmental costs of international trade among countries. This problem is particularly acute in the case of developing countries, a majority of which are dominated by processing trade. To establish a fair and effective responsibility allocation mechanism, it is essential to clarify how much EEIT is caused through which route in each country, and global value chain (GVC) theory is an effective method for determining this information. The rise and evolution of GVCs has become an important feature of current international trade, as a consequence of the gradual deepening of the global division of labor. The previously mentioned deficiencies in traditional trade statistics render them unable to reflect the current situation of international trade featuring GVCs. New methods are needed to measure international trade flows in the context of fragmented global production [7]. The WTO and OECD jointly presented such a new method based on the concept of "trade in value-added" (TiVA) [8], and since then, the study of GVC accounting based on TiVA has been further developed. The deficiency of traditional trade statistics grows in tandem with trade globalization, and the progress of GVC accounting provides new solutions for relevant research. In September 2018, China's State Council Information Office issued "The Facts and China's Position on China-US Trade Friction", deeply analyzing the real China-US trade situation from the GVC perspective [9]. GVC theory has become increasingly important in the negotiation of international trade disputes and global climate change. Predictably, the relationship between international trade, EEIT, and GVCs will become increasingly close in the future. There are many studies about EEIT and emission responsibility, and some of them offer detailed reviews and analyses of relevant research [10][11][12][13][14][15][16][17][18][19][20]. However, most of the existing reviews rarely make a comprehensive analysis of both EEIT and its responsibility, instead focusing on only one of them. Emission responsibility is closely intertwined with EEIT accounting, especially in the context of economic globalization and production fragmentation. A comprehensive analysis of EEIT, emission responsibility, and GVC theory may generate new solutions for global actions to reduce emissions and combat climate change, but a detailed review of related research does not yet exist. This paper provides a detailed review of the EEIT, emission responsibility, and GVC related literature from a global perspective to clarify progress in the domain and to discuss the trends and challenges of research on EEIT and its responsibility from the GVC perspective. The remainder is organized as follows: progress in EEIT research is analyzed in Section 2; progress in emission responsibility research is shown in Section 3; progress in GVC accounting and its utilization in EEIT and emission responsibility research is provided in Section 4; Section 5 provides the conclusion and further perspectives.
Emission Embodied in International Trade The concept of "embodiment" was first presented by the International Federation of Institutes for Advanced Studies (IFIAS), which used "embodied energy" to represent the energy required, directly and indirectly, for a system to produce a specific good or service. "Embodied emission" is an extension of this concept and is used to measure the emissions produced by a product or service throughout its whole production process. With the rapid development of international trade and the mass adoption of global emission reduction, EEIT has become a more and more common research subject. Zhang et al. (2019) provided a bibliometric analysis of EEIT research and pointed out that relevant studies mainly focus on two aspects: the calculation of EEIT and the decomposition analysis of EEIT driving factors [21]. The relevant literature is reviewed according to this classification in the following. The Calculation of EEIT The calculation of EEIT is still at the estimation level, and IOA is the most common method. IOA was first proposed by Leontief and is used to analyze the balance of product supply and demand among various sectors by compiling IO tables. In the late 1960s, IOA was applied to energy and environmental research and became a fundamental approach in this domain [22][23][24]. The IO model can be roughly divided into the single-region input-output (SRIO) model and the multi-region input-output (MRIO) model. The SRIO model is used to study the emissions embodied in the foreign trade of a country (or region) and treats all other countries and regions as a single entity. According to the processing method for imports, the SRIO model can be further classified into the competitive IO model and the non-competitive IO model. Based on the assumption that imports and domestic products are at the same level of production technology, the competitive IO model makes no distinction between imports used as intermediates and those in final use. Because of the easy and direct acquisition of competitive IO tables, the competitive IO model was commonly used in early studies. For example, Schaeffer et al. (1996), Julio et al. (2004), and Mongelli et al. (2006) used this method to calculate the EEIT of Brazil, Spain, and Italy, respectively [25][26][27]. The non-competitive IO model distinguishes import products from domestic products, and in the absence of data, an assumption about the composition of imports is needed when using this model. However, import data are rarely divided along these lines, so compiling sorted data (or applying an assumed proportion of intermediate and final use imports to unsorted data) requires a large workload. Because it is generally believed to be unreasonable for a competitive IO model to ignore the influence of imports in the production process when calculating EEIT, the use of the non-competitive IO model in calculating EEIT has increased significantly [28][29][30][31][32][33][34]. One study analyzed the embodied emissions of China's exports based on both competitive and non-competitive IO models, revealing that the estimates of the competitive IO model were higher than those of the non-competitive model; this difference was attributed to the transition of intermediate products between export and import [35]. The SRIO model makes another implicit assumption: that the energy consumption coefficient (emission coefficient) of imported products is the same as that of domestically produced products.
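As a concrete illustration of the IO machinery behind these SRIO studies, the standard environmentally extended Leontief calculation multiplies sectoral emission intensities by the Leontief inverse and the export vector. The two-sector Python sketch below uses toy numbers, not data from any of the cited studies:

import numpy as np

A = np.array([[0.20, 0.10],        # domestic intermediate-input coefficients
              [0.30, 0.25]])
f = np.array([1.2, 0.8])           # direct emissions per unit of gross output
exports = np.array([50.0, 30.0])   # export final demand by sector

L = np.linalg.inv(np.eye(2) - A)   # Leontief inverse (I - A)^-1
x = L @ exports                    # gross output needed to deliver the exports
print(f @ x)                       # emissions embodied in exports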
Simply put, it assumes that the imported products are produced using the same production technology and energy input as domestically produced products. Lin et al. (2010) used China's emission intensity instead of the relevant import producers' data to calculate the emissions avoided by imports [36]. Although the emissions avoided by China's imports can be estimated based on this assumption, the amount tends to be overestimated, since China's emission intensity is significantly higher than that of most producers of imports to China. As the products imported by a country come from many countries and regions in the world, different production technologies among them lead to different energy consumption and emission coefficients. For the sake of higher accuracy, scholars have improved the SRIO model in several ways, such as dividing imports into intermediate inputs and final consumption goods or calculating the emissions embodied in imported products using the exporting countries' energy consumption and emission coefficients. Yan et al. (2010) used the typical-country alternative method, namely, using the emission intensity of a typical country or the average of several typical countries, to calculate the emissions embodied in the imports of China [37]. Another study directly used the relevant data of exporting countries to calculate the emissions embodied in the imports of China [34]. These improvements calculate the import embodied emissions more accurately to some extent; however, the presupposition of import homogeneity implicit in the SRIO model leads to inevitable inaccuracies in its results, most notably when estimating EEIT transfers between countries with obviously different technology levels and energy structures. Compared to the SRIO model, the MRIO model, which features in more studies, describes the industrial relations and trade links among various sectors of multiple countries from a global perspective. This model, however, is more complex than the SRIO model and requires higher data quality. The MRIO model can be divided into the bilateral IO model and the multinational IO model. The bilateral IO model studies the EEIT between two specific countries, which is more targeted and has certain reference significance for international trade policy-making. Shui et al. (2006) adopted this model to study the EEIT in China-US bilateral trade from 1997 to 2003 [38]. Li et al. (2008) used this method to study the EEIT between China and the UK [39]. Yin et al. (2010) and Zhan et al. (2014) also studied the EEIT between China and the United States [40,41]. Ding et al. (2018) calculated the EEIT of China with 219 trading partners to explore the impact of international trade on China's emissions and applied the "no-trade scenario" hypothesis to explore the influence of China's bilateral trade, as well as its possible contribution to global emissions [42]. Due to the international transfer of embodied emissions brought on by the current trend of trade globalization, however, a simple bilateral IO model cannot meet the demands of complex EEIT measurement in multinational trade, so there is a tendency toward the use of multinational IO models in current mainstream research. The multinational IO model calculates EEIT while differentiating the technology differences between domestic products and imported products from different regions, dividing imported products into final consumption and intermediate inputs [4]. Ahmad and others have applied this model [48]. In multinational IO models, final consumption is the only exogenous demand.
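A minimal two-region sketch of the MRIO logic described above follows (toy coefficients, not data from any cited study): the coefficient matrix is block-partitioned by region of supply and region of use, and emissions are attributed to the region whose final demand ultimately drives them.

import numpy as np

# Regions r and s, one sector each; A[i][j] is input from region i used per
# unit of output in region j, so off-diagonal entries are traded intermediates.
A = np.array([[0.20, 0.05],
              [0.08, 0.25]])
f = np.array([1.5, 0.6])       # direct emission intensities of r and s
Y = np.array([[40.0, 10.0],    # output of r consumed as final demand by (r, s)
              [ 5.0, 35.0]])   # output of s consumed as final demand by (r, s)

L = np.linalg.inv(np.eye(2) - A)
cbe = f @ L @ Y                # consumption-based emissions of r and s
print(cbe)                     # column k: emissions driven by region k's demand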
Hence, the intermediate consumption of domestic products and imports are endogenous variables, with the result that the emissions embodied in imports are attributed to the country/region that finally consumes the imports. However, in the complex global production system, imports may pass through multiple countries/regions before being used in final consumption, so tracking and counting the relevant data is difficult. In addition, there are still many uncertainties in the compilation of multinational IO tables. Sato (2014) made a detailed comparison of the results of research estimating the embodied emissions in China's import and export trade. There are significant differences between research results using SRIO models and MRIO models, and also certain differences among research results using SRIO tables. Additionally, the embodied emission accounting boundary differs according to the model used; even when using the same model, IO tables from different data sources with different processing methods, imports and emission and energy-related statistics handled in different ways, the number of industrial sectors, and the assumptions made in data processing and calculation can all make differences in the results [17]. Through a detailed review of the relevant literature, this paper finds that current research is still deficient in two respects. First, it is difficult to distinguish the sources and destinations of emissions embodied in processing trade because of incomplete data statistics regarding processing trade, and relevant research is still scarce [32,35,49]. Processing trade accounts for a large share of trade in emerging economies, and the analysis of emissions embodied in processing trade exports can more reasonably distinguish domestic and foreign emissions. Du et al. (2020) constructed an extended IO model that distinguishes processing trade from normal trade based on the benchmark IO tables and customs statistics and re-examined the embodied pollutants in China's exports. Moreover, they pointed out that the embodied air pollutants in China's exports would be overestimated by 12-22% without accounting for trade heterogeneity [50]. Studies calculating these embodied emissions are of great significance for the reasonable measurement of a country's trade benefits and corresponding emission responsibilities. Second, current EEIT measurement is mainly based on traditional gross value statistics. Following the globalization and fragmentation of production, processing trade and intermediate trade account for a greater share of international trade, and the drawbacks of traditional gross value statistics, namely the "double counting" problem, are further revealed. EEIT measurement based on gross value statistics cannot truly reflect the environmental cost of a country's participation in international trade and is thus not conducive to establishing a fair and reasonable emission responsibility allocation mechanism. The GVC accounting framework, based on value-added accounting, is a good way to avoid "double counting". In addition, the development of the GVC accounting framework allows international value-added trade to be traced in detail according to its source, destination, and transfer path, providing effective solutions for research on processing trade and the emissions embodied in it. The study of EEIT combined with GVC theory is an important direction for future research.
Research on EEIT has entered its peak period, and a large number of related studies have emerged. The increased attention paid to EEIT measurement has brought a corresponding gradual increase in attention to its influencing factors among scholars. Driving Factors of Emission Embodied in International Trade The methods for determining the driving factors of EEIT mainly include index decomposition analysis (IDA) and structural decomposition analysis (SDA), and both methods have been widely used in research on the driving factors of energy consumption and carbon emissions [51][52][53][54][55][56]. The advantage of the IDA model is that it has relatively low data requirements and operates in a relatively simple way. It cannot, however, further decompose the final demand structure, intermediate input technology, and other factors. The SDA method is based on the IO model and is intrinsically related to studies of industrial linkages and final demand effects. It provides decomposition analysis in more detail, although it has higher data requirements. IDA was used earlier than SDA in energy research, and although they differ in their origins and methodology, they share the basic concept of decomposing composite indicators into impacts related to a number of predefined factors [57]. Rose et al. (1996) pointed out that the two methods are related [58]. Hoekstra et al. (2003) believed that the two methods are consistent in the decomposition method, but different in the model construction of the comprehensive index [59]. Lenzen (2016) considered SDA the extension of IDA in mathematical form [60]. Another study compared the origin and application of the two methods in detail, pointing out that the fundamental difference between them is related to their origin and theoretical core: IDA originates from energy system analysis, so it models energy consumption and emissions from an energy system perspective, while SDA is based on the IO model and models energy consumption and emissions from an economic perspective. This means IDA has a stronger link to energy system research, such as energy balances and energy flows in the economy, and is often used to study changes in energy consumption or emissions and their drivers. SDA, by contrast, has a stronger connection with the economic system (such as the supply and demand connections of the economy), and so is usually used to study the side effects of production technology and demand, as well as trade-related issues [61]. Most studies on the driving factors of EEIT are based on one of these two methods [37,46,57,[61][62][63][64][65][66][67][68][69][70][71]. The factors driving EEIT in the relevant studies can be summarized into five categories: trade scale, production structure, energy efficiency, emission intensity, and technical factors. It is generally believed that the trade scale factor is the main factor promoting the increase in EEIT, that energy efficiency and technical factors are the main factors reducing EEIT, and that production structure, energy efficiency, and emission intensity are the main factors causing the differences between developed and developing countries. The study of the decomposition of EEIT driving factors is conducive to the adjustment of trade structure and the reduction of EEIT, which in turn is of great significance for global emission reduction. Currently, IDA and SDA related theories and calculation methods are relatively mature.
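To illustrate the decomposition logic, emissions embodied in exports can be written as E = e · x, with e the total (direct plus indirect) emission intensities and x the export volumes; the change in E between two years then splits completely into an intensity effect and a scale effect. The sketch below uses the symmetric two-polar average, one standard form of complete SDA, with toy numbers:

import numpy as np

def sda_two_factor(e0, x0, e1, x1):
    # decompose dE = e1.x1 - e0.x0 into intensity and scale effects;
    # averaging the two polar decompositions makes the parts sum exactly to dE
    de, dx = e1 - e0, x1 - x0
    intensity_effect = 0.5 * (de @ x0 + de @ x1)
    scale_effect = 0.5 * (e0 @ dx + e1 @ dx)
    return intensity_effect, scale_effect

e0 = np.array([1.2, 0.8]); e1 = np.array([1.0, 0.7])      # intensities fall
x0 = np.array([50.0, 30.0]); x1 = np.array([65.0, 40.0])  # exports grow
print(sda_two_factor(e0, x0, e1, x1))  # (-15.0, 24.0), summing to dE = 9.0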
In studies on the driving factors of energy consumption and emissions, both decomposition methods can achieve complete decomposition. On this basis, EEIT driving factors can be further refined in future studies. Furthermore, most present studies are at the national level. As production fragmentation continues, however, industries and regions will become the direct objects of international trade, so there is a growing demand for research at the industrial and regional levels to follow this trend. In addition, the GVC accounting framework based on TiVA and its decomposition framework provide a way both to clarify the real value-added in international trade and its embodied emissions and to distinguish the amount and transfer path of the emissions a country generates for the use of other countries via international trade. Studies of the path decomposition of EEIT based on GVC theory can additionally provide a sufficient and necessary supplement for the clear and systematic characterization of EEIT transfer and its features, which may also be an important trend in future studies on EEIT. There are many studies on the measurement of EEIT and its driving factors. Especially in the context of the global emission reduction efforts of recent years, the debate on the allocation of emission responsibilities among countries has popularized this research in the fields of carbon emissions and global climate change. Sato (2014) and Davis et al. (2010) pointed out that the problem of EEIT is not just its scale, but the lack of a mechanism to account for emissions generated in one country and consumed in another [17,72]. Simply put, the allocation of responsibility for EEIT is a much more important problem than the amount of EEIT generated, so emission responsibility is another important topic in the study of carbon emissions and climate change. Research on Global Emission Responsibility Since the adoption of the United Nations Framework Convention on Climate Change (UNFCCC) in 1992, the issue of emission responsibility allocation has aroused wide concern among countries and has become an important topic in the field of climate change. The discussion on emission responsibility has become increasingly fierce following the development of the post-Kyoto climate negotiations. The Rio Declaration on Environment and Development, the UNFCCC, the Kyoto Protocol, and other international environmental conventions all agreed that developed and developing countries carry "common but differentiated responsibilities" in combating climate change. The specific implementation of these "common but differentiated responsibilities" has thus become the focus of international debate. Any research on emission responsibility involves a series of theoretical problems: when products are produced to meet foreign needs, who is responsible for the environmental problems stemming from the production of these exported products? Is it the exporting country's responsibility to urge the exporting company to improve its production process? Or is it the importing country's responsibility to create environmentally friendly consumer preferences? Or can responsibility be split proportionately between exporting and importing countries?
Scholars have continued conducting in-depth research on these questions, forming four corresponding principles of emission responsibility which can be applied to various objects of accountability, namely, production-based responsibility (PBR), consumption-based responsibility (CBR), income-based responsibility (IBR), and shared responsibility (SR). In this paper, the relevant literature is classified in detail according to this classification.

Production-Based Responsibility

The UNFCCC first put forward the concept of PBR in 1992, asserting that direct emitters are responsible for their emissions. According to this principle, the emission responsibility for export products shall be borne by the exporters. Because each country is responsible for all of its domestic emissions, this principle is also known as "territory responsibility" or "apanage responsibility". Currently, PBR measures a country's emissions mainly according to the 2006 IPCC Guidelines for National Greenhouse Gas Inventories. PBR is more easily calculated and applied than other theories of emission responsibility. These strengths have caused it to be adopted by most current environmental assessments and decision-makers to allocate emission responsibility. PBR, however, also has inherent defects in fairness and is inefficient at reducing global emissions. The emission reduction model established by the UNFCCC and the Kyoto Protocol, based on this principle, has been widely criticized, and international climate negotiations have repeatedly stalled [74]. The drawbacks of PBR identified in existing research can be roughly divided into a few types. First, the fairness of this principle has been widely questioned. Under this principle's accounting mechanism, developing countries bear responsibility for a large amount of emissions on behalf of developed countries because of their relatively low-end position in the international division of labor and their economic structure, resulting in emission responsibilities far beyond their scope and capacity. Many scholars have pointed out that although final demand is one of the main drivers of environmental pressure, PBR makes no differentiation of final consumers, which is unfair to developing countries [75][76][77][78]. Second, this principle increases carbon leakage, as it incentivizes developed countries to transfer carbon-intensive industries or production chains to developing countries without emission reduction constraints. These developed countries subsequently meet their own needs via import substitution, weakening the effect of emission reduction. These factors make PBR unconducive to global emission reduction [20,79,80]. Third, according to PBR, the emissions produced in international public airspace or waters by international transportation are not included in any country's emission responsibility. Emissions of this type account for about 3% of global emissions, and this defect is bound to become more and more prominent owing to the rapid development of international trade [20,80,81]. Fourth, the principle may harm global emission reduction and the effective implementation of climate change agreements, and it does little to encourage low-carbon consumption and lifestyles. When there are national borders between production and consumption activities, consumers in importing countries are largely unaware of the impact of their consumption activities on other countries, global resources, and the environment. Thus, PBR, in that case, does not promote low-carbon consumption patterns. Rothman et al.
(1998), based on a study of the environmental Kuznets curve, pointed out that PBR is not conducive to guiding environmentally friendly consumption patterns in developed countries, where high-emission consumption patterns are maintained through imports [82]. In addition, PBR is detrimental to net carbon exporters, reducing their incentive to participate in global emission reduction [73,74,80]. To overcome the above disadvantages of PBR, especially the carbon leakage it accelerates, new responsibility allocation mechanisms are needed. Some scholars have argued that new allocation mechanisms that include responsibility for indirect emissions are needed to correct PBR's main problem: only direct emissions are included in the PBR calculation, while indirect emissions are ignored, leading to carbon leakage [20,72]. Eder et al. (1999) indicated that indirect emissions can be generated by two distinct and opposite driving factors: supply and demand [83]. Supply-driven emissions correspond to downstream responsibility, under the assumption that the original supplier is responsible for all emissions generated downstream by its initial input; this is also called income-based responsibility. Demand-driven emissions, by contrast, correspond to upstream responsibility, indicating that consumers should shoulder all responsibility for emissions generated upstream by their demand, which is widely known as consumption-based responsibility.

Consumption-Based Responsibility

Based on the "ecological footprint" theory, CBR holds that final consumption is the most important driver of environmental pollution, so the solution to environmental problems requires the formation of environmentally friendly consumption preferences. Products and services exist to meet the needs of consumers, and the corresponding emissions should be borne by consumers [84,85]. Many scholars have used CBR to calculate the emission responsibilities of various countries [86][87][88][89][90][91][92][93]. Relevant studies showed that developed countries and regions bear more emission responsibility under CBR than under PBR, while developing economies, like China and India, face significantly less emission reduction pressure under CBR. Most scholars believed that CBR allocates emission responsibility more fairly than PBR by assigning more responsibility to high-consumption countries, most of which are developed countries. One study argued that CBR is conducive to clarifying the impact of developed countries' final demand on the emissions of developing countries; thus, it reveals the international transfer of emission responsibility and reduces carbon leakage. It is also conducive to the formation of comparative advantages in low-carbon products and the formation of environmentally friendly consumer preferences [87]. CP/RAC (2008) pointed out that CBR incorporates all consumption-related emission sources, making up for the deficiency in PBR's allocation method. It also increases the willingness and enthusiasm of developing countries to participate in global emission reduction and facilitates international cooperation between developed and developing countries, for example, technology transfer and the Clean Development Mechanism (CDM). Furthermore, CBR contributes to the formulation of sustainable consumption and production policies and climate policies at the national and, according to Larsen et al. (2009), regional levels [94,95]. However, there are also many doubts about CBR. Spangenberg et al.
(2002) believed that emissions are not completely determined by consumption but are also affected by producers' decisions, which in turn have a great impact on consumers' purchase decisions [96]. Bastianoni et al. (2004) and Cadarso et al. (2012) indicated that producers lack the direct impetus to reduce emissions under CBR. Consumers tend not to buy low-carbon products without sufficient incentive policies, which further weakens producers' motivation to reduce emissions. Producers may then abandon cleaner and more efficient production methods, weakening the effect of global emission reduction [81,97]. Peters (2008) and others further pointed out that even if a country takes measures to restrain emissions from the consumption side, these domestic measures cannot restrain the export sectors of other countries; and since exported products are not consumed in the exporting country, the exporting country will not take the initiative to control this sort of emission [79,98]. Furthermore, under CBR, developing economies tend to produce carbon-intensive export products in pursuit of profit expansion, which is not conducive to global emission reduction. Additionally, the calculation of CBR is more complicated and requires more assumptions and data than the calculation of PBR, so its uncertainty is greatly increased and its applicability is lower [73]. CBR includes indirect emissions in its accounting, which mitigates the "carbon leakage" problem to some extent compared to PBR. CBR measures the emissions that stem from the final demand for goods and services in a country. Applied to a certain product, it calculates all the emissions generated in the product's supply chain to deliver the product to final demand, namely, its demand-driven upstream responsibility. CBR has since attracted considerable attention and discussion as a substitute for PBR. It is important to remember, however, that although these two methods are homologous, CBR still ignores downstream responsibility.

Income-Based Responsibility

Although CBR can force downstream manufacturers to choose upstream producers with lower carbon emissions by tracking upstream emission responsibility, consumers often fail to buy these more environmentally friendly products when they are more expensive, because doing so represents a reduction in their actual income. People want to consume more, and for that they need to generate more income. This is done through the supply of primary factors of production [99]. Downstream responsibility, in this case, focuses on this source of income. The benefit of emissions is delivered to suppliers in the form of income, and downstream responsibility forces upstream suppliers to choose downstream producers or product consumers with lower emissions by attributing emission responsibility to the supplier of the production's initial input. Based on IOA, Gallego et al. (2005) constructed a downstream responsibility accounting framework using the Ghosh model, concluding that the transfer of downstream responsibility is not an overall transfer but a partial one: only part of the indirect emissions from the upstream sector are transferred to the downstream sector [100]. Rodrigues et al. (2006) pointed out that the ratio of indirect transfer is unspecified, so the resulting downstream responsibilities are uncertain [101]. Aiming at this problem, Lenzen et al.
(2007) improved this research by defining the transfer ratio of upstream responsibility based on industrial value-added [102]. Lenzen et al. (2010) emphasized the lack of sufficient attention to downstream responsibility in academic literature, enterprise reports, and other relevant research. They believed that the main cause is that, although it can be defined quantitatively based on IOA, the concept and framework of downstream responsibility are not clear enough. They solved this problem by mapping downstream responsibility onto upstream responsibility, thereby explaining its relevant terms in detail and establishing a complete framework for downstream responsibility analysis through comparative analysis with upstream responsibility [2]. Marques et al. (2012) carried out a detailed summary of PBR and CBR based on the previous work of Lenzen et al., indicating that both methods ignore the influence of upstream investment on the emissions of downstream sectors. They formally proposed the idea of income-based responsibility, which starts from the initial input (value-added) of each sector and allocates the emissions that sectors generate, as a consequence of accepting upstream inputs, to the related upstream sectors [99]. From a calculation perspective, the accounting of CBR and IBR is similar. Both take indirect responsibility into account, reducing the carbon leakage problem that commonly exists in PBR. Most current studies on environmental analysis, by contrast, approach the problem from an upstream perspective and seldom consider downstream responsibility [2]. Marques et al. (2012) pointed out that this is because consumers' final demand is the main driver of the current market-driven economy, which focuses on the consumption process; therefore, it is natural to conclude that consumers benefit from emissions [99]. Only a few studies have been conducted based on IBR [107]. IBR allocates responsibility for emissions generated throughout the whole production chain to the supplier of its initial input, which can be interpreted as placing responsibility on the production side. In many studies by Rodrigues et al., IBR is regarded as the broad "producer principle" [2,[99][100][101][102]108]. Although both CBR and IBR take indirect emissions into account, which can reduce PBR's "carbon leakage" problem, these three principles all place emission responsibility entirely on one agent. Marques et al. (2012) pointed out that it is fair that those who benefit monetarily from emissions should bear responsibility for those emissions. However, any measure placing full responsibility on one actor is unlikely to be accepted as a basis for climate policy, since it will never be perceived as fair by all the agents involved in negotiations [99]. Some scholars believed that any allocation mechanism which distributes all responsibility to one party has inevitable defects, and thus proposed shared responsibility as a solution.

Shared Responsibility

SR is based on the benefit principle and suggests that producers and consumers should jointly share emission responsibility. According to the benefit principle, all participants that benefit from emissions should take responsibility for them. The production of a certain product combines producers' and consumers' outcomes. The responsibility for emissions, which are a by-product of production, should be assigned to all the factors driving it, and thus borne by producers and consumers together. Kondo et al.
(1998) proposed attributing the emissions induced by exports and imports to both producers and consumers, and they pointed out that the attribution ratio should be adjusted based on the kind of commodity in question, notably distinguishing between final and intermediate use products and between the countries importing or exporting them [109]. Ferng (2003) also believed that the benefit principle is a reasonable basis for assigning responsibility. Because exporting countries generate income through the emissions generated by the production of exports (while importing countries improve their quality of life through emission-generating imported products), they should share responsibility for these emissions. Ferng (2003) and Chang (2013) pointed out that a mechanism in which many subjects share emission responsibilities is more conducive to mobilizing the countries directly involved in international trade and benefiting from emissions to shoulder emission responsibilities together, further promoting fairness. It is beneficial to global emission reduction, helps to promote mandatory emission reduction pledges in developing countries, and improves global engagement [77,110]. Lenzen et al. (2007) pointed out that the responsibility of each stage in the production chain under SR is more closely related to its upstream and downstream stages than under PBR, so all stages can be better encouraged to cooperate to reduce the emissions of the entire production chain. Compared with CBR, which mainly encourages consumers to change their habits, this principle can encourage producers and consumers to jointly reduce production emissions [102]. Li et al. (2019) also proposed that SR, as a modified responsibility allocation scheme, can not only effectively mitigate "carbon leakage" in international trade but also inspire producers and consumers together to reduce global emissions. Thus, it is a relatively comprehensive and effective responsibility principle [111]. At present, the main dispute about SR concerns the distribution ratio. Various approaches based on different considerations have been proposed in the literature. Kondo et al. (1998) believed that the distribution ratio should separate products by type, for example, dividing intermediate and final consumer goods, as well as by the national conditions of the countries importing and exporting them [109]. Ferng (2003) argued that the distribution ratio should be fair, giving expression to the economic structures, consumption patterns, and consumption levels of different countries, and that it is crucial to ensure the basic demand per capita. However, this study did not propose any specific ratios, instead only assuming that each party should shoulder half of the responsibility in the empirical analysis [77]. Bastianoni et al. (2004) proposed a ratio calculation method. The direct emissions of each stage in the production chain are first calculated (e.g., 50 for stage 1, 30 for stage 2, and 20 for stage 3); the emissions of each stage and its upstream stages are then summed (50 for stage 1, 80 for stage 2, and 100 for stage 3) and finally totaled (50 + 80 + 100 = 230). Each stage's share equals its cumulative upstream emissions divided by this total (50/230 for stage 1, 80/230 for stage 2, and 100/230 for stage 3). According to this method, the further downstream a company is, the greater the proportion it bears, with final consumers bearing most of the responsibility [81].
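The arithmetic of this cumulative scheme is easy to reproduce; the short sketch below uses exactly the three-stage numbers from the example above.

```python
from itertools import accumulate

direct = [50.0, 30.0, 20.0]                # direct emissions at stages 1..3 (upstream to downstream)
cumulative = list(accumulate(direct))      # [50, 80, 100]: each stage plus everything upstream
total = sum(cumulative)                    # 230

shares = [c / total for c in cumulative]   # [50/230, 80/230, 100/230]
for stage, share in enumerate(shares, start=1):
    print(f"stage {stage}: {share:.3f}")   # shares grow toward the downstream end
```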
Some scholars argued, however, that this method lacks a theoretical basis, and that increasing or decreasing the number of production stages changes the resulting distribution ratio [73]. Rodrigues et al. (2006) provided a realistic basis for SR through simulated negotiation. They proposed that an emission responsibility principle should have six attributes: (1) the overall responsibility is equal to the sum of its parts; (2) the sum of each country's emission responsibility equals the global total of direct emissions; (3) indirect responsibility from upstream and downstream should be included; (4) the ratio of emission responsibility allocated to downstream (upstream) participants is equal to the ratio of products obtained from upstream (downstream); (5) emission responsibility can be reduced only when direct emissions are reduced; and (6) the responsibilities of producers and consumers are symmetrical. The only principle that possesses all six attributes at the same time is SR. Producer and consumer responsibility is allocated symmetrically, so the distribution proportion of emission responsibility should be equally shared. As every producer is a consumer, we must assume that they share symmetric responsibility even if asymmetries exist in reality; otherwise, there will be too many distributive possibilities to reach an agreement [101]. Lenzen et al. (2007) raised doubts, pointing out that not all producers are consumers and that asymmetry is the norm in real producer-consumer relationships. They contended that although the asymmetric distribution of responsibility may lead to the possibility of excessive distribution, this is not enough to justify symmetry. Their study put forward a calculation method using the ratio of value-added to net output to allocate the proportion. Thus, the greater the value-added of a stage in the production chain, the greater the degree of control and influence it exercises over the industrial chain, and the greater the responsibility it shares. They further pointed out that this method is invariant, namely, increasing or decreasing the number of production stages does not change the ratio [102]. Rodrigues (2008), however, proved that this invariance holds only under certain conditions [108]. Since no attribution principle is well proven and widely accepted, SR research has developed slowly. There are also disputes about the operability of SR. Bastianoni et al. (2004) argued that both PBR and CBR are problematic, and SR may become a compromise solution [83]. Andrew et al. (2008) compared the emission responsibility under the three principles and believed that SR may obtain more extensive support [10]. However, McKerlie et al. (2006) pointed out that SR expands responsibility in general but makes it more difficult to define the responsibilities of each subject [112]. Peters (2008) also believed that the issue of weights would become the new focus of debate [87]. Zhou (2012) proposed that, among the three principles, PBR has the best operability. Compared with PBR, CBR adds one step, which increases uncertainty and reduces operability. SR adds another step beyond CBR, so of the three it has the highest data requirements, the highest uncertainty, and, because of unsolved theoretical problems such as the distribution ratio, the worst operability [73]. Li et al. (2019) pointed out that PBR is still the most widely adopted principle, while SR is hampered by its operability and is currently used in few relevant studies [111].
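As a minimal illustration of sharing, the sketch below blends production-based and consumption-based accounts with a single weight, using the symmetric 50/50 split discussed above; the three country totals are invented for illustration, and real proposals replace the fixed weight with value-added or other criteria.

```python
import numpy as np

pbr = np.array([400.0, 250.0, 150.0])   # production-based emissions by country (illustrative)
cbr = np.array([300.0, 330.0, 170.0])   # consumption-based emissions, same world total

alpha = 0.5                              # symmetric producer/consumer split
sr = alpha * pbr + (1.0 - alpha) * cbr   # shared-responsibility allocation

# Attribute (2): country responsibilities still sum to global direct emissions.
assert np.isclose(sr.sum(), pbr.sum())
print(sr)                                # [350. 290. 160.]
```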
Whether calculating PBR, CBR, or SR, current allocation schemes of emission responsibility are all based on EEIT accounting using traditional gross value statistics. Emission reduction is a globally common problem. The recent development of economic globalization and increasing production fragmentation has caused the costs and benefits of goods and services to be dispersed around the world, so producers and consumers cannot be simply distinguished according to traditional trade statistics. As for emission responsibility allocation, a single PBR or CBR method has inevitable defects. Under the general trend of production globalization and the rapid development of international trade, these disadvantages will become more obvious. From a global production perspective, SR is the natural choice for making the principle of "common but differentiated responsibilities" concrete in global emission reduction. The development of GVC theory makes it feasible to trace EEIT according to its source, destination, and possible transfer path. SR based on detailed EEIT decomposition may provide new solutions for research on emission responsibility allocation.

GVC Theory and Its Utilization in Carbon Emission

The concept of the GVC originated from the value chain, which describes all the activities of a good (or service) from its conception to its final use, including design, production, marketing, supply, after-sales support, etc. Porter (1985) first proposed the concept of the enterprise value chain. He stated that the overall economic activities of enterprises can be divided into separate activities of different natures and links, which are interrelated in the process of enterprise value creation and constitute a behavioral chain, namely, the internal value chain of the enterprise. He believed that value chains between enterprises are also interrelated and that the position of each enterprise in the value chain system has an important impact on its competitiveness [113]. Kogut (1985) extended the enterprise value chain concept to the whole world. He proposed that each link of the entire value chain has a spatial configuration across different countries and regions, which depends on the comparative advantages between those countries and regions [114]. The difference between the value chain and the GVC is that the value chain can be contained in one geographical location or even within one enterprise, while the GVC is divided among multiple enterprises distributed across multiple locations. In a word, international trade in intermediates is the core element of the GVC. The rise and evolution of GVC theory have become a main feature of current international trade research. It is widely used in the detailed study of global industrial structure and dynamics to understand who, where, and how economic, social, and environmental value is created and distributed. The labor division model of the GVC changes the realization and distribution mechanism of trade benefits, decoupling them from trade scale. Traditional trade statistics can neither effectively reflect the value created by a country nor accurately reflect the trade benefits it obtains. In 2012, the WTO and OECD launched the "Measurement of TiVA" joint research project. Several international organizations, such as the European Union and the United Nations Conference on Trade and Development (UNCTAD), have also conducted statistical studies on TiVA.
This work has promoted the mainstreaming of TiVA statistics and made them a permanent part of the official international statistical system [8]. The measurement of the GVC based on TiVA accounting has been widely adopted, with relevant studies mainly focusing on two aspects: (1) the theoretical accounting framework of the GVC based on TiVA, and (2) the description of the locations of countries participating in the GVC within that framework.

The Accounting Framework of GVC

The GVC is also called "vertical specialization", and it has many related labels (such as "slicing up the value chain", "outsourced production", "disintegration of production", "production fragmentation", "multistage production", "intra-product specialization", etc.) [115]. Balassa (1965), in the literature on trade liberalization, proposed that a continuous production process can be divided into a vertical trade chain extending across many countries, with the interconnectivity of the production process gradually increasing. Each country focuses on a specific stage of the production process and adds value according to its comparative advantage. This global division phenomenon is defined as vertical specialization [116]. However, because of data and calculation problems, research on vertical specialization remained at the case study level until Hummels et al. (2001) defined a narrow concept of vertical specialization and put forward quantitative indices for systematic measurement, which made the measurement of the GVC possible. Hummels et al. (2001) defined the value of imported inputs embodied in goods that are exported as vertical specialization (VS) and called the value of exports that are embodied in a second country's export goods VS1. They provided a formula for computing VS but none for VS1, and pointed out that VS1 is more difficult to measure than VS, as it requires bilateral trade flow data matched to input-output relations [115]. Koopman et al. (2010) acknowledged that Hummels et al. (2001) provided the first empirical measurements of vertical specialization, but argued that their measurement is valid only in special cases and breaks down when confronted with the multi-country, back-and-forth nature of current global production networks. They pointed out that there are two key assumptions embodied in Hummels's measurement. First, imported intermediate inputs are assumed to be wholly foreign value-added, with no domestic value-added returning after processing abroad, and no more than one country exports intermediates; in this model, a country cannot use imported intermediates to produce exported intermediate products. Second, domestic sales and exports are assumed to be produced with the same intensity of imported inputs. This assumption is violated when processing exports raise the imported intermediate content of exports relative to domestic use [117]. Relaxing the first assumption, Wang et al. (2009) extended Hummels's method based on the international input-output model and developed an accounting framework involving multiple countries. Based on the Asian international input-output tables, they decomposed value-added in the multinational production chain into the net contribution of each country. They provided formulas for both VS and VS1 and pointed out that Hummels's measurement is only a special case of their framework [118]. Koopman et al.
(2008) relaxed the second assumption and proposed a method for computing domestic and foreign value-added under commonly existing processing trade conditions [119]. Koopman et al. (2010) relaxed both assumptions and provided a complete decomposition of gross exports into its value-added components, thus making it possible to connect trade statistics with SNA standards and to construct a quantitative index assessing whether a particular sector in a country is likely located upstream or downstream in the global production chain [117]. The concept of vertical specialization in these studies, following the narrow concept proposed by Hummels et al. (2001), is extended to the global system. For closed economies, IO information can be fully captured in the regional IO table; under such circumstances, narrow vertical specialization is only a special case of the new framework. Other studies expand the elements related to vertical specialization and define new indicators for measurement. Daudin et al. (2011) defined VS1* as the exports that, further down the production chain, are embedded in re-imported goods that are either consumed, invested, or used as inputs for final domestic use, namely, the domestic content of invested or consumed imports [120]. Johnson et al. (2012) defined the value-added produced in one country and eventually absorbed in other countries as value-added exports (VAX) and used the ratio of VAX to total exports (VAX ratio) as the measurement index of value-added in trade (VaiT) [5]. Stehrer et al. (2012) discussed the measurement of TiVA and VaiT, defining TiVA as a country's direct and indirect value-added embodied in final foreign consumption, and VaiT as the value-added of the total trade flow between two countries [121].
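Before moving to the full TiVA framework, here is a minimal sketch of the narrow, Hummels-style VS share discussed above (the imported-input content of exports) for a single country with two sectors; all coefficient and export values are invented for illustration.

```python
import numpy as np

Ad = np.array([[0.15, 0.20],            # domestic input coefficients
               [0.10, 0.25]])
Am = np.array([[0.05, 0.10],            # imported input coefficients
               [0.08, 0.04]])
e = np.array([60.0, 40.0])              # sectoral gross exports

Ld = np.linalg.inv(np.eye(2) - Ad)      # domestic Leontief inverse
vs = Am @ Ld @ e                        # imported inputs embodied in exports
print(f"VS share of exports = {vs.sum() / e.sum():.3f}")
```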
TiVA estimates the value-added embodied in imports, exports, and net exports based on the use of final products, while VaiT represents the (net) flows of value-added generated by traditional exports and imports. Koopman et al. (2014) decomposed total exports into nine categories of value-added components and double-counting items, but their decomposition was limited to the national level and did not go down to the industrial sector level [6]. Wang et al. (2013) compared the TiVA accounting method with the gross value accounting system from the perspective of gross exports and decomposed total exports into 16 value-added components and double-counting items, thus realizing the complete decomposition of gross exports. They also pointed out that the VS, VS1, and VS1* indicators only represent part of the components, or linear combinations of them, after decomposition [122]. In their work, gross exports can be decomposed into domestic value-added, foreign value-added (FVA), and double-counting items (PDC). Domestic value-added can be further decomposed into domestic value-added absorbed in foreign countries (DVA) and domestic value-added that is exported before finally coming back (RDV), as shown in Figure 1(a). Furthermore, the domestic value-added of exports can be divided into eight types based on the form of the export products, their absorption mode, and the form of the returned products, as shown in Figure 1(b). The FVA can be further divided into four parts according to country of origin and product form, and the PDC can be further divided into four parts according to source, as shown in Figure 1(c). The main estimating formula, Equation (1), expresses this decomposition; the components of the gross export accounting in Figure 1 correspond one-to-one to the parts of Equation (1), so gross exports can be completely decomposed. In this formula, E_sr is the export vector denoting the gross exports of country s to country r, and E_r* is the total exports of country r. V_s is the value-added coefficient vector of country s, and V_t and V_r are defined similarly. B_rs is the Leontief inverse matrix, the total requirement matrix giving the amount of gross output in producing country r required for a one-unit increase in final demand in country s; B_ss, B_rr, B_rt, and B_ts have similar meanings. A_sr is the input coefficient matrix giving intermediate use in country r of goods produced in country s. L_ss and L_rr are the local Leontief inverse matrices. "#" denotes element-wise matrix multiplication: for example, when a matrix is multiplied by an n×1 column vector, each row of the matrix is multiplied by the corresponding element of the vector. Previous concepts related to TiVA can be obtained as linear combinations of components of this decomposition. So far, the measurement of vertical specialization has been incorporated into a unified and compatible framework, and the GVC accounting framework based on TiVA accounting has been improved [123].
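The full 16-term decomposition is too long to reproduce here, but the core value-added-by-source logic behind it can be shown in a deliberately simplified two-country, one-sector sketch (it ignores the absorption, return, and double-counting distinctions that Equation (1) handles); all numbers are illustrative.

```python
import numpy as np

# Inter-country input coefficients A[i, j]: inputs from country i per unit of output in j.
A = np.array([[0.20, 0.15],
              [0.10, 0.25]])
B = np.linalg.inv(np.eye(2) - A)        # global Leontief inverse
v = 1.0 - A.sum(axis=0)                 # value-added per unit of output

e_r = np.array([0.0, 100.0])            # gross exports of country r (index 1)

x_needed = B @ e_r                      # output required everywhere by r's exports
va_by_source = v * x_needed             # value-added generated in each country

print(f"DVA share {va_by_source[1] / e_r.sum():.3f}, "
      f"FVA share {va_by_source[0] / e_r.sum():.3f}")
assert np.isclose(va_by_source.sum(), e_r.sum())  # value-added exhausts gross exports
```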
Measurement of GVC

Characterized by the international division of production, the GVC's depth and breadth continuously extend as the international division of labor is gradually refined from products to production processes. The specialization of each country is no longer based on the products it produces, but on the production processes consistent with its comparative advantage. In his discussion of the enterprise value chain, Porter (1985) pointed out that the position of each enterprise in the value chain system has an important impact on its competitiveness [113]. Extended to the global level, each country's participation status in the GVC likewise has a profound impact on its benefits and competitiveness in international trade. As the accounting framework of the GVC has gradually improved, many scholars have proposed indicators to measure a country's participation status in the GVC, covering both physical location (position in the production chain) and economic status (profitability). In pinpointing location in the GVC, Dietzenbacher et al. (2005) first proposed using average propagation lengths (APLs) to measure the economic distance between sectors. An APL can be interpreted as the average number of steps it takes for a cost-push in industry i to affect the price of product j, or as the average number of steps it takes for a demand-pull in industry j to affect production in sector i [124]. Dietzenbacher et al. (2007) and Inomata (2008) then extended APLs, applying them to an MRIO model [125,126]. Fally (2011) proposed two measurement indices, "distance to final consumption" (the average number of production stages between production and final consumption) and "average number of production stages embodied in each product", respectively named the upstream and downstream indicators by Antras et al. (2012) and Antras et al. (2013). Subsequent work noted that these indicators have two problems. First, they are calculated based on a sector's total output, which includes not only final goods and services but also intermediate inputs [127][128][129]. Dietzenbacher et al. (2005) and Dietzenbacher et al. (2007) argued that production chain indicators must start from a sector's initial inputs, such as labor and capital (or value-added), not from its total output. Second, the "upstream" and "downstream" indicators are not substitutes for each other, and the two indicators may lead to opposite results for the same country/sector [124,125,130]. Addressing the first problem, Su et al. (2015) improved the upstream indicator based on the TiVA accounting framework and calculated the sectoral upstream indicator for 2011 and the exporting upstream indicator for 1995-2011 using world input-output table (WIOT) information for 40 countries and 35 sectors. The results showed that the previous calculation method was indeed flawed [131]. Ni (2016) extended the production stage number concept to the global input-output framework, distinguishing between domestic and international stages [132]. Ye et al. (2015) defined average value-added propagation lengths (VAPLs) from the perspective of value-added propagation, from both forward and backward perspectives [133]. Ni et al. (2016) further extended VAPLs to value-added propagation from one sector to one final demand sector (point-to-point), from one sector to a final demand sector group (point-to-plane), from a sector group to one final demand sector (plane-to-point), and from a sector group to a final demand sector group (plane-to-plane), and pointed out that generalized VAPLs cover the various measurements of GVC location in previous studies [134].
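A compact sketch of the APL idea follows, using the standard result that the step-weighted sum of input rounds equals (L - I)L, where L is the Leontief inverse; the 3-sector coefficient matrix is invented for illustration.

```python
import numpy as np

A = np.array([[0.10, 0.30, 0.05],
              [0.20, 0.05, 0.25],
              [0.05, 0.15, 0.10]])
L = np.linalg.inv(np.eye(3) - A)

S = L - np.eye(3)   # sum of A^k over k >= 1: total direct plus indirect linkages
W = S @ L           # sum of k * A^k over k >= 1: linkages weighted by step count
APL = W / S         # average number of production steps between sectors i and j
                    # (safe here: every entry of S is positive for this A)
print(np.round(APL, 2))
```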
A further study [130] redefined production length as the distance between initial inputs and final products and pointed out that indicators constructed on this basis are more consistent and more in line with economic interpretation; the average length of the value chain under this definition always equals the ratio between the total output value of each part and the corresponding value-added resulting from it. Moreover, based on the value-added trade accounting frameworks proposed by Wang et al. (2013) and Koopman et al. (2014), total production length can be decomposed into a pure domestic stage, a direct TiVA stage, and a GVC stage, which comprehensively reflects in-depth transnational production activities. The study pointed out that although there are some conceptual differences between production length measurement and location measurement, as long as production length is defined according to the number of production stages at the bilateral or sectoral level, indicators representing the position of a country/sector in the GVC can be constructed by decomposition at various levels [130]. Yan et al. (2018) further expanded this method by decomposing the change in production chain length into the change in each industry's length and the change in the proportion of each industry's value-added (industrial structure), in order to calculate the factors influencing the change in the length of the production chain [135]. Ni (2018) comprehensively reviewed the literature on GVC measurement and pointed out that the generalized VAPL from a sector group to one final demand sector (plane-to-point) proposed in Ni et al. (2016) is in line with the backward production length (namely, the downstream indicator of a sector) defined in the production-length framework above [130]. Ni (2018) also stated that, no matter how and from which perspective the position and length of production are defined, the core is a weighted sum over the stages of the production process, and that research has largely focused on perfecting the measurement of macro GVC position [123]. The economic status of a country in the GVC has a direct welfare interpretation, and its measurement indicator, the ratio of domestic value-added in exports (DVAR), is relatively uniform. Generally speaking, the higher the DVAR, the more domestic value is added per unit of a country's exports, and the stronger the country's profitability in the international division of labor; in other words, the higher its economic status in the international division of labor. This definition also shows that the key point of the DVAR index is the exact definition of value-added exports. Johnson et al. (2012) defined value-added exports, from the perspective of the final consumption of products, as the value added by one country's production that is eventually absorbed by other countries [5]. With the improvement of the TiVA accounting framework, DVAR is being applied at an ever-increasing rate. Zhang et al. (2013) and Luo et al. (2014) used this indicator to evaluate China's exports [136,137]. Su (2016) also used this index to evaluate China's provincial exports [138]. In addition to the DVAR indicator, Koopman et al. (2010) constructed a GVC status index to evaluate the status of sector r in country i in the international division of labor, as shown in Equation (2):

GVC position_ir = ln(1 + IV_ir / E_ir) - ln(1 + FV_ir / E_ir), (2)

where IV_ir is the domestic intermediates exported by sector r of country i that importing countries use to produce their own exports, FV_ir is the foreign intermediates used in sector r of country i, and E_ir is the gross exports of sector r of country i.
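Taking Equation (2) in its commonly cited Koopman et al. (2010) form, the index is a one-liner in code; the IV, FV, and export values below are hypothetical.

```python
from math import log

def gvc_position(iv: float, fv: float, exports: float) -> float:
    """GVC status index of Equation (2): ln(1 + IV/E) - ln(1 + FV/E)."""
    return log(1.0 + iv / exports) - log(1.0 + fv / exports)

# Hypothetical sector: exports of 100, of which 30 are domestic intermediates
# re-exported by partners (IV) and 15 are foreign intermediates used (FV).
print(f"position = {gvc_position(iv=30.0, fv=15.0, exports=100.0):+.3f}")  # > 0: relatively upstream
```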
Koopman et al. (2010) believed that the larger this indicator, the higher the sector's economic status in the GVC division of labor [117]. Wang et al. (2014) also used this indicator to evaluate the international division of labor status of various manufacturing sectors in China [139]. A later study then constructed a global value chain status index (GS index), as shown in Equation (3), where GS_{i,k} represents the GS index of sector k in country i, va_{i,k} represents its direct value-added coefficient, Y_{i,k} represents its gross output, d_{ik,jl} represents the direct consumption coefficient of sector l in country j on sector k in country i, and d_{ik,jl} Y_{j,l} / Y_{i,k} represents the proportion of the gross output of sector k in country i used as intermediates in the production of sector l in country j. The authors held that the economic meaning of the GS index is the range of value added experienced by intermediate products of a specific industry before they become final products, which can reflect the position and value-adding capacity of a country's specific sectors in the GVC [140]. With the gradual improvement of the value-added accounting framework, culminating in Wang et al. (2013) unifying the TiVA accounting framework and realizing the complete quantitative decomposition of gross exports, GVC accounting and its descriptive indicators at the macro level have become relatively mature and are applied with increasing frequency in the field of international trade. The GVC accounting framework is based on TiVA accounting; with the complete decomposition framework of gross exports, it can not only solve the "double counting" problem present in EEIT calculation based on traditional trade statistics but also provide an important means to clarify the source, destination, and transfer path of the emissions embodied in a country's exports. Furthermore, it is important for making the real EEIT situation clear and for fairly allocating emission responsibility among countries.

The Utilization of GVC Theory in Emission Research

Since the GVC accounting framework was not established until recent years, there are relatively few studies of EEIT based on TiVA accounting. The relevant research can be divided into two categories: (1) recalculating EEIT based on TiVA statistics and comparing the results with traditional gross value accounting, and (2) applying TiVA accounting below the national level to trace where embodied emissions originate and where they go. One such study measured regional exports and their embodied pollutants (PM2.5, SO2, NOx, and NMVOC (non-methane volatile organic compounds)) by applying TiVA accounting to China's MRIO table, and tracked the mismatch between export-driven economic benefits and environmental costs along the supply chain. The authors found that about 56% of the economic benefits induced by China's exports go to the developed coastal regions, but about 72% of the associated air pollution is generated in the underdeveloped central and western regions [152]. This study is a regional-level application of GVC theory, which clearly traces the domestic paths of the embodied emissions (or other pollutants) involved in international trade within a country but cannot distinguish their specific destinations. Meng et al. [153] made a related attempt by linking domestic interregional IO tables with the global IO framework. Because the current trade statistics system is not yet complete, the compilation of such IO tables involves a large amount of international trade data and may contain large errors; this approach has therefore received less attention, and related research has not been fully developed. Using this method, Pei et al.
(2018) analyzed the transfer routes of the embodied emissions originating in each region of China through the GVC [154]. Owing to the limited development of the trade statistics system, it is difficult to compile such IO tables. However, just as the international division of labor is deepening, production fragmentation is also deepening within countries; GVC participation actually takes place at the regional and even the enterprise level. Research combining the GVC and domestic value chains will therefore become an important research direction, of which studies like Meng et al. (2013) are effective attempts. Constrained by the slow progress of the GVC accounting framework and global MRIO statistics, research on EEIT from the GVC perspective is a newly emerging topic, and few studies have attempted it. For example, Zhao et al. (2013) and Zhang et al. (2015) distributed the emission responsibilities of upstream sectors to downstream sectors, final consumption sectors, and the upstream sectors themselves according to the value-added in their production. A major weakness in this work is the assumption that the studied sectors form a closed economic system, so only connections between sectors within this system are considered, while connections between this system and external sectors are not [155,156]. As the GVC accounting framework, intra-national trade statistics, and global IO databases are perfected, research combining the GVC and emission responsibility allocation is coming closer to drawing a clear roadmap of EEIT sources, destinations, and transfer paths, thus providing a basis for emission responsibility allocation. This combination of research approaches may be an important trend in the field in the future.

Conclusions

There are two main deficiencies in existing EEIT research. Firstly, it is difficult to recognize the sources and destinations of emissions embodied in the processing trade, and there is a lack of relevant studies. Secondly, current EEIT accounting is based on traditional gross value statistics, which suffers from an increasingly severe "double counting" problem. Results based on current EEIT accounting do not sufficiently reveal the real EEIT situation among countries and therefore cannot support the establishment of a fair and reasonable mechanism for global emission responsibility allocation. PBR, CBR, and SR are currently the most common principles for emission responsibility allocation, and among them, CBR has received special attention in recent studies. There are inevitable drawbacks in both PBR and CBR, which assign responsibility to a single subject. SR seems more reasonable, as it can press producers and consumers to reduce emissions together. Nevertheless, there is controversy surrounding the distribution weights, and as yet no perfect and commonly accepted standard for their assignment. GVC accounting is based on TiVA and can solve the "double counting" problem in current EEIT accounting, as well as provide a clear roadmap of EEIT sources, destinations, and transfer paths using the decomposition framework of gross exports. GVC theory provides new ways to determine EEIT and its allocation of responsibility, and there are some effective empirical studies. The GVC accounting framework has been improved at the macro level, but it is still a frontier field of current research; many theoretical and practical obstacles to in-depth study remain.
The main drawback is the incompleteness of the current trade statistics system; however, global input-output databases and international trade statistics have become increasingly accurate, so empirical analysis and application are expected to become more extensive.

Perspectives

As processing trade accounts for a growing share of gross international trade, using TiVA accounting that distinguishes processing and non-processing trade to calculate EEIT can provide a detailed analysis of countries' EEIT and more accurately describe the current global EEIT situation. This helps to compare national and global emission reduction pressures and targets, and further provides references for formulating and implementing effective emission reduction policies. It is of paramount importance to establish a global emission chain according to GVC theory and then to apply GVC measurement to the global emission chain. This helps to analyze the influence of a country's GVC position and status on EEIT and can provide effective references for policy-making and implementation. The GVC is not only directly supported by the direct exporting regions but also indirectly supported by other regions that provide intermediate goods and services to these exporting regions. In most current GVC studies, however, China participates in the GVC as a single region. Considering the vastness of China, economic development, industrial structure, and export structure differ greatly among its regions. It is therefore also essential to embed China's regional input-output relationships into the global input-output table to analyze global value chains and global emission chains; such studies help to analyze the performance of each region of China in the global value chain and global emission chain and to enhance their competitiveness when participating in global production. Meng et al. (2013) provided a new framework for measuring the domestic linkages to global value chains [157], and more empirical studies are urgently needed. The economic benefits and environmental costs of products and services are globally scattered; any principle that assigns responsibility to a single subject has inevitable defects, and SR is a better choice. By decomposing global international trade flows and EEIT to analyze their sources, destinations, and transfer paths, the real economic benefits and emission costs for countries participating in the GVC can be calculated. On the basis of such analysis, a unified index relating benefits to emissions can be established, providing a basis for defining the responsibility shares of producers and consumers. This would be conducive to the establishment of a fair and unified emission responsibility allocation mechanism, which is expected to make both producers and consumers more enthusiastic and active in reducing emissions.

Author Contributions: B.Z. is responsible for conceptualization and writing-original draft preparation. Y.N. is responsible for conceptualization and manuscript review together with S.B., T.D. and Y.Z. All authors have read and agreed to the published version of the manuscript.

Funding: This study received no external funding.
Enhancing Commercial Antibiotics with Trans-Cinnamaldehyde in Gram-Positive and Gram-Negative Bacteria: An In Vitro Approach

One strategy to mitigate the emergence of bacterial resistance involves reducing antibiotic doses by combining them with natural products, such as trans-cinnamaldehyde (CIN). The objective of this research was to identify in vitro combinations (CIN + commercial antibiotic (ABX)) that decrease the minimum inhibitory concentration (MIC) of seven antibiotics against 14 different Gram-positive and Gram-negative pathogenic bacteria, most of them classified as ESKAPE. MIC values were measured for all compounds using the broth microdilution method. The effect of the combinations on these microorganisms was analyzed through the checkerboard assay to determine the type of activity (synergy, antagonism, or addition). This analysis was complemented with a kinetic study of the synergistic combinations. Fifteen synergistic combinations were characterized for nine of the tested bacteria. CIN demonstrated effectiveness in reducing the MIC of chloramphenicol, streptomycin, amoxicillin, and erythromycin (94-98%) when tested on Serratia marcescens, Staphylococcus aureus, Pasteurella aerogenes, and Salmonella enterica, respectively. The kinetic study revealed that when the substances were tested alone at the MIC concentration observed in the synergistic combination, bacterial growth was not inhibited. However, when CIN and the ABX for which synergy was observed were tested simultaneously in combination at these same concentrations, the bacterial growth inhibition was complete. This demonstrates the highly potent in vitro synergistic activity of CIN when combined with commercial ABXs. This finding could be particularly beneficial in livestock farming, as this sector witnesses the highest quantities of antimicrobial usage, contributing significantly to antimicrobial resistance issues. Further research focused on this natural compound is thus warranted.

Introduction

In the current healthcare environment, the alarming rise in multi-drug-resistant bacterial infections has become a global public health threat. Due to the severity of the issue, the World Health Organization (WHO) declared the spread of antibiotic-resistant bacteria one of the three greatest public health hazards of the 21st century and published a list of top-priority pathogens for which research is of the utmost importance. Among the highlighted bacteria, Acinetobacter baumannii, Pseudomonas aeruginosa, Staphylococcus aureus, and Salmonella spp. can be found [1].
The spread of antibiotic resistance genes among bacteria [2] has created the necessity of developing alternative therapies or strategies. This approach has led the scientific community to return to the origin of therapies, such as the use of substances naturally synthesized by plants, that is, their primary or secondary metabolites [3]. Plants' secondary metabolites are involved in vital functions, for instance, growth, development, and storage; moreover, they help protect the plant against harmful stress like UV light or different pathogens that can cause infections [4]. Plant secondary metabolites have been (and continue to be) a rich source of bioactive molecules with significant clinical applications. In fact, over 50% of all pharmaceutical drugs currently on the market are directly derived from or inspired by natural products [5]. Some authors, like O'Shea and Moser [6], believed that the historical superiority of novel bioactive natural antimicrobial molecules compared to synthetic libraries was due to evolutionary selection pressure on antibiotic-producing organisms, a higher degree of diversity, and an average increase in heteroatoms. Some examples of unaltered natural compounds that exhibit potent antibacterial activity include penicillin, polymyxin B, and vancomycin [7]. Therefore, the search for new antimicrobial natural products continues to draw the attention of many researchers. One of the most influential families of bioactive phytochemicals are the polyphenols, which, alongside terpenes, are one of the most abundant groups of secondary metabolites [8]. The chemical structure of polyphenols includes a hydroxyl group (-OH), responsible not only for their antioxidant properties but also for their antimicrobial ones. OH groups act as proton exchangers, thereby reducing the pH gradient across the plasma membrane and thus leading to cell death [9]. For this reason, combinations of conventional antibiotics and natural products with demonstrated antimicrobial properties have been attracting researchers' attention [10]. These natural molecules have the potential to increase the antibiotic's antibacterial effect and, therefore, recover its initial ability to eliminate the most resistant strains of bacteria [11]. The combination effects occur, for instance, by intercalating several simultaneous mechanisms of action, a situation facilitated by the structural and stereochemical complexity of natural active molecules with a wide variety of functional groups [12]. Interaction between two antimicrobial compounds can result in synergy, additive effects, or antagonism. Synergy is generally recognized when the effect of two compounds in combination is stronger than the sum of the effects of each compound acting alone [13,14]. Such a strategy can also reduce the toxicity of the antibiotic by diminishing its dosage, which can be remarkable in the most resistant infections.
Two main reasons to include phytochemicals in pharmacological strategies against bacterial resistance must be highlighted. Firstly, they are common in nature. The natural environment is a source of very promising ingredients for producing new antimicrobial drugs, even though their antibacterial action is generally not as potent as that of commercial synthetic drugs [15]. Secondly, most of them have already been used as traditional remedies and are therefore presumably safe for humans. In fact, some of them, like cinnamaldehyde, were Generally Recognized As Safe (GRAS) by the Flavoring Extracts Manufacturers' Association and were approved for food use (21CFR 182.60) by the Food and Drug Administration (FDA), the WHO, and the Council of Europe [16][17][18].
Cinnamaldehyde is a bioactive phytocompound that occurs in the bark of cinnamon trees and is responsible for its flavor and odor. It is a member of the phenylpropanoid family and is produced via the shikimate pathway [19]. Cinnamaldehyde has been studied in depth as a major component of cinnamon because of its numerous properties. Early studies suggest antioxidant properties through inhibition of oxidative stress by elimination of the ROS (alkoxyl, superoxide anion, hydroxyl, and peroxyl radicals) responsible for lipid peroxidation, peroxidative hemolysis, and cell aging, thus preventing the oxidative injury that increases cell damage [14,20]. The process of oxidative stress is also linked to coronary and Alzheimer's disease and some types of cancer [21]. Cinnamaldehyde has also proven to be an efficient anti-inflammatory molecule by inhibiting the intracellular signaling pathway of macrophages, which results in the suppression of cytokine production [22]. Furthermore, it is also a candidate for anti-diabetic treatments, as it has a hypoglycemic action thanks to its ability to regulate glucose metabolism [23]. It has an insulinotropic effect due to its ability to alter the mRNA expression levels of pyruvate kinase and phosphoenolpyruvate carboxykinase [24]. Moreover, the most relevant property for our study is its antimicrobial activity. Cinnamaldehyde is, in fact, a well-known antimicrobial agent against both Gram-positive and Gram-negative bacteria [25,26].
Despite these facts, few authors have conducted a detailed analysis of the potential synergistic benefits of cinnamaldehyde combined with commercial antimicrobials [11,27]. Thus, the main objective of this research was to identify cinnamaldehyde synergistic combinations with commercial antimicrobials against pathogenic bacteria and to characterize the kinetics of these interactions over time. For this purpose, a group of seven widely used antibiotics with different mechanisms of action and 14 Gram-positive and Gram-negative bacteria responsible for very prevalent infectious pathologies were selected.
Toxicity Study
The phenylpropanoid trans-cinnamaldehyde (CIN) (Figure 1) is insoluble in water, and therefore an organic solvent was needed. To guarantee that the presence of DMSO did not affect bacterial growth, a DMSO toxicity study was conducted for each strain as previously described. As can be seen in Figure S1a,b, DMSO concentrations at or below 2.5% (concentration in the well) can be used without modifying the growth of the microorganisms, except for Proteus mirabilis, whose survival was 92.7 ± 0.9% at 2.5% DMSO. For this reason, P. mirabilis was excluded from any test where CIN was assayed.
Antimicrobial Susceptibility Test
The MIC results from tests of the natural antimicrobial and the conventional antibiotics against the 14 bacterial strains are presented in Table 1. Experiments revealed that the lowest trans-cinnamaldehyde MIC was 500 µg/mL for most strains, except for Enterococcus faecalis and Pseudomonas aeruginosa, whose MICs doubled that value. P. mirabilis could not be probed against this compound because the DMSO concentration required to dissolve it would have affected its growth (Figure S1a). With these results, some exclusion criteria for the interaction experiments were established as follows: 1. Antibiotics with a MIC below 10 µg/mL for a given bacterium were discarded; these results were considered difficult to improve in combination with a synergistic compound. 2. Bacteria that were susceptible to DMSO at a concentration below 2.5% in the toxicity study were excluded because of the impossibility of solubilizing CIN in pure water.
After applying these criteria, 60 natural compound-antibiotic-bacterium interactions were selected for the checkerboard test.
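A minimal sketch of how the MIC readout and the two exclusion criteria above can be operationalised follows (this is our own illustration rather than the study's code; the OD tolerance and the data structures are assumptions):

```python
# Sketch: MIC from a broth microdilution series plus the study's two
# exclusion criteria. Thresholds and inputs are illustrative assumptions.

def mic_from_dilution_series(concentrations_ug_ml, od_readings, od_blank, tol=0.05):
    """Return the lowest concentration whose OD stays at blank level,
    i.e. complete growth inhibition; None if growth occurs at every dose."""
    inhibited = [c for c, od in zip(concentrations_ug_ml, od_readings)
                 if od <= od_blank + tol]
    return min(inhibited) if inhibited else None

def select_combinations(mic_table, dmso_sensitive):
    """mic_table: {(bacterium, compound): MIC in ug/mL}.
    dmso_sensitive: bacteria that cannot tolerate <= 2.5% DMSO."""
    selected = []
    for (bug, abx), mic in mic_table.items():
        if abx == "CIN":
            continue                  # CIN is the partner, not the tested ABX
        if mic < 10:                  # criterion 1: ABX already very potent
            continue
        if bug in dmso_sensitive:     # criterion 2: CIN cannot be dissolved
            continue
        selected.append((bug, abx))
    return selected

# Hypothetical example: a two-fold series from 2000 ug/mL downwards.
concs = [2000 / 2 ** i for i in range(8)]
ods = [0.08, 0.09, 0.08, 0.10, 0.45, 0.80, 0.95, 1.00]  # growth at low doses
print(mic_from_dilution_series(concs, ods, od_blank=0.08))  # -> 250.0
```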
Checkerboard Assay
The potential reduction of the commercial antibiotic MICs by the natural antimicrobial CIN was examined, together with the calculation of the ΣFIC of the resulting combinations, that is, when both compounds were added simultaneously to the bacterial population. Data on these interactions are given in Table 2. Synergistic interactions between the commercial antibiotics and CIN were found for all bacteria tested except A. baumannii, E. coli, K. aerogenes, and P. aeruginosa, whose interactions were additive and/or antagonistic. Regarding the antibiotics, streptomycin sulfate (STM) was the compound with the highest number of synergistic interactions with CIN (five synergies, on S. enterica, S. aureus, S. agalactiae, L. monocytogenes, and E. faecalis). In contrast, amoxicillin (AMO) and erythromycin (ERY) displayed this effect only on P. aerogenes and S. marcescens, respectively. On the other hand, the largest antibiotic MIC reduction was found for chloramphenicol (CHL) in the combination S. marcescens-CIN-CHL, with an antibiotic MIC reduction of 98%, followed by S. aureus-CIN-STM, P. aerogenes-CIN-AMO, and S. enterica-CIN-ERY, where the MIC was reduced by 94%.
The number of synergistic interactions (CIN + ABX) was 15. All of them represented antibiotic MIC reductions higher than 75%. However, some of the interactions classed as 'additive' also showed MIC reductions (ranging from 50 to 94%). Hence, if this is taken into account, the number of interactions giving antibiotic MIC reductions grows to 37.
Kinetic Growth Assay
The 15 synergistic interactions underwent a detailed kinetic growth study (Figures S2-S9) to analyze the behavior of each microorganism over 24 h under the conditions described in Section 4.5. In general terms, for all of them, the mixture of commercial ABX and CIN, when both were at their MIC_comb, caused complete inhibition of bacterial growth (Figures S2-S9). In the same way, Figures S2-S9 also illustrate the complete growth inhibition in the experiments where the natural and commercial compounds were administered individually at their respective MIC_alone. On the contrary, this effect is not seen when each compound is applied individually to the bacteria at its MIC_comb. In most of these experiments, a lower OD was measured after the lag phase when compared to the positive control.
Antimicrobial Activity Analysis of Cinnamaldehyde and Commercial Antibiotics
Cinnamaldehyde is a well-studied substance with a wide range of properties, and its antimicrobial activity has been extensively evaluated. However, to the best of our knowledge, no previous macro- and/or microdilution studies on the exact Gram-positive and Gram-negative bacterial strains used here have been reported (Table 1). The only exceptions are the works by Sim et al. [26] on P. aeruginosa (ATCC 27853) and E. coli (ATCC 25922), by Ferro et al. [28] on P. aeruginosa (ATCC 27853), and the research by Bianchi et al. [29] on E. coli (ATCC 25922). In the first publication, Sim et al. provided lower MIC values than ours (see Table 1) for E. coli (157 µg/mL) and for P. aeruginosa (630 µg/mL), even using the trans-isomer as we did. In the second work, the MIC obtained by Ferro et al. [28] for P. aeruginosa (1000 µg/mL) equals ours (Table 1). Finally, in the third article, our MIC value for E. coli (500 µg/mL, Table 1) roughly doubles that obtained by Bianchi et al. (256 µg/mL) [29], which is, in turn, above the value given by Sim et al.
[26]. Regarding other studies, most antimicrobial tests are designed to evaluate the efficacy of essential oils (not isolated components) [45,46] and/or of isolated CIN, but on bacterial strains different from those used in the current research.
Commercial antibiotics were also tested on the 14 bacterial strains. In Table 1, the experimental values are shown together with the MICs obtained by other authors. As mentioned in the Results section, only referenced MIC values for the same strain and for macro- or microdilution methods are given in that table. Most of our results coincide with the bibliographic ones, for example, the AMO MIC on A. baumannii and E. coli, ampicillin (AMP) on E. coli, tetracycline chlorhydrate (TC) on P. aeruginosa, and CHL on B. subtilis and E. coli, or they are of the same order of magnitude as the reported values, as happens with the remaining data. This was observed when the microdilution method was used in both sets of MIC values (Exp. and Lit., Table 1). However, for STM and CHL on A. baumannii, AMP and AMO on P. aeruginosa, and CHL on S. marcescens, the reported MIC data differ from ours. The cause could be that these reported studies applied the macrodilution method instead of the microdilution one.
Different mechanisms of action have been reported for CIN, depending on the type of bacteria. Most of them involve alteration of the cell membrane structure, disruption of bacterial biofilms, or gene inhibition, as described in the paragraphs below.
Gram-Positive
For L. monocytogenes (strains different from ours), trans-cinnamaldehyde MIC values from 250 [47] to 512 µg/mL [48] or 640 µg/mL [49] have been found, of which the last two are quite close to the MIC we report for CIN in this research (Table 1). CIN seems to reduce the swimming motility of L. monocytogenes, preventing the early stages of biofilm formation [49]. In previous studies on fungi, this microbicidal power of CIN had been attributed to the highly electrophilic carbonyl group (Figure 1), which makes the compound very reactive and ready to interact with sulfhydryl and amino moieties of microbial proteins [50]. Similar interactions may also take place with bacterial proteins.
According to García-Salinas [51] and Kerekes [47], the MIC values for S. aureus (a strain different from ours) were 400 µg/mL (very close to our result), although the stereochemistry of the aldehyde is not specified.
For S. agalactiae, a trans-cinnamaldehyde MIC of 660 µg/mL has been described, slightly higher than our result of 500 µg/mL [52].
For E. faecalis, authors like Ali et al. and Ferro et al. [53,54] agree that CIN inhibits bacterial biofilm formation through the regulation of exopolysaccharides and/or the downregulation of genes related to the Quorum Sensing-Fsr system [53], whose contribution to biofilm formation is via gelatinase production. On the other hand, superoxide radicals are secreted extracellularly in large quantities by E. faecalis but not by the other microorganisms studied [55][56][57][58]. In our experiments on E. faecalis (Table 1), the MIC was slightly higher than the others, as was also the case for P. aeruginosa.
Concerning the proven antimicrobial activity of CIN on S. aureus, one of the most interesting results is that reported by Baskaran et al.
[59], who verified that the effectiveness of this bioactivity persisted for 10 days in milk infected with the cited Gram-positive bacterium. With respect to its mechanism of action, some authors found that biofilm formation is also suppressed by the natural compound [51,60], while others [51,61] suggested cell membrane disruption or lysis of the peptidoglycan molecule and the subsequent leakage of intracellular content. CIN was also active on B. subtilis (Table 1). The authors [62] explained the CIN mode of action via the delocalization of membrane-associated proteins involved in division and cell shape processes.
Gram-Negative
For Gram-negative bacteria, on the contrary, it was possible to find MICs referenced in the literature for the same strains of E. coli and P. aeruginosa used in our study (Table 1). Other authors proposed MIC values for CIN ranging from 400 [61] to 519 µg/mL on different strains of the two cited bacteria [63]. The E. coli outer membrane has proven to be completely lysed by CIN [51]. Thanks to confocal microscopy, serious membrane damage was observed in this kind of bacteria when exposed to the natural compound. Apart from that, CIN was also able to inhibit biofilm formation [60] and, from a more clinical perspective, it also prevented bacteria from adhering to host tissues or Hep-cells [61].
Even though the other strains in our study differ from those found in the literature, trans-cinnamaldehyde has proven its antimicrobial activity against all the bacteria tested, like A. baumannii, with a reported MIC of 310 µg/mL [64], slightly lower than our result (500 µg/mL, Table 1).
For S. enterica, Liu et al. [48] describe MIC values of 1024 µg/mL and Nair et al. [66] of 656 µg/mL for cinnamaldehyde. Both give results higher than those found in our study (500 µg/mL) (Table 1). Structural changes were also observed in S. enterica [25] and K. pneumoniae [63] when treated with CIN, while some other researchers hypothesized that the CIN carbonyl group might bind to proteins, provoking the inhibition of amino acid decarboxylase [67] and modifying the production of bacterial metabolites.
K. aerogenes and P. aerogenes have not previously been tested with CIN or with essential oils containing it. No conclusive information has been found about the specific modes of action of CIN on K. aerogenes, P. aerogenes, or S. marcescens.
On the contrary, P. aeruginosa has been extensively studied. According to Didehdar et al., CIN inhibits PA01 biofilm formation via quorum sensing (QS) inhibition because the compound represses the lasB, rhlA, and pqsA genes [60]. There are also some important findings about the use of CIN at sub-inhibitory concentrations in this bacterium [28], i.e., doses lower than the MIC. The first of them is that the natural compound did not induce an adaptive phenotype in P. aeruginosa, E. faecalis, or S. aureus [28], suggesting that CIN may not have generated resistant strains over the time studied. In addition, at these lower doses, CIN diminishes the bacterium's metabolic rate [28]. Finally, the compound inhibits biofilm formation [28].
Assessment of the Combination
Combining antibiotic therapies is a strategy often employed in the treatment of multidrug-resistant infections. It has previously been utilized for the most resistant Gram-negative infections, but evidence suggests that the coadministration of two antimicrobial drugs can also be one of the cornerstones of the treatment of Gram-positive infections in the future [68]. Our results (Table 1) confirm the antimicrobial activity of cinnamaldehyde and, therefore, the feasibility of using this natural compound in combination with commercial antibiotics to decrease their MICs. The effects of the simultaneous use of antibiotics with different plant extracts have been studied by a number of researchers [69][70][71], but less attention has been paid to the synergistic effects of an individual commercial antibiotic and a single natural component, in particular trans-cinnamaldehyde.
Although the mechanisms of action of the synergies may differ depending on the antibiotic or bacterium, they all seem to point to the fact that damage to the cell envelope by CIN facilitates the entry of ABXs, making them more effective. On the other hand, the action of CIN on the membrane might also impair microbial resistance mechanisms against ABXs, such as efflux pumps, preventing them from ejecting ABXs out of the cell [72,73].
Beta-Lactams
Combinations of CIN and the beta-lactams used in this study have only been described for AMP, by Palaniappan et al. [25]. Although they used resistant strains, their work shows synergy between AMP and CIN against S. enterica, S. aureus, and E. coli [25]. However, no studies are available for the combination with AMO.
In our study, CIN interactions with beta-lactams gave better results than those with aminoglycosides in Gram-negative bacteria. The AMP and AMO interactions resulted in synergy in P. aerogenes (Figure S7a,b) and, additionally, AMP synergized with CIN in K. pneumoniae and L. monocytogenes (Figures S3b and S6). In addition, CIN achieved an AMO MIC reduction of up to 94% in P. aerogenes (Table 2).
Beta-lactam antibiotics are relatively small and hydrophilic molecules, but the hydrophobic outer membrane architecture of Gram-negative bacteria protects these organisms from hydrophilic antibiotics. Porins have been shown to be important entry points for antibacterial chemicals in these species [74].
The literature reports that beta-lactams inhibit the activity of the PBPs (penicillin-binding proteins), enzymes needed for the cross-linking of peptidoglycans during the final step of cell wall biosynthesis [75,76]. This weakens the cell wall owing to the reduced number of cross-links, altering the correct osmotic gradient of the cell [76].
This effect of the ABXs might add to that of CIN, which disrupts the cell membrane by changing the gradient of the electric cations, ultimately provoking the cell to burst. One of the resistance mechanisms that bacteria can develop against beta-lactams is efflux pumps, so coadministration with cinnamaldehyde might protect the intracellular antibiotic concentration from a sudden reduction [77]. This interaction was already described by Karumathil et al. for trans-cinnamaldehyde in A. baumannii [72].
Aminoglycosides
Many of the interactions between aminoglycosides and CIN resulted in ΣFIC values < 0.5 (Table 2) and, therefore, synergy. A synergistic combination of STM and CIN was already described by Liu et al. [48] against foodborne pathogens like L. monocytogenes and S.
enterica, both included in our study (Table 2). Their results are consistent with our observations, with ΣFIC values of 0.5 and 0.37, respectively, both with a 75% reduction of the ABX MIC (Table 2). Combinations with gentamycin (GTM) were also reported by Wang et al. against clinical strains of S. aureus, with a range of FICs between 0.37 and 0.5 [73], while our work also shows a clear synergistic interaction, with a ΣFIC value of 0.19 (Table 2). For the Gram-positive S. agalactiae, E. faecalis, and S. aureus, both aminoglycosides interacted synergistically (Table 2). Chadha et al. described synergistic activity between GTM and CIN against P. aeruginosa, which, in our case, resulted in a ΣFIC index of 1.5 and, therefore, addition [78], perhaps because the strain was different. On the other hand, our results are consistent with those of Thirapanmethee et al., who described additive effects between GTM and CIN against five strains of A. baumannii [11]. The most successful antibiotic group interactions are found for the aminoglycosides on Gram-positive microorganisms: E. faecalis, L. monocytogenes, S. agalactiae, and S. aureus (Table 2). Especially relevant is the experiment in which CIN reduced the MIC of STM against S. aureus by 94%.
The aminoglycosides' mechanism of action involves protein synthesis inhibition by targeting the 30S subunit of the prokaryotic ribosome [75,79]. These antibiotics are large, polar molecules that cannot passively diffuse through the bacterial cell envelope, so they require active transport systems to enter the bacterial cell [80]. If the cell membrane is disrupted by CIN, the rate of antibiotic entry into the cytosol will increase, and the intracellular concentration of antibiotic available to inhibit protein synthesis will be higher than in treatments where the natural substance is absent [68]. Furthermore, the interaction with CIN that destabilizes the cell envelope could also neutralize one of the main mechanisms bacteria employ to remove antibiotics, the efflux pumps, maintaining the concentration of ABXs in the cytoplasm [81].
Macrolides and Amphenicols
Synergy between macrolides like ERY and CIN was detailed by Palaniappan et al. [25] for S. enterica and E. coli. In contrast, an additive interaction was observed in both of our tests; while no ABX MIC reduction was seen on E. coli, a reduction of 94% was measured on S. enterica. Additionally, synergistic interactions against clinical strains of E. coli were described by Visvalingam et al. [82] when combining this macrolide with CIN. The variations in the FIC values (between 0.3 and 0.5) compared to ours may be attributed to differences in the strains tested. However, our findings do concur with those of Topa et al. [27], who investigated the same combination against P. aeruginosa biofilms and found additive action, as we did.
Macrolides, like ERY, and amphenicols, like CHL, share a similar mechanism of action, inhibiting the elongation of the protein chain at the ribosomal unit [75,83]. Both are broad-spectrum antibiotics effective against a wide range of bacteria, and both enter bacteria through passive diffusion [84]. Furthermore, both synergized with CIN on S. marcescens (Figure S9a,b), a Gram-negative bacterium, in which CIN achieved ABX MIC reductions of 75% for ERY and 98% for CHL, the highest obtained in this research (Table 2). In addition, the CHL combination with CIN also resulted in synergy for the Gram-positive S. agalactiae (Figure S4), with an ABX MIC reduction of 75% (Table 2).
CHL and ERY are both lipophilic molecules that enter the bacterial cell through passive diffusion across the cell membrane. CIN-induced membrane disruption could be the mechanism behind both ABXs' synergies with CIN, allowing greater entry of both antibiotics, which would then act on their ribosomal target [75].
When studying the MIC reductions (%) (Table 2), it can be observed that CIN helped reduce the CHL MIC in many other cases that are classed as additive, for example, on A. baumannii and K. pneumoniae.
Natural products have also been seen to induce the expression of efflux pumps in bacteria. Their induction can result in decreased susceptibility of bacteria to antibiotics that target the ribosome [85]. Cinnamaldehyde has also been studied as an efflux pump inducer in P. aeruginosa [86]. P. aeruginosa, together with K. aerogenes, A. baumannii, and E. coli, are the only bacteria for which synergies were not observed, and all of them are especially prone to developing resistance. It should be noted that every bacterial strain responds individually to the damage that a drug can cause, and therefore each combination should be studied separately.
When analyzing candidate molecules for use as antibiotics, attention must also be paid to their physicochemical properties. Some of these are closely related to their potential bactericidal activity, such as molecular weight, partition coefficient, hydrogen bonding, and solubility [87]. Antimicrobials with a low molecular weight, like CIN (132.16 g/mol) [88], usually act better against Gram-negative cultures [6]. CIN has a LogP of 1.90, which means it is slightly more lipophilic than hydrophilic [88]. Finally, water solubility is a crucial feature for testing in vitro antibacterial activity. CIN is fairly insoluble in water (1.42 mg/mL at 25 °C), and therefore the need for an organic solvent arises [89]. Pure organic solvents may be harmful to bacteria or alter their growth. That is why a toxicity test (Supporting Information) was conducted to determine whether the concentrations needed to dissolve CIN have an intrinsic damaging effect on the bacterial culture.
Analysis of the antibiotic interactions (Figures S2-S9) shows how the antimicrobial hypothesis of cell disruption might fit in. Many authors believe that naturally occurring bioactive substances primarily target the cell membrane and are more effective against Gram-positive bacteria [90]. CIN activity has been shown to affect not only Gram-positive but also Gram-negative bacteria to the same degree, which indicates that at least one of its possible modes of action is unspecific, like cell disruption. Furthermore, its slight lipophilicity and low molecular weight would enable CIN to penetrate the lipidic membranes and exert other mechanisms as well.
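The physicochemical descriptors quoted above can be reproduced computationally. A minimal sketch using RDKit follows (this is our own illustration; the authors do not state how the cited values were obtained, and the Crippen LogP is a calculated estimate that may differ slightly from the reported 1.90):

```python
# Sketch: computing MW and an estimated LogP for trans-cinnamaldehyde.
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen

# (E)-cinnamaldehyde: phenyl ring, trans C=C, terminal aldehyde (C9H8O)
cin = Chem.MolFromSmiles("O=C/C=C/c1ccccc1")

mw = Descriptors.MolWt(cin)   # ~132.16 g/mol, matching the cited value [88]
logp = Crippen.MolLogP(cin)   # Crippen estimate, close to the reported 1.90

print(f"MW = {mw:.2f} g/mol, estimated LogP = {logp:.2f}")
```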
Kinetics
As reported in the Results section, when synergies were observed, the combination of the commercial ABX and CIN at their respective MIC_comb resulted in complete inhibition of bacterial growth over time (Figures S2-S9). However, if tested alone at these same concentrations, complete inhibition was never achieved, because these doses are in fact sub-inhibitory (all of them lie below the respective MIC_alone). If we focus on the kinetics of the ABXs when tested alone at their MIC_comb on the different microorganisms, the general trend is an increase in the lag phase compared with that of the control, regardless of whether the bacteria are Gram-positive or Gram-negative (Figures S3, S4, S5b, S6, S7b, S8 and S9a), although the effect is stronger on Gram-negative bacteria. The same holds for the effect of CIN on all tested bacteria, except for K. pneumoniae (Figure S6) and S. marcescens (Figure S9), whose lag phases remain approximately the same. The lag phase is the period during which cells adapt to the new environment, and this adaptation can be slowed down if the conditions are hostile [91]. According to our data, CIN provoked a longer lag phase than the ABXs in the following situations: S. aureus-STM (Figure S5a), P. aerogenes-AMO (Figure S7a), P. aerogenes-AMP (Figure S7b), and S. enterica-STM (Figure S8), which could mean that these bacteria experience the environment containing CIN as more hostile than that produced by the commercial drug.
Most antibiotics produced a reduction in the growth rate in the exponential phase (in other words, a decrease in the slope of the log phase) when added alone at their respective MIC_comb. This step in bacterial growth corresponds to the period in which bacteria are reproducing, but also to a stage in which they grow larger because of their active metabolism [91]. This inhibitory effect is especially pronounced for E. faecalis-STM, GTM, and CIN (Figure S2a,b), L. monocytogenes-STM (Figure S3a), S. agalactiae-GTM, STM, and CHL (Figure S4), S. aureus-STM and CIN (Figure S5a), K. pneumoniae-AMP (Figure S6), S. enterica-CIN (Figure S8), and S. marcescens-CHL (Figure S9b). If the log-phase slopes of the experiments with ABXs at their MIC_comb are compared with the corresponding slopes of the assays with CIN at its MIC_comb, the trend seems to be that the ABX slopes are lower than the CIN slopes (Figures S3, S4, S6 and S9b). Only in the experiments on E. faecalis (Figure S2) and S. enterica (Figure S8) was the opposite behavior found. For the remaining tests, no difference stands out. A higher slope would mean a higher reproduction rate and/or a larger cell size.
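A minimal sketch of how the lag time and log-phase slope discussed above can be extracted from an OD time series (our own illustration, not the study's analysis code; the OD threshold and the toy curve are assumptions):

```python
# Sketch: simple kinetic descriptors from a 24 h OD-vs-time series.
def growth_descriptors(times_h, ods, od_threshold=0.2):
    """Lag time = first time the OD exceeds the baseline by a threshold;
    max slope = largest OD increase per hour (log-phase growth proxy)."""
    baseline = ods[0]
    lag = next((t for t, od in zip(times_h, ods)
                if od > baseline + od_threshold), None)
    slopes = [(o2 - o1) / (t2 - t1)
              for (t1, o1), (t2, o2) in zip(zip(times_h, ods),
                                            zip(times_h[1:], ods[1:]))]
    return lag, max(slopes)

hours = list(range(25))
# Hypothetical control-like curve: ~8 h before the threshold is crossed,
# then an exponential-style rise to a plateau.
ods = [0.05] * 6 + [0.08, 0.15, 0.3, 0.5, 0.7, 0.85, 0.95] + [1.0] * 12
lag, mu = growth_descriptors(hours, ods)
print(f"lag ~ {lag} h, max slope ~ {mu:.2f} OD/h")  # lag ~ 8 h, ~0.20 OD/h
```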
Materials and Methods
Bacterial Strains and Inoculum Preparation
Prior to the use of any microorganism, the bacterial inoculum was re-cultured from the cryotubes and incubated (J.P. Selecta incubator) for 24 h under the conditions required for optimum growth of each bacterium. The overnight culture was then adjusted to the required bacterial optical density of 0.5 on the McFarland scale, corresponding to 2.5 × 10^8 CFU/mL [30].
Compounds to Be Tested and Stock Solutions
CIN and seven commonly used antibiotics (ABXs) were used as antimicrobials. The CAS numbers and purity information of these compounds are shown in Table 3. The ABXs were our antimicrobial reference agents: AMP, AMO, GTM, STM, ERY, TC, and CHL. All of them were soluble in sterile water, and no other solvents were used.
CIN was dissolved in sterile water with 5% DMSO (Table 3) at a final concentration of 4000 µg/mL (stock solution). This concentration of DMSO was the minimum required for the trans-cinnamaldehyde to dissolve in an aqueous solution. Sterile conditions were ensured for all operations. The tested concentrations of the ABXs and CIN ranged from 0.1 to 2000 µg/mL.
Toxicity and Antimicrobial Susceptibility Tests
The minimum inhibitory concentration (MIC) and the minimum bactericidal concentration (MBC) of CIN and the MIC of the ABXs were determined. To ensure that the solvent DMSO was innocuous (at the concentrations used) for each bacterial strain, toxicity tests were performed beforehand. The procedure for these tests was the same as that for the determination of the MIC (the lowest concentration of a substance that completely inhibits bacterial growth).
The MIC of all antimicrobial agents against the bacterial strains was determined by the standard broth microdilution method according to the Clinical and Laboratory Standards Institute [30,33,92]. Each experiment was carried out in triplicate. Briefly, 100 µL of broth were added to each well of a 96-well plate (round bottom), followed by 100 µL of antimicrobial (or DMSO) solution. Samples were tested in two-fold dilutions, followed by the addition of 10 µL of the bacterial suspension once it had been adjusted according to the McFarland scale, as previously described. Positive and negative controls were also included. The positive control measures the standard growth of bacteria with no antimicrobial agent. The negative control contained only culture broth, to ensure that there was no microbial growth or contamination in it. After the incubation period (24 h) at each bacterium's optimum temperature, absorbance was measured at 625 nm with a BioTek™ Synergy H1 Hybrid multimode microplate reader.
DMSO toxicity results are given as a percentage of survival (Survival (%)), calculated as follows (Equation (1)):
Survival (%) = (Abs_DMSO / Abs_C) × 100 (1)
where Abs_DMSO is the average absorbance of the wells with DMSO solution at a given concentration and Abs_C is the average absorbance of the positive control wells.
The doses tested for DMSO ranged from 0.16 to 20%, and DMSO was considered non-toxic when survival was higher than 99%. Experiments were performed in triplicate. Survival data are expressed as the mean ± standard deviation.
The MBC was defined as the lowest concentration that yielded no colony growth upon subculturing on agar plates. It was tested only for CIN. Accordingly, aliquots from the well at the resulting MIC and from subsequent wells with higher concentrations were cultured and incubated for 24 h [93].
Checkerboard Assay
After the identification of the MIC and MBC, the possible interactions between the natural compound and each commercial antibiotic were studied. Combination tests were performed using the checkerboard assay method [94][95][96]. New stock solutions of each antimicrobial agent were prepared. The concentration of each stock solution was 4 times its reported MIC, which had been obtained for each microorganism individually beforehand. Trans-cinnamaldehyde serial two-fold dilutions were added from the 1st to the 7th column of the 96-well plate (round bottom), while antibiotic solutions were added from rows A to G.
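To make the plate layout concrete, the sketch below builds the concentration grid produced by two-fold dilutions of CIN across columns 1-7 and of the ABX down rows A-G (our own illustration; the top in-well concentrations are assumptions, since the exact in-well values depend on the 4× MIC stocks and the dilution scheme, which the text does not fully specify):

```python
# Sketch: generating a 7x7 checkerboard of (ABX, CIN) concentration pairs.
def checkerboard(abx_top, cin_top, n_rows=7, n_cols=7):
    """Return a dict mapping well IDs ('A1'..'G7') to (abx, cin) in ug/mL.
    Row A / column 1 hold the highest concentrations; each step halves."""
    rows = "ABCDEFG"[:n_rows]
    grid = {}
    for i, row in enumerate(rows):
        for j in range(n_cols):
            grid[f"{row}{j + 1}"] = (abx_top / 2 ** i, cin_top / 2 ** j)
    return grid

# Hypothetical example: assumed top in-well concentrations of 128 ug/mL
# for the ABX and 1000 ug/mL for CIN.
plate = checkerboard(abx_top=128, cin_top=1000)
print(plate["A1"])  # (128.0, 1000.0) -> most concentrated well
print(plate["G7"])  # (2.0, 15.625)   -> least concentrated well
```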
Thus, each well contained a different concentration pair of the two antimicrobial agents tested. The most concentrated well was A1, while the least concentrated one was G7. Once the serial two-fold dilutions had been applied to the plate, 10 µL of bacterial inoculum were added after adjustment to the right McFarland concentration, as previously described. Positive (bacteria without antimicrobial) and negative (culture medium without bacteria) controls were also prepared [97].
Two Fractional Inhibitory Concentration indices (FIC) were obtained to evaluate the interactions, FIC A and FIC B (Equations (2) and (3), respectively):
FIC A = MIC of A in combination with B / MIC of A alone (2)
FIC B = MIC of B in combination with A / MIC of B alone (3)
From these, the ΣFIC was calculated as the sum of FIC A and FIC B, and, according to the resulting value, the type of interaction between the two compounds was established. The interactions were considered synergistic if ΣFIC < 0.5; additive if 0.5 < ΣFIC < 4; and antagonistic if ΣFIC > 4 [98][99][100]. A worked numerical sketch of these indices is given after the Kinetic Growth Assay subsection below. Experiments were performed in triplicate under sterile conditions in a laminar flow chamber (Model MSC Advantage 1.2).
Kinetic Growth Assay
The methodology followed was that described by the Clinical and Laboratory Standards Institute, with slight modifications [78,101]. On a 96-well plate (round bottom), wells were filled with: (i) commercial antibiotic solution at its MIC when tested alone (MIC_alone), (ii) commercial antibiotic solution at its MIC when tested in combination with the natural product (MIC_comb), (iii) natural compound solution at its MIC when tested alone, (iv) natural compound solution at its MIC when tested in combination with the commercial antibiotic, and (v) a solution of the natural compound and the commercial antibiotic, both at their respective MICs when tested in combination [102].
The absorbance was measured every hour for 24 h at 625 nm with a SPECTROstar Nano reader (BMG Labtech) at each bacterium's optimal temperature. Experiments were performed in triplicate; all data are expressed as mean ± standard deviation.
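As the worked numerical sketch referenced above, the following illustrates Equations (2) and (3) and the ΣFIC thresholds (our own illustration, not the authors' code; the example MIC values are hypothetical):

```python
# Sketch: computing FIC_A, FIC_B and classifying the interaction by sum-FIC.
def classify_interaction(mic_a_alone, mic_a_comb, mic_b_alone, mic_b_comb):
    fic_a = mic_a_comb / mic_a_alone          # Equation (2)
    fic_b = mic_b_comb / mic_b_alone          # Equation (3)
    sum_fic = fic_a + fic_b
    if sum_fic < 0.5:
        kind = "synergy"
    elif sum_fic < 4:
        kind = "additive"
    else:
        kind = "antagonism"
    return sum_fic, kind

# Hypothetical example: the ABX MIC drops from 64 to 4 ug/mL (~94% reduction)
# and the CIN MIC from 500 to 62.5 ug/mL when the two are combined.
sum_fic, kind = classify_interaction(64, 4, 500, 62.5)
print(f"sum-FIC = {sum_fic:.3f} -> {kind}")  # sum-FIC = 0.188 -> synergy
```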
Conclusions
One of the key strategies to counteract multidrug-resistant pathogens is the coadministration of two active molecules with complementary modes of action, where one of them is a commercial antibiotic and the other could be, for example, a natural product with confirmed antimicrobial properties, like trans-cinnamaldehyde. If the substances act synergistically, the action of their combination is more powerful than that of the compounds added separately, allowing lower concentrations of these substances to be used to obtain the same effect.
In this paper, the antimicrobial properties of trans-cinnamaldehyde were studied on 14 different Gram-positive and Gram-negative bacteria. The MICs of seven commercial antibiotics (AMP, AMO, STM, GTM, ERY, TC, and CHL) were also determined. The MIC values for the natural product were 500 µg/mL for all bacteria except E. faecalis and P. aeruginosa, whose values were 1000 µg/mL. Antibiotic MICs equaled literature values only when the strain and the method were exactly the same as ours.
Then, the combination of CIN with each antibiotic was applied to every bacterium. This study led to 15 synergistic interactions (on E. faecalis, L. monocytogenes, S. agalactiae, S. aureus, K. pneumoniae, P. aerogenes, and S. enterica), in which the MIC of the commercial antibiotic was reduced by 75% in the weakest synergy and by up to 98% in the best. One-third of the synergistic interactions were shown for STM. The remaining synergies were observed for GTM, AMP, CHL, AMO, and ERY. Nevertheless, some of the additive interactions also showed antibiotic MIC reductions (ranging from 50 to 94%). Considering this fact, the number of interactions that produced an antibiotic MIC reduction increased to 37. Therefore, if one of the possible solutions to reduce antibiotic resistance consists of reducing the antibiotic dose, additive combinations may also be contemplated in future work.
Only the 15 synergistic combinations underwent the kinetic study of bacterial growth, where the synergistic effect of the combination was perfectly clear and confirmed the results previously obtained with the checkerboard assay.
According to our experimental data and previous research, the theory we support is that the main synergistic mechanism of CIN when added with commercial antibiotics is the disruption of the bacterial envelope, which would alter its permeability, helping the commercial antibiotic enter the cell and inhibit its growth. This does not mean that other mechanisms, such as efflux-pump inhibition or gene expression inhibition, do not take place, but they would be complementary or secondary modes of action of the CIN synergistic effects.
Phytochemicals like trans-cinnamaldehyde are widely available bioactive compounds that are not only common but also considered safe for human consumption by the FDA and the Council of Europe. CIN has been proven to show antimicrobial properties against pathogenic bacteria and has demonstrated synergistic behavior when combined with commercial antibiotics. Although its mechanism of action is not completely elucidated, this natural compound deserves much more attention as a candidate for future antimicrobial therapies because, in combination with commercial antibiotics, the dose reduction of the latter can reach extremely high values.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/plants13020192/s1.
Figure S1. Survival (%) of (a) Gram-negative bacteria and (b) Gram-positive bacteria tested in this work when exposed to different dilutions of DMSO.
Figures S2-S9 present the kinetic studies (OD at 625 nm vs. time (h)) for cinnamaldehyde (CIN) combined with the indicated antibiotic on each bacterium; all share the same legend. C+: curve for the positive control. MIC CIN_alone and MIC ABX_alone: curves for CIN and the specific ABX, respectively, when each was tested alone at its respective MIC. MIC ABX_comb: curve for the specific ABX tested alone but added at its MIC when the ABX and CIN were tested simultaneously. MIC CIN_comb: curve for CIN tested alone but added at its MIC when CIN and the specific ABX were tested simultaneously. (MIC ABX_comb + MIC CIN_comb): curve for the mixture of the specific ABX and CIN tested simultaneously at their respective MICs in combination. Data are given as mean ± standard deviation.
Figure S2. Kinetic study for CIN and (a) streptomycin (STM) or (b) gentamicin (GTM) on E. faecalis.
Figure S3. Kinetic study for CIN and (a) streptomycin (STM) or (b) ampicillin (AMP) on L. monocytogenes.
Figure S4. Kinetic study for CIN and (a) gentamicin (GTM), (b) streptomycin (STM), or (c) chloramphenicol (CHL) on S. agalactiae.
Figure S5. Kinetic study for CIN and (a) streptomycin (STM) or (b) gentamicin (GTM) on S. aureus.
Figure S6. Kinetic study for CIN and ampicillin (AMP) on K. pneumoniae.
Figure S7. Kinetic study for CIN and (a) amoxicillin (AMO) or (b) ampicillin (AMP) on P. aerogenes.
Figure S8. Kinetic study for CIN and streptomycin (STM) on S. enterica.
Figure S9. Kinetic study for CIN and (a) erythromycin (ERY) or (b) chloramphenicol (CHL) on S. marcescens.
Table 1. Minimum inhibitory concentrations (MIC_alone, µg/mL) of the commercial antibiotics and cinnamaldehyde tested alone on the selected bacteria.
Table 2. MICs of the commercial antibiotics and cinnamaldehyde when tested in combination (MIC_comb, µg/mL), FIC, ΣFIC, the resulting type of activity for every bacterium-natural compound-commercial antibiotic combination, and the MIC reduction (%) for each compound. Synergies are highlighted in pale gray; antibiotic MIC reductions ≥ 50% are highlighted in dark gray.
Table 3. Name, CAS number, provider, and purity of the compounds tested.
2024-01-15T16:13:59.727Z
2024-01-01T00:00:00.000
{ "year": 2024, "sha1": "85293328a7e12eea637212c636209e5ef85814bb", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2223-7747/13/2/192/pdf?version=1704902319", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "4ecb8468dc15d3456fb589cba6167a66a69529af", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [] }
231834632
pes2o/s2orc
v3-fos-license
Colorectal cancer incidence and mortality trends by sex and population group in South Africa: 2002–2014
Background South Africa (SA) has experienced a rapid transition in the Human Development Index (HDI) over the past decade, which has had an effect on the incidence and mortality rates of colorectal cancer (CRC). This study aims to provide CRC incidence and mortality trends by population group and sex in SA from 2002 to 2014. Methods Incidence data were extracted from the South African National Cancer Registry and mortality data were obtained from Statistics South Africa (STATS SA) for the period 2002 to 2014. Age-standardised incidence rates (ASIR) and age-standardised mortality rates (ASMR) were calculated using the STATS SA mid-year population as the denominator and the Segi world standard population for standardisation. A Joinpoint regression analysis was computed for the CRC ASIR and ASMR by population group and sex. Results A total of 33,232 incident CRC cases and 26,836 CRC deaths were reported during the study period. Of the CRC cases reported, 54% were males and 46% were females; among the deaths reported, 47% were males and 53% were females. Overall, there was a 2.5% average annual percentage change (AAPC) increase in ASIR from 2002 to 2014 (95% CI: 0.6–4.5, p-value < 0.001). For the overall ASMR, there was a 1.3% increase from 2002 to 2014 (95% CI: 0.1–2.6, p-value < 0.001). The ASIR and ASMR among the population groups were stable, with the exception of the Black population group. The ASIR increased consistently, at 4.3% for Black males (95% CI: 1.9–6.7, p-value < 0.001) and 3.4% for Black females (95% CI: 1.5–5.3, p-value < 0.001), from 2002 to 2014. Similarly, the ASMR for Black males and females increased by 4.2% (95% CI: 2.0–6.5, p-value < 0.001) and 3.4% (95% CI: 2.0–4.8, p-value < 0.01) from 2002 to 2014, respectively. Conclusions The disparities in the CRC incidence and mortality trends may reflect socioeconomic inequalities across the different population groups in SA. The rapid increase in CRC trends among the Black population group is concerning and requires further investigation and increased efforts for cancer prevention, early screening and diagnosis, as well as better access to cancer treatment.
Background
Colorectal cancer (CRC) is regarded as a leading cause of cancer morbidity and mortality worldwide [1]. It ranks third in cancer incidence and second in cancer mortality globally [1]. In 2019, there were 1.8 million new CRC cases and 880,792 CRC deaths worldwide [1][2][3]. In 2018, the age-standardised incidence rate (ASIR) and age-standardised mortality rate (ASMR) in the African region were estimated at 8.2 and 5.6 per 100,000 population, respectively [4]. According to the Globocan report, the estimated ASIR and ASMR for South Africa (SA) in 2018 were 14.4 and 7.6 per 100,000, respectively [5]. The latest South African National Cancer Registry (NCR) report, for 2016, recorded 3884 cases of CRC, with estimated ASIRs of 6.81 and 11.01 per 100,000 population for females and males, respectively [6]. There are limited data on colorectal cancer survival in SA [7,8]. Patient survival depends on multiple factors, such as the topography, morphology, staging, and treatment type [7]. According to the CONCORD-3 study, for the period from 2010 to 2014 among adults aged 15–99 years, the 5-year survival rate in SA was less than 20% [9]. Disparities in CRC between countries exist based on the Human Development Index (HDI).
The HDI is a composite score of the life expectancy, education, and per capita income of countries [10]. Countries with a high HDI report the highest CRC incidence rates, while low-HDI countries report the highest CRC mortality rates [1]. The CRC incidence of a country increases with increasing HDI status and, as such, may be used to signal changes in socioeconomic status [1]. The rise in incidence rates in countries undergoing rapid developmental transition may be attributed to changes in diet, obesity, and other lifestyle factors such as higher alcohol and red meat consumption [1,11,12]. On the other hand, the increase in CRC mortality rates observed in low-HDI countries may be attributed to minimal or absent screening and early detection programmes and poor access to cancer treatment [11]. The HDI of SA has been increasing steadily since 1990 [13]. Despite an increasing HDI, the Gini coefficient in SA is 0.63, indicating considerable societal inequality [13,14]. The Gini coefficient measures wealth distribution and ranges from 0 to 1, where 0 represents perfect equality and 1 indicates perfect inequality. The inequity in socioeconomic status and health access, among other factors, influences the health outcomes of the SA population [15][16][17]. As a result of the unique political history of SA, many of these inequalities are apparent when examining health trends by population group (Black, White, Asian, and Mixed race, commonly known as Coloured in SA). To appropriately plan for interventions, prevention, and control of CRC in the era of epidemiologic and economic transition, the evaluation of CRC patterns across population groups and sexes is necessary. This study aims to provide insights into the CRC incidence and mortality trends by population group and sex for SA.
Study design and data sources
A cross-sectional study was conducted using secondary analysis of two datasets to determine CRC incidence and mortality trends between 2002 and 2014 in SA by sex and population group. To determine incident CRC trends, the NCR database was used. The NCR methodology is outlined in detail by Singh et al. [18]. Briefly, the NCR collects data on all pathologically confirmed cancer cases from both public and private laboratories across SA. The NCR then collates, analyses, interprets, and reports annual cancer incidence by age group, population group, and sex. All cancers are coded according to the International Classification of Diseases for Oncology, 3rd edition (ICD-O-3) [19]. The cancers reported by the NCR are primary incident cancers. Cancers reported in metastatic sites (for example, lymph nodes) are investigated to determine the primary tumour site and are registered with the primary tumour site topography. If a primary site cannot be determined, the tumour is registered as an unknown primary site. The CRC primary incident cases, defined as ICD-O-3 codes C18, C19, and C20, from 2002 to 2014 were extracted from the NCR. The variables extracted included the year of diagnosis, sex, population group, age at diagnosis, and the morphology and topography of the cancer. CRC deaths from 2002 to 2014 were extracted from the Statistics South Africa (STATS SA) mortality and causes of death database. STATS SA collates national mortality statistics for the registered causes of death from death certificates.
We extracted the date of death, sex, population group, smoking status, marital status, education level, and age of the deceased for individuals whose first or underlying cause of death was recorded as CRC (ICD-O-3 codes C18, C19, and C20).
Statistical analysis
Stata® statistical software version 14.2 (StataCorp LLC, Texas, USA) was used to generate frequencies and to calculate the ASIR and ASMR by sex and population group. The estimated mid-year population from STATS SA was used as the denominator. The Segi world standard population was used for age standardisation. The Segi world standard population was developed by Dr. Mitsuo Segi in the late 1950s to allow international comparison of rates and evaluation of changes in incidence by comparing current rates with previously published rates [20][21][22]. It was later modified in 1966 by Doll et al. [20]. It is the most commonly used world standard population, particularly in the field of cancer, and allows in-country and between-country comparisons [21,23,24]. A detailed description of the calculation of the ASIR and ASMR is available elsewhere [20]. Briefly, the crude incidence rates were calculated first by dividing the total number of new primary CRC cases/deaths observed in a particular year by the total population (STATS SA mid-year population) in that same year, stratified by age group, sex, and population group. The crude rates were then multiplied by the weighted Segi world standard population to obtain age-standardised incidence or mortality rates. The weighted Segi world standard population is calculated by dividing the Segi world standard population for each age group (5-year intervals, e.g. 0–4 years) by the total Segi world standard population over all age groups. The calculation methods are as follows (a computational sketch of these calculations is given at the end of this subsection):
Crude rate = (new CRC cases or deaths in an age group in a given year / STATS SA mid-year population of that age group in the same year) × 100,000
ASIR or ASMR = Σ over age groups (age-specific crude rate × Segi standard population of the age group / total Segi standard population)
The age-standardised rates by year, sex, and population group were imported into the Joinpoint regression statistical software for trend analysis over the study period. The model used by the Joinpoint statistical software to create the trend patterns is described by Kim et al. [22,25,26]. Joinpoint is used to identify points (or "joinpoints") where there is a significant change in a trend, which enables a more nuanced trend analysis than a standard regression method. Furthermore, Joinpoint, rather than a standard regression method, is used by the US National Cancer Institute to analyse cancer trends [26]. All parameters were set at their defaults [26]. The method identifies joinpoints based on regression models with zero to five joinpoints. The modelled or estimated annual percentage change was based on the trends within each segment. To quantify trends over a fixed number of years, the average annual percentage change (AAPC) was calculated. The AAPC is a geometrically weighted average of the segment-specific annual percentage changes from the trend analysis, with the weights equal to the lengths of the segments within the specified fixed interval. The AAPC was considered significant at a p-value threshold of less than 0.05 using a two-sided test.
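The sketch below illustrates the age standardisation and AAPC definitions just given (our own illustration rather than the authors' Stata or Joinpoint code; the Segi weights are the commonly quoted values and the example inputs are hypothetical):

```python
import math

# Commonly quoted Segi world standard population weights for the 18
# five-year age groups 0-4, 5-9, ..., 80-84, 85+ (they sum to 100,000).
SEGI = [12000, 10000, 9000, 9000, 8000, 8000, 6000, 6000, 6000, 6000,
        5000, 4000, 4000, 3000, 2000, 1000, 500, 500]

def age_standardised_rate(events_by_age, pop_by_age):
    """Direct standardisation: events and mid-year populations are lists
    aligned with the 18 Segi age groups; returns a rate per 100,000."""
    total_std = sum(SEGI)
    return sum(events / pop * 100_000 * weight / total_std
               for events, pop, weight in zip(events_by_age, pop_by_age, SEGI))

def aapc(segments):
    """segments: list of (APC in percent, segment length in years) from a
    joinpoint fit; returns the AAPC in percent over the whole interval."""
    total_years = sum(n for _, n in segments)
    weighted_log = sum(n * math.log1p(apc / 100.0) for apc, n in segments)
    return (math.exp(weighted_log / total_years) - 1.0) * 100.0

# Hypothetical examples: 50 cases in the 60-64 group of 100,000 people ...
cases = [0] * 18
cases[12] = 50
print(age_standardised_rate(cases, [100_000] * 18))  # -> 2.0 per 100,000
# ... and a flat 7-year segment followed by +14.9%/year for 5 years.
print(round(aapc([(0.0, 7), (14.9, 5)]), 1))         # -> 6.0% per year
```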
Study population characteristics
A total of 33,232 incident CRC cases were reported during the study period, 56% from private healthcare sector laboratories and 44% from public healthcare sector laboratories. Of the CRC incident cases, 54% were males. Throughout the study period, the annual median incidence was 2292 cases per year (interquartile range (IQR): 2132–3081). The mean age at diagnosis was 61.7 years (±14.1 standard deviation (SD) years) for males and 61.8 years (±14.7 SD years) for females. The White population group had the highest percentage of CRC cases (49%) compared with the other population groups (Table 1).
There were 26,836 CRC deaths, with an annual median of 2138 (IQR: 1982–2321), between 2002 and 2014. Females accounted for 53% of all CRC deaths. The mean age at death was 64 years (±14.9 SD years) for males and 66 years (±15.7 SD years) for females. Sixty-three percent of the deaths were among adults over 60 years of age. The White population group reported the highest proportion of CRC deaths, at 41%, while the Asian population group reported the lowest, at 4%. The mortality-to-incidence ratio by age group was highest among the Black population group and lowest among the White population group (data not shown).
Age-specific incidence and mortality rates
Figures 1 and 2 illustrate the age-specific incidence and mortality rates for males and females in SA between 2002 and 2014. Rates in males and females increased in parallel until the age of 50 years, after which the rates for males were higher than those for females for both incidence and mortality. Rates peaked in the age group of 75 years and older.
Age-standardised incidence rate trends
On average, for males and females combined, there was a 2.5% average annual increase in ASIR from 2002 to 2014 (average annual percentage change (AAPC) = 2.5, 95% CI: 0.6–4.5, p-value < 0.001) (Table 2). The ASIR ranged from 11.6 to 13.5 and from 8.5 to 10.6 per 100,000 population per year over the study period among males and females, respectively. Overall, the ASIR was higher among males than among females (Figs. 3 and 4).
As shown in Fig. 4, the highest ASIR among females was observed in the year 2014, at 18.5 per 100,000 population, for the White population group. The ASIR in 2014 were the same for the Asian and Mixed race population groups, at 10.6 per 100,000. In the same year, Black females reported the lowest ASIR, at 2.3 per 100,000 population. The ASIR among females of the Mixed race and Asian population groups remained stable, while the ASIR among the Black population group increased consistently from 2002 to 2014, at 4.3% among males (AAPC = 4.3, 95% CI: 1.9–6.7, p-value < 0.001) and 3.4% among females (AAPC = 3.4, 95% CI: 1.5–5.3, p-value < 0.001). Among females of the White population group, the ASIR trend remained stable until 2009, after which the ASIR increased by 14.9% from 2009 to 2014 (AAPC = 14.9, 95% CI: 6.4–24.2, p-value < 0.01) (Table 2).
Age-standardised mortality rate trends
On average, for males and females combined, the ASMR increased by 1.3% from 2002 to 2014 (AAPC = 1.3, 95% CI: 0.1–2.6, p-value < 0.01) (Table 2). The overall ASMR ranged from 7.1 to 8.9 and from 5.5 to 5.9 per 100,000 population per year over the study period among males and females, respectively. As with the ASIR, the overall ASMR was higher in males than in females (Figs. 5 and 6). The highest ASMR in males was observed in the White population group in 2004, at 15.1 per 100,000 population; this rate was 1.2, 1.7, and 6.0 times higher than that of the Asian (12.2/100,000), Mixed race (8.6/100,000), and Black (2.5/100,000) population groups in 2004, respectively (Fig. 5). For males, over the study period, a significant change in ASMR was observed only among the Black population group, whose ASMR increased by 4.2% from 2002 to 2014 (AAPC = 4.2, 95% CI: 2.0–6.5, p-value < 0.001) (Table 2).
Among females, the highest ASMR was reported in the White population group at 10.3 per 100,000 population in 2004; this rate was 1.4, 1.5 and 6.3 times higher than in the Asian (7.3/100,000), Mixed race (6.8/100,000) and Black (1.6/100,000) population groups, respectively (Fig. 6). Among females, a significant change in ASMR was only observed in the Black population group, where the ASMR increased by 3.4% per year from 2002 to 2014 (AAPC = 3.4, 95% CI: 2.0-4.8, p-value < 0.01) (Table 2). The ASMR of females in the other population groups remained stable (Table 2).

Discussion

This is the first study in SA to assess national CRC incidence and mortality trends by population group and sex. We reported overall ASIR of 11.6 to 13.5 and 8.5 to 10.6 per 100,000 population per year over the study period among males and females, respectively. The overall ASMR ranged from 7.1 to 8.9 and from 5.5 to 5.9 per 100,000 population per year among males and females, respectively. The ASIR and ASMR were highest amongst the White population group and lowest amongst the Black population group. However, the Black population group demonstrated a consistent increase in both ASIR and ASMR over the study period.

The age at diagnosis was close to the age at death, with an average age at diagnosis of 61 years and an average age at death of 65 years. These findings suggest that most cases are diagnosed at a late stage and hence that mortality occurs soon after diagnosis; further studies are warranted to delineate this relationship. Screening and early detection of colorectal cancer are recommended, as the survival rate is highly dependent on the stage at diagnosis. The age at diagnosis is comparable with other sub-Saharan African countries and the United States of America, where most cases are diagnosed between the ages of 65 and 68 years [27].

The incidence and mortality were higher in males than in females. Our study found no significant difference in the pattern of CRC age-specific incidence and mortality rates between males and females. The association between sex and CRC is not fully understood. The disparities found in other studies are partly attributed to differences in exposures and risk factors across sex, such as smoking behaviour and hormones [28][29][30]. Further studies are needed to understand the relationship between CRC and sex in order to implement possible interventions to reduce CRC incidence in South Africa.

We found that between 2002 and 2014, the overall ASIR and ASMR increased by 2.5% and 1.3% per year, respectively. As expected for a country undergoing rapid developmental transition and considered to have a medium-to-high HDI, the overall incidence and mortality rates of CRC in SA are increasing with increasing HDI [10,[31][32][33]. The increase in ASIR is linked to changes in individual behaviour, in particular lifestyle changes influenced by westernisation [34]. These lifestyle factors include excessive tobacco smoking, excessive alcohol consumption, lack of physical activity leading to obesity, and poor diet (i.e. consumption of fast food and red meat), among other factors [35][36][37][38][39]. The increase in ASMR might be associated with the quality of care within the country's health system: for example, inefficient resource allocation (human resources, equipment and medical supplies), lack of access to cancer treatment and/or management, and poor health infrastructure may affect mortality rates [7,35,[40][41][42][43][44][45].
Other factors include low or absent awareness, limited screening and early detection (which would otherwise detect and remove precancerous polyps before they progress into cancerous lesions), and late diagnosis, all translating into poor prognosis and ultimately increased mortality rates [46][47][48][49].

The ASIR and ASMR trends differed across population groups. In 2014, the White population group had the highest ASIR and ASMR, followed by the Asian population group, the Mixed race population group, and then the Black population group. The most significant and largest trend changes over time were observed among the Black population group. This is of particular concern, as the Black population group comprises 87% of the total SA population [50]. Traditionally, large disparities in access to healthcare exist among the different population groups in SA. For example, the majority of the White population in SA have access to adequate healthcare services in the private healthcare sector, which has more resources [50,51]. According to the STATS SA general household survey in 2017, 75% of the White population group have private health insurance compared with 10% of the Black population group [52]. Increasing cancer trends and poor access to healthcare services combined could exacerbate the cancer disparity in SA. The disparities noted were also observed in a study conducted in the United States, where the proportions of CRC incidence were 59% for Whites and 30% for Blacks; however, the Annual Percentage Change (APC) for Blacks was 1.6 times higher than that for Whites [53]. The variation in patterns and rates across population groups and sex may be attributed to socioeconomic disparities and differential risk behaviours [54,55].

Stable ASIR and ASMR were observed among the Asian population group and among females of the Mixed race population group over the study period. A stable ASIR but an increasing ASMR was observed among Mixed race males. The Asian and Mixed race population groups account for 11% of the total population; thus, the numbers are likely too low to show a significant change relative to other population groups [50,56,57]. However, the level of access to healthcare services among individuals of the Asian and Mixed race population groups remained fairly similar 4 and 10 years post-democracy (1994), a possible explanation for the rate of CRC diagnosis remaining stable [57,58].

Among the White population group, the ASIR declined during the 2005-2007 period, when cancer reporting from private health facilities was restricted [59,60]. An estimated 16% of the SA population seeks care in the private health sector, of which 92% are Whites, and hence the ASIR trends mimic the reporting levels during and for a few years after the data withholding period [60]. The stable ASMR was expected among the White population group, as 88% of the White population group have access to adequate healthcare services provided by the private healthcare sector [52]. Screening rates are higher than in other population groups, which translates into early detection and treatment, and ultimately better prognosis and lower death rates, hence the stable mortality pattern observed [61,62].

A consistent increase in ASIR and ASMR was observed among the Black population group throughout the study period. This may be explained by the rapid transition of the Black population group's socioeconomic status and changes in lifestyle [63,64]. The increased adoption of western lifestyles and, consequently, dietary changes as well as other risk factors for CRC may have led to an increase in CRC incidence among Black South Africans.
This phenomenon has also been observed in other African countries experiencing rapid westernisation [64,65]. Secondly, the incidence pattern may be attributed to an increase in cancer reporting for the Black population group, which constitutes > 80% of the total SA population and was excluded from accessing quality healthcare services pre-democracy [49,57,66]. The ASMR trends observed may be explained by inequities in socioeconomic status. There is a general lack of access to quality healthcare services, late diagnosis of cancer, and limited treatment options available for poorer South Africans, as only 8% of the Black population group can afford private medical insurance in SA [52,67].

Limitations

This study had several limitations. The incidence data used in this study were derived from the NCR. The NCR is a passive pathology-based cancer registry that collects histologically, haematologically, and cytologically diagnosed cancer cases; therefore, CRC cases not diagnosed through laboratory means are excluded [18]. Between 2005 and 2007, cancer case reporting and data collection from private health facilities were limited due to concerns around confidentiality and the lack of legislative support for cancer reporting/surveillance in SA [60]. This may cause underreporting of CRC incidence disproportionately amongst the population groups that predominantly utilised private healthcare facilities, such as the non-Black population groups [60]. The lack of cancer staging in both the NCR and STATS SA data makes it more challenging to explore disparities in cancer incidence and mortality across population groups. The data for cancer mortality were derived from STATS SA. Underreporting of death was likely across all population groups: an evaluation found that in 2007 the completeness of mid-year estimates and vital statistics was below 90% [68]. This underreporting occurs because of delayed reporting, ill-defined causes of death, and misreporting or misclassification of cause of death [69]. Despite these limitations, this is the first national study to report CRC ASIR and ASMR trends in SA. Improvements in the accuracy of reporting will come with greater triangulation of data sources. Furthermore, the data sources used are the primary sources of cancer statistics in SA, and the CRC ASIR and ASMR reported here are comparable with rates reported by previous regional and international studies [70,71].

Conclusion

The increase in CRC ASIR and ASMR is an indication of the epidemiological transition of disease burden in SA with increasing HDI. The disparities across population groups, in particular the consistent and rapid increase in ASIR among the Black population group, require further in-depth studies to delineate the factors driving the increasing rates. Programs promoting healthier behaviours, such as reducing alcohol intake, reducing tobacco smoking, and increasing physical activity, are recommended to reduce the risk of CRC in SA. The inequalities can be bridged through universal health coverage, targeted screening, early detection, and high-quality cancer care provision, especially for previously disadvantaged population groups. Enhancement of cancer surveillance, in particular population-based cancer registration, is required to produce better quality cancer data for in-depth exploratory research to better inform policies and interventions for cancer prevention and control in SA.
Clinically relevant differences in the selection of toric intraocular lens power in normal eyes: preoperative measurement vs intraoperative aberrometry

Purpose: To assess the value of intraoperative aberrometry (IA) in determining toric intraocular lens (IOL) power in eyes with no previous ocular surgery.
Patients and methods: This was a retrospective data review at one US clinical site of eyes that underwent uncomplicated cataract surgery with toric IOL implantation where standard preoperative and IA measurements were available. Calculated IOL sphere and cylinder powers and orientation were compared based on the measurement method and the postoperative refraction, using both actual and simulated (back-calculated) results. Comparisons were between the surgeon's preoperative calculations, IA measurements, the actual IOL implanted and results from the Barrett toric calculator.
Results: There was no significant difference (p>0.7) in the number of eyes expected to have, or having, a spherical equivalent refraction within 0.50D of the target between the Actual (92%), IA (93%) or Preoperative calculation results (86%). The percentage of eyes with expected residual refractive astigmatism ≤0.50D was significantly higher for the IA vs Preoperative calculations (75% vs 53%, p<0.01). There was no significant difference in expected results between the Actual, IA and Barrett toric calculations (p>0.65).
Conclusion: Modern IOL calculations for sphere produced results comparable to those achieved with IA. The value of IA in determining IOL cylinder power and orientation was more evident when comparing expected results between IA and a preoperative method based on measured total corneal astigmatism than when comparing to expected results from the Barrett toric calculator.

Introduction

The goal of toric intraocular lens (IOL) implantation is to eliminate refractive astigmatism after cataract surgery, but this is not achieved in all cases. A 2012 review by Visser et al 1 noted that about 30% of eyes had more than 0.5D of residual refractive astigmatism after toric lens implantation. More recent studies have noted some improvement in outcomes, but still report about 20% of eyes with more than 0.5D of residual refractive astigmatism. [2][3][4] Further improving outcomes remains a challenge.

One source of variability in the prediction of refractive outcomes after toric lens implantation is preoperative measurement, primarily corneal power measurement. The most commonly used devices to measure corneal power are keratometers, which measure only the anterior cornea. Ignoring the effects of the posterior cornea is likely to result in estimation errors 5 and an increase in residual astigmatism after toric lens implantation. 6,7 Formulas that take into account the astigmatic contribution of the posterior cornea, such as the Abulafia-Koch formula, have increased the likelihood of residual astigmatism of less than 0.5D by nearly 50%. 2

There are three methods in common use to account for posterior corneal astigmatism (PCA): one is to apply a formula to account for PCA, 2 another is to directly measure the PCA preoperatively 4 and the third is to directly measure astigmatism intraoperatively in the aphakic eye. Several formulas have been introduced to account for PCA. One of the most commonly used is the Barrett toric calculator, which relies on a proprietary formula.
Compared with other toric calculators, the Barrett toric calculator has been shown to be as effective or more effective at increasing the prediction accuracy of postoperative residual astigmatism. 2,[8][9][10][11] Reported results using the Barrett toric calculator show 72% 10 to 80% 2 of cases having a residual refractive astigmatism of 0.5D or less. This may indicate a limitation of the method, as formulas can rarely account for atypical eyes. An accurate measurement of the posterior cornea, or of the total corneal power, may further improve results.

Several preoperative devices can now measure posterior corneal astigmatism directly and can incorporate that into a total corneal power calculation. One device in common use is the Pentacam® HR (OCULUS Optikgeräte GmbH, Wetzlar, Germany), which uses a rotating Scheimpflug camera to provide various corneal power measurements at different diameters from the center of the cornea; it is referred to in this manuscript as the Scheimpflug device. Park et al 12 reported that using data from this device may improve results relative to the Barrett toric calculator, while Savini et al 6 suggested that results to date were comparable to those obtained with the Barrett toric calculator. One of the measurements provided by the Scheimpflug device is the total corneal refractive power (TCRP), a ray tracing technique that has been demonstrated in one study to show high repeatability, 13 though it was reported as less reliable in another. 15 Davison and Potvin 4 reported that 80% of eyes had residual astigmatism within 0.5D using TCRP, while Reitblat et al 14 reported a much lower percentage (25%). The large difference in results may be attributed to the fact that it is unclear at what diameter the TCRP value was measured in the Reitblat study. It may also be related to the conflicting reports, noted above, on the consistency of the Scheimpflug device readings.

Intraoperative aberrometry (IA) is a technique that allows for measurement of the power (sphere and cylinder) of the aphakic eye. One of the most commonly used intraoperative aberrometers is the ORA™ System (Alcon Laboratories, Inc., Fort Worth, TX, USA). Davison and Potvin 16 showed comparable outcomes when IA was used relative to preoperative calculations for sphere power, with a possible benefit to considering IA when the difference between the IA and preoperative power calculations was high. Hill et al 17 noted that when selecting sphere lens power, IA resulted in 80% of cases within 0.5D, better than all other methods tested. A recent retrospective study of 30,000 cases noted that IA resulted in 82% of cases within 0.5D of residual spherical equivalent refraction, while preoperative calculations resulted in 76% of cases within 0.5D; the difference was greater when the lens power selected differed. 18

Aphakic astigmatism can be directly measured using IA. In one study, compared with using a toric calculator that did not take into account posterior corneal astigmatism, results using aphakic astigmatism measurements from IA showed significantly more eyes with 0.5D or less of residual refractive astigmatism (78% with IA vs 33% without). 19 Woodcock et al 20 noted that 89% of cases had residual astigmatism within 0.5D when they implanted toric lenses based on IA measurements, compared with 77% when using standard preoperative optical low coherence reflectometry and a calculator that did not take into account PCA.
A large study (3,159 eyes) evaluating astigmatism outcomes from an online toric back-calculator found that the use of IA was associated with less residual astigmatism. 21 These data were limited to eyes exhibiting significant (>0.5D) residual refractive astigmatism after surgery. The main concern with IA is that it relies on the assumption that the intraoperative measurement is a reliable indicator of the postoperative state of the eye. Variables that may influence the accuracy of the IA measurement include eye position, intraocular pressure, effects of the speculum and the effect of ophthalmic viscosurgical devices. 22,23

The purpose of the current study was to determine whether the suggested IOL power and cylinder orientation from IA were superior to values from preoperative calculations for the purposes of toric IOL planning. IA sphere calculations were compared to the surgeon's standard approach, while IA cylinder calculations were compared to calculations from both the Barrett toric calculator and a toric calculation from a standard calculator using total corneal power as the input data.

Patients and methods

This retrospective data analysis was approved by the Wolfe Eye Clinic Institutional Review Board. A waiver of informed consent for the use of chart data was granted, as no protected health information was used for the analyses. Patient confidentiality was preserved, and data were treated consistent with the tenets of the Declaration of Helsinki.

Operative records from June 2017 to May 2018 were reviewed to find eyes with no previous refractive surgery and no significant corneal pathology for which an uncomplicated cataract surgery with toric IOL implantation was successfully completed; cases where a non-toric IOL was implanted but that included any calculation suggesting a toric IOL were also included. Cases had to include use of intraoperative aberrometry at the time of surgery and a manifest refraction performed 21 days or more after surgery. No eyes that had any secondary treatment (IOL reorientation or a refractive procedure such as LASIK) were included.

For each eye, preoperative biometry from the IOL Master 700 (Carl Zeiss Meditec AG, Jena, Germany), referred to in this manuscript as the biometer, and from the Scheimpflug device was required. Sphere power was calculated using the biometer data and the Haigis and SRK/T formulas, with the postoperative refraction target taken as the mean of the two. Long-eye adjustments were made for eyes longer than 27.99 mm. 24 The Haigis and Hoffer Q formulas were used in similar fashion for eyes with an axial length of less than 22.0 mm. Results from the formula that yielded a value closest to plano were chosen when the other formula yielded a myopic result. Preoperative toric IOL calculations were made using the TCRP data from the 3 mm Apex Ring of the Scheimpflug device and a standard toric calculator. Additional calculations using the biometer data as input to the Barrett toric calculator were subsequently made.

During cataract surgery, intraoperative aberrometry was used for each eye to determine the recommended sphere power, toric power and orientation of the IOL to be implanted. This was done in the aphakic eye to determine the recommended cylinder power and orientation, and then again in the pseudophakic eye to refine the orientation of the IOL. The actual IOL implanted and its final orientation (in the case of toric IOLs) were recorded at the time of surgery.
Back-vertex calculations were used to adjust the postoperative refraction based on differences in suggested sphere power between the preoperative and IA calculations. Results were rounded down (closer to plano) to the nearest 0.125D. After adjusting for the target refraction, this provided the expected (simulated) residual spherical equivalent refractive error for the IOL implanted (Actual), the preoperative calculation (Preop) and the IA calculation (IA).

To compare the cylinder results, four different measures were considered. For the Actual group, and for the IA or Preop group when the IOL implanted was the one suggested by IA or Preop, the postoperative refraction was the measure of interest. For the IA or Preop groups when the suggested IOL was not the lens implanted, and for Barrett, the suggested IOL power and orientation were recorded. Then, a method first described by Hill et al 25 to simulate clinical results from toric IOLs was used. In effect, the actual toric IOL implanted was mathematically removed from the eye and the IOL suggested by the results of the IA, Preop or Barrett calculation was mathematically inserted into the eye, yielding a simulated residual cylinder (an illustrative sketch of this vector arithmetic appears below). The cylinder power at the corneal plane, as determined from the Barrett toric IOL calculator, was used for the remove/replace operation; that is, the ratio between the IOL cylinder power at the IOL plane and the cylinder power at the corneal plane for the given eye was applied.

Cylinder analysis consisted of considering the percentage of eyes within 0.50D, 0.75D and 1.00D of residual cylinder (actual or simulated) by calculation method. The IA cylinder power and orientation were compared to those from both the preoperative calculation and the Barrett toric formula. Of primary interest was which method was expected to result in the highest percentage of eyes within 0.50D. Two other comparisons were made. First, calculation results were compared for those eyes with high actual postoperative cylinder, as these were considered refractive surprises. Second, calculation results were analyzed for eyes where the vector difference between the Barrett or Preop expected refractive cylinder power and IA was more than 1.0D, as these were cases where the cylinder power difference was considered significant. In these two specific comparisons, the intent was to determine whether IA produced results consistently better than the other methods; this would suggest that IA might have a positive benefit in terms of reducing outliers.

The measured and calculated data were tabulated in Excel spreadsheets and then imported into an MS Access database for data checking and preliminary analyses (both Microsoft, Redmond, USA). Detailed statistical analysis was performed using the Statistica data analysis software system, version 12 (TIBCO Software Inc., Palo Alto, CA). Categorical comparisons were made using a Chi-squared test, and parametric data were evaluated using an analysis of variance (ANOVA). The level of statistical significance was set at p<0.05.

Results

The retrospective chart review identified 123 eyes in the specified time period with the relevant planning and postoperative refractive data available. Five of these eyes (4%) could not be measured with IA because of excessive movement and/or small pupils, leaving 118 eyes for analysis. Only two lens models were included in the data set; 100 eyes had a toric IOL implanted, while the remaining 18 eyes received a non-toric IOL (SN6ATx and SN60WF, respectively, both Alcon, Fort Worth, Texas, USA).
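Before turning to the detailed results, the following is a minimal illustrative sketch of the "remove and replace" arithmetic described in the Methods, using the standard double-angle vector representation of astigmatism (a cylinder of magnitude C at axis θ maps to the vector (C cos 2θ, C sin 2θ)). This is not the study's actual code: the function names, the example numbers, and the sign conventions are simplifying assumptions, and the per-eye IOL-plane to corneal-plane conversion is assumed to have been applied already.

```python
import math

def to_vector(cyl, axis_deg):
    """Double-angle vector form of a cylinder: magnitude cyl at axis axis_deg."""
    rad = math.radians(2 * axis_deg)
    return (cyl * math.cos(rad), cyl * math.sin(rad))

def to_polar(x, y):
    """Convert a double-angle vector back to (magnitude, axis in degrees)."""
    cyl = math.hypot(x, y)
    axis = math.degrees(math.atan2(y, x)) / 2 % 180
    return cyl, axis

def simulate_residual(postop_cyl, postop_axis,
                      actual_cyl_cp, actual_axis,
                      suggested_cyl_cp, suggested_axis):
    """Mathematically remove the implanted toric IOL's cylinder correction
    and insert the suggested IOL's correction instead. All cylinder powers
    are assumed to be expressed at the corneal plane."""
    rx, ry = to_vector(postop_cyl, postop_axis)            # measured refraction
    ix, iy = to_vector(actual_cyl_cp, actual_axis)         # implanted IOL correction
    sx, sy = to_vector(suggested_cyl_cp, suggested_axis)   # suggested IOL correction
    # Removing the actual IOL adds its correction back to the residual;
    # inserting the suggested IOL subtracts its correction.
    return to_polar(rx + ix - sx, ry + iy - sy)

# Hypothetical example: same cylinder power, a 10-degree orientation difference.
cyl, axis = simulate_residual(0.25, 90, 1.03, 85, 1.03, 95)
print(f"simulated residual: {cyl:.2f} D at {axis:.0f} degrees")
```

A useful sanity check on this construction is that when the suggested IOL equals the implanted IOL, the simulated residual reduces to the measured postoperative refraction, which is how the Actual group was handled in the study.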
The expected spherical equivalent refractive error for each calculation method was determined as described in the methods. Table 1 summarizes the results, comparing the actual postoperative residual refractive error and the expected residual refractive errors from the preoperative and IA calculations. The number of eyes within 0.25D and within 0.50D of the intended spherical correction was higher for the IA calculation relative to the preoperative calculation; the difference was statistically significant for the 0.25D values (Chi-squared test, p=0.01) but not for the 0.50D values (Chi-squared test, p=0.08). The mean expected residual error differed overall by 0.13D, with the Preop group having a slightly more myopic mean; this likely reflects a more conservative Preop IOL selection (least minus). This was also apparent in the slightly higher likelihood of a hyperopic result (>+0.25D spherical equivalent) with the IA versus Preop calculations (13 vs 8), but the difference here was not statistically significant (Chi-squared test, p=0.25). The mean absolute expected residual error differed by a maximum of 0.08D between methods.

The number of outliers for each method was low and similar. There were 9 eyes with Actual spherical equivalent refractive errors greater than 0.5D from intended. In only one of these cases did the IOL power determined by preoperative calculation produce an expected refraction less than or equal to 0.50D. This was similar to IA, where the IOL power determined also would have resulted in an expected refractive error less than or equal to 0.50D in only one case. Table 2 shows the differences between the preoperative and IA calculated sphere powers, along with a comparison of the results by method, including where the difference in the expected residual refraction was less than or equal to 0.25D for the preoperative and IA methods. Note that in 42% of eyes (50/118) there was no difference in the IOL sphere power determined by the preoperative and IA calculations.

Residual cylinder was available from the results of the actual IOL implanted, and from simulated results based on implanting the IOLs suggested by the preoperative calculation, the Barrett toric calculation and IA. Table 3 summarizes the expected residual refractive errors by calculation method if the recommended IOL was implanted at the orientation determined by the different calculators. The expected percentage of eyes with residual cylinder ≤0.50D was significantly higher for the IA calculation relative to the Preop calculation (Chi-squared test, p<0.01). There was no statistically significant difference between the IA calculation and the other two methods (p>0.65 in both cases). Similarly, the percentage of eyes with residual cylinder ≤0.75D and ≤1.00D was also significantly higher for the IA calculation relative to Preop (p<0.01 and p=0.03, respectively), while there was no statistically significant difference between IA and the other two methods (Actual and Barrett, p>0.11 in all cases).

The differences between IA and the three other methods were considered as follows. The choice of lens cylinder power was either the same or different. If the planned orientation angle for any two methods differed by less than 5 degrees, the orientation was considered the same; otherwise it was considered different. Table 4 shows the results of comparing the IA-suggested cylinder and orientation to the Preop, Barrett and Actual cylinder and orientation.
Note that the close match between IA and Actual (70% of eyes with the same IOL power at the same orientation) was a function of the fact that IA was the primary method used to determine the implanted cylinder power and orientation at the time of surgery.

From a practical standpoint, the degree to which these changes in IOL cylinder power and orientation were likely to affect clinical outcomes was of interest. The magnitude of simulated or actual residual cylinder was considered for this purpose. If any two methods compared in Table 4 resulted in a difference in cylinder magnitude of 0.25D or less, they were considered the same; test-retest variability in refractive cylinder is higher than this. Otherwise, the method that resulted in the lowest residual cylinder was considered the better choice. Table 5 summarizes the results of this analysis; each line in this table corresponds to an individual cell in Table 4. As can be seen, in 93% of cases (110/118), the results for IA and the actual lens implanted were expected to be the same. In the remaining 8 cases, there was no clear bias towards one method or the other. A similar comparison showed that 65% (77/118) of the results for the IA and Barrett calculations were expected to be the same; the remaining cases were equally distributed between the IA and Barrett calculations. It is apparent from the table that IA appeared to provide a slightly better result than Barrett when only the lens orientation differed (10 cases to 4), but this was not a statistically significant difference (p=0.25). The differences between IA and Preop were more pronounced. Only 52% (61/118) of the results for the IA and Preop calculations were expected to be the same. When they differed, the IA calculation was statistically significantly more likely to produce a better outcome (Chi-squared test, 41:16 vs 29:29, p=0.02).

The potential for preventing refractive cylinder outliers was another practical consideration of interest. Three of the 118 eyes (2.5%) had an actual residual refractive astigmatism of 1.0D or higher after toric IOL implantation. In two of these cases, the IA calculation was used to select the IOL implanted. In the other case, the IA calculation would have been expected to produce a slightly better result (1.00D residual cylinder instead of 1.25D). Finally, all cases where the difference in the calculated residual cylinder magnitude between IA and the other two methods (Preop or Barrett) was 1.0D or more were identified. In 7 cases, the IA calculation differed from Preop by 1.0D or more; the IA calculation appeared better in 6 of these. In 5 cases, the IA calculation differed from Barrett by 1.0D or more; the IA calculation appeared better in 4 of these. The numbers show a trend but are too small for reliable statistical analysis.

Discussion

The current study was designed to provide a clinically relevant examination of differences in IOL sphere power, cylinder power and orientation when using IA versus standard preoperative calculation methods. The percentage of eyes with an expected residual spherical equivalent refraction within 0.5D was 93% using IA; this is higher than reported by Hill et al (80%) 17 and Cionni et al (82%). 18 The mean absolute expected residual error difference between Preop and IA was 0.07D, similar to the value reported by Cionni et al. 18
The slightly higher number of eyes within 0.25D with the IA calculation appears to be a function of targeting emmetropia with IA relative to a least-minus target with the Preop calculation, evident in the slightly higher number of hyperopic outcomes with IA. The current study did not observe a notable reduction in outliers (>0.5D absolute error) in spherical equivalent results with IA vs Preop calculations.

There was no clinically significant difference between the sphere power suggested by the Preop calculation and IA (Table 2). This is consistent with results reported by Davison and Potvin. 16 The current study found that the number of eyes within 0.50D of the intended spherical equivalent refraction was not significantly different between the Preop and IA groups, though 7% more eyes in the IA group had an expected spherical equivalent refraction within 0.5D. This appears consistent with Cionni et al, 18 where a 6% increase in the number of cases with residual spherical equivalent refraction within 0.5D was found in the IA group when compared with preoperative methods; the larger data set in that study resulted in the observed difference being statistically significant.

In the current study, residual refractive astigmatism of 0.50D or less was expected in 75% of eyes based on IA, 75% of eyes based on the Barrett toric calculator and 53% of eyes using the Preop method (Table 3). The IA and Barrett results reported here are consistent with results reported in the literature, with 78% 19 and 72% 10 of eyes having residual astigmatism within 0.5D for IA and Barrett, respectively. The percentage for the Preop method is lower than was reported by the same authors in a previous study, 4 but higher than has been reported for the Scheimpflug device in a second study. 14 Variability remains a concern with the Scheimpflug device, 15 which may explain some of the larger differences between the IA and Preop calculations in the current study. The IA calculations produced a statistically significantly higher percentage of eyes with an expected residual cylinder of 0.50D or less relative to the Preop calculations, though the percentage of eyes with an expected residual cylinder of 0.50D or less was equivalent to that calculated for the Barrett toric calculator. Where Barrett and IA differed with regard to orientation angle, the IA measurement appeared more likely to be correct, though again this was not statistically significant. The use of IA in the pseudophakic eye to refine the final orientation of the toric IOL may have been important in this regard. Finally, there was no evidence that IA could consistently prevent outliers (refractive surprises), but there did appear to be a greater likelihood that IA was correct when IA and the other methods produced largely different calculations.

There are limitations to the current study. It was noted in the paper by Hill et al 25 that the simulated calculation was always slightly worse than actual calculations; simulated results were systematically about 0.2D higher than the actual residual cylinder. This "remove and replace" technique appears to slightly favor the actual method used to determine the cylinder power. The current study included eyes where the majority of IOL implants were based on IA. This may result in an overstatement of the advantages of IA. One alternative to this approach is to conduct a prospective randomized study using contralateral eyes, but such a study would also have limitations.
Other limitations include the fact that the study was retrospective in nature, and that postoperative IOL orientation was not available to compare intended IOL orientation to actual orientation at the time of the refraction. Regarding the latter, the literature indicates that significant toric IOL misorientation is relatively rare. 1

It is worth noting that for both sphere and cylinder, the highest percentage of eyes within 0.5D of the target was achieved with the Actual IOL implanted. This may be a result of the limitations in using a "remove and replace" methodology for analysis. However, it seems more likely that it reflects the fact that surgeon judgment related to the inputs from various devices remains an important deciding factor when choosing a toric IOL for surgery. Balanced against the use of measurements/calculations from various devices are the cost and time associated with collecting them from each device, and the cost of the devices themselves.

Conclusion

Modern IOL calculation formulas for sphere appear to produce results comparable to those achieved with IA. However, there may be some value in using IA to determine IOL cylinder power and orientation. This is most apparent when comparing results between IA and a preoperative method based on measured total corneal astigmatism. The relative benefit of IA is less apparent when results from the IA calculation are compared to those expected with the Barrett toric calculator. The consideration of both preoperative and IA toric IOL planning produced the best overall results in astigmatic eyes.
Social inequalities in blindness and visual impairment: A review of social determinants

Health inequities are related to social determinants based on gender, socioeconomic status, ethnicity, race, living in a specific geographic region, or having a specific health condition. Such inequities were reviewed for blindness and visual impairment by searching for studies on the subject in PubMed from 2000 to 2011 in the English and Spanish languages. The goal of this article is to provide a current review of how inequities based specifically on the aforementioned social determinants of health influence the prevalence of visual impairment and blindness. With regard to gender inequality, women have a higher prevalence of visual impairment and blindness, which cannot be explained by age or access to services alone. Socioeconomic status, measured as higher income, higher educational status, or non-manual occupational social class, was inversely associated with the prevalence of blindness or visual impairment. Ethnicity and race were associated with visual impairment and blindness, although there is general confusion over this socioeconomic position determinant. Geographic inequalities in visual impairment were related to income (of the region, nation or continent) and to living in a rural area, and an association with socioeconomic and political context was suggested. While inequalities related to blindness and visual impairment have rarely been specifically addressed in research, there is still evidence of the association of social determinants with the prevalence of blindness and visual impairment. Additional research should be done on the associations with intermediary determinants and with socioeconomic and political context.

Health inequity refers to differences or inequalities in health among social groups that are unnecessary, avoidable, unfair, and intolerable. [1] These inequalities are related to social determinants based on gender, socioeconomic status, ethnicity, race, living in a specific geographic region, or having a specific health condition. Inequality, poverty, exploitation, violence, and injustice are causes of illness and death of the poor and marginalized. [2] However, instead of focusing solely on reducing global poverty to improve health equity, greater attention should be given to improving the socioeconomic conditions of global society. [3] Health inequalities can be reproduced at any level, associated with the effect of the relative versus absolute socioeconomic position of individuals and the patterning of the social gradient in health. [4] Indeed, there is a common social gradient across global society: the lower the socioeconomic position of an individual, the poorer their health. [5]

Social determinants of health are structured along three major levels: structural determinants focusing on socioeconomic and political context (governance, macroeconomic policies, social policies, public policies, and culture and social values); socioeconomic position structural determinants (class, power, prestige, and discrimination); and intermediary determinants. [5,6] In this review, we examine how the socioeconomic factors of gender, income, education, occupation, and ethnicity/race related to an individual's social position influence visual impairment and blindness. A review on socioeconomic status and blindness [7] was published 10 years ago; it focused only on blindness, even though there are six times as many people with visual impairment.
However, previous literature usually did not discuss how social inequalities influence visual health. There is literature on visual impairment and blindness that stratified outcomes by income, education, employment status, social class, gender, and race/ethnicity. Geopolitical area is also considered to have an influence on the socioeconomic and political context. The goal of this article is to provide a current review to understand how inequities based on socioeconomic determinants of health influence the prevalence of visual impairment and blindness.

Materials

Literature was searched on PubMed using combinations of the following two groups of keywords: ocular outcome (visual impairment, blindness, cataract, diabetic retinopathy, glaucoma, eye health, eye care, ophthalmology, and prevalence) and structural determinants of socioeconomic position (socioeconomic status, social class, income, educational status, gender, poverty areas, ethnic groups, race, inequality, disparity, inequity, and access). Causes of blindness and visual impairment were included as keywords to capture publications that produce secondary results on visual impairment or blindness. The search included original population-based studies, reviews, and meta-analyses from 2000 to 2011 in the English and Spanish languages. No other limitations were specified. There were 565 publications found: 101 for gender, 53 for income, 42 for education, 12 for social class, 109 for inequality, 109 for socioeconomic factor, and 95 for race/ethnicity. A total of 312 publications were found for visual impairment and 253 for blindness prevalence outcomes. Three reviewers independently examined the title and the abstract of each article, classifying the articles into six fields: gender, income, education, employment status and social class, geographic, and race/ethnicity. The full text and tables of all the articles that had results on visual impairment or blindness outcomes were reviewed. Two inclusion criteria were evaluated: (1) presenting empirical findings related to outcomes of prevalence of visual impairment or blindness in population-based studies of adult populations; and (2) a stratifying measure of gender, income, educational level, employment status and social class, or race. Through a manual search of the references of these studies, an additional 16 articles were identified. Given that the publication dates included in the review range from 2000 onward, additional material was sought from study references and from literature collections assembled in earlier years by the authors when the articles found in this review did not fill gaps in previously documented knowledge.

Results

A table has been generated that summarizes the literature review of publications in the last 12 years. A total of 23 studies were found that stratified the results of prevalence of blindness or visual impairment by the structural social determinants of health: gender, income, educational level, employment status, social class, and ethnicity/race [Table 1]. Although important inequalities by sex in the prevalence of blindness have been reported, no gender review has been published. [8] A meta-analysis conducted by Abou-Gareeb ten years ago found that women accounted for nearly two-thirds of the population with blindness. [6] After age adjustment, the overall odds ratio (OR) of blind women to men was remarkably consistent by geographical area, being 1.39 for Africa, 1.41 for Asia, and 1.63 for industrialized countries.
Research has continued to highlight this gender inequity with respect to blindness and visual impairment. [9,10] Later studies on the prevalence of visual impairment and blindness reported a prevalence ratio of more than 1.5 for women in high-income countries, which is, surprisingly, higher than in low-income countries. [11][12][13][14] Women generally have a longer life expectancy than men. Since many eye diseases are age-related, we would expect women to have a higher burden of visual impairment and blindness. [6] However, even after age adjustment, inequities still persisted. [15]

Access to the health care system was an intermediary determinant and played a role in exposure and vulnerability. [6] Through the analysis of women's access to services, differences between countries at low, middle, and high levels of the Human Development Index (HDI) were observed. [16] In high HDI countries such as the United States (US), Australia, and Germany, women reported more visits to the eye care specialist. [17][18][19][20] All age groups of women in the US had better ocular health care utilization than men for all three racial/ethnic groups. [21] No differences in eye care service utilization were found between men and women in middle HDI countries of Asia, such as Oman, and similar access to cataract surgical services was noted in Latin America. [22,23] In low HDI countries of Africa and Asia, access to cataract surgical services was lower for women than for men. [13,[24][25][26][27][28][29] Reasons for not seeking eye care showed different gender patterns for people with some visual impairment: "no need" was the main reason for men and "cost/insurance" for women in the US. [30] The indirect cost of services was a more relevant barrier for women in Ethiopia. [31] Attitudinal differences in seeking health care were also suggested as reasons to explain gender differences in access to eye health care services. [17,32]

Studies in France and the US showed that people with low vision had lower incomes. [33,34] In low HDI countries, such as Kenya, the Philippines, and Bangladesh, multivariate analyses showed that case participants were consistently poorer than controls when assessed using three different measures of poverty, even after adjustment for health and social support indicators. [35] Only one article evaluated the risk of blindness in relation to individual income, and it found that low income was associated with blindness in India. [36] Visual impairment, even unilateral, was associated with household income (>$75,000 a year) in the US, and high rates of both blindness and visual impairment were found in the elderly of the US. [37][38][39] Increased risk of visual problems was documented in the impoverished neighborhoods with the worst economic indicators in the US and Australia (we must bear in mind, however, that these data are relatively old). [40,41] In addition, the prevalence of blindness and visual impairment was higher in low-income countries than in high-income countries. [42] There was also a gradient between the gross domestic product (GDP) of a country and its prevalence of blindness. [43] Although only a few articles addressed the association of blindness or visual impairment with income, the results were consistent in that lower income was associated with visual problems. None of the articles analyzed in this review stratified the association of income with visual outcomes by sex.
Lower levels of education were associated with a higher prevalence of visual impairment in Australia, Taiwan, and the US, as well as of blindness in the US, India, and China. [38,39,[44][45][46][47] In 1991, an inverse association between years of education and the prevalence of visual impairment and blindness was observed, and prevalence increased at a much faster rate when illiteracy in India, Pakistan, Nigeria, and the US was taken into consideration. [34,38,39,44,48,49] A stronger association with the level of education was found for bilateral (as compared with unilateral) visual impairment in the US. Reasons for not seeking eye care in the US varied by educational level, with "no need" the main reason for the highly educated and "cost/insurance" for the less educated. However, highly educated individuals (32%) still reported "cost/insurance" as a reason for not seeking eye care, although this factor decreased among the high-income population (22%). [28]

People with visual impairment in Europe were at higher risk of not having a paid job, being unemployed, suffering from permanent disability, belonging to a manual social class (with less job satisfaction), having less opportunity to develop new skills, having less recognition for their work, and having an inadequate salary. [50] In France, individuals without visual problems had a chance of having a paid job five times greater than that of blind people and twice that of those with low vision. [30] In India, people without work had twice the risk of visual impairment. [44]

The concept of social class derived from occupation was also associated with health indicators. For coding social class, each individual was assigned to their occupation, and each occupation was assigned to one of six social classes; the first three corresponded to non-manual workers, and the last three to manual workers. Social class based on occupation integrates the level of training required for a job, income, and the level of responsibility. [51] In Britain, the risk of poor vision was associated with social class (unskilled manual workers). For each increment in social class grade on a scale of I through V, the risk of poor vision increased by 28%, with a prevalence of 1.9% in social class I (professional) and 5% in social class V (unskilled manual workers). Additionally, children with manual social class fathers at the time of birth had an increased risk of far and near visual impairment in adulthood. [52] However, the prevalence of low vision and blindness for workers was similar to that of the unemployed in the US. [37]

After adjusting for age and socioeconomic position, no association with visual impairment was found among Hispanics, African Americans, and Caucasians in the US. [34] However, Hispanics had a higher incidence of visual impairment than that reported in non-Hispanic White persons, and the highest reported in a population-based study in the US. [53] In Australia, bilateral visual impairment and blindness were found to be four to seven times more frequent in the indigenous population. [54] However, it is often difficult to separate the prevalence of visual impairment or blindness that might be truly inherent to a racial or ethnic population from the effect of the social determinants to which a specific racial group is exposed owing to low socioeconomic position and marginalization.
Many studies in the US, but not all, showed no significant differences between ethnic or racial minorities and Caucasian populations, although differences were not always adjusted for socioeconomic position variables. [52,55,56]

Geographic inequalities were found among continents, countries, and regions within a country. [57] In 2000, Africa and India bore the highest prevalence of blindness, followed by the rest of Asia, China, and Latin America. [40] However, Asia led the burden of disability-adjusted life years (DALYs), and cataract was the principal cause of blindness, with 95% of the burden in low-income countries. [58] Other studies showed that 87% of the visually impaired and 90% of blind people lived in low-income countries, but differences in prevalence persisted between countries in the same region or continent and were inversely correlated with the GDP per capita of each country. [40,41,59,60] In France, geographic inequalities were also found between regions within the country for age-adjusted visual impairment and blindness prevalence. [61] Geographic inequalities were found after occupational social class adjustment, and they were evident for age-adjusted low vision between regions of Nigeria. [45] Differences in the prevalence of visual impairment were similarly found for five states in the US. [62] In Singapore, an ecologic effect of socioeconomic determinants of the community was found to have an independent association with visual impairment, even when considering individual socioeconomic determinants. [62] In Canada, ecologic research found that the prevalence of blindness registration correlated with the median household income of districts after evaluating the five geopolitical regions of the country. However, when the model did not consider geopolitical region, median household income was not statistically correlated. Those results suggested that the geopolitical region played a role in blindness independent of district income. Moreover, income derived from government transfer payments had a negative correlation with blindness registration prevalence. [63]

Discussion

The review produced four main findings: (1) women had a higher prevalence of visual impairment and blindness, which was not fully explained by age or by access to services; (2) socioeconomic status, measured as higher income, higher educational status, or non-manual occupational social class, was inversely associated with the prevalence of blindness or visual impairment; (3) ethnicity and race were associated with visual impairment, although other social determinants of health can be associated; and (4) geographic inequalities in visual impairment were related to income (of the region, nation or continent) and to living in a rural area; an association with socioeconomic and political context was additionally suggested.

Evidence of the association between socioeconomic position determinants and the prevalence of visual impairment and blindness was found in this review, even though this relationship has rarely been addressed in research (23 articles in the last 12 years). However, the effect of an individual's socioeconomic position on his/her health may not only be direct, but may also emerge from intermediary determinants that remain pending investigation. [4] Possible social determinant pathways that lead to the social gradient should be explored. [3]
Future research should measure how exposure or vulnerability explains the pattern of inequalities for a specific social stratification such as educational level or income. A few articles analyzed in this review were, upon publication, categorized under social determinants of health, although the tagged determinant was an intermediary factor, mostly related to accessibility of services. Additionally, some of the articles considered only psychosocial factors, which are related to occupational health and environmental factors, as determinants producing inequalities. Although psychosocial consequences of socioeconomic inequality were an important intermediary determinant, interpretation of links between socioeconomic status and health must begin with the structural causes of inequalities. [4] An "ecosocial" approach is needed to better understand the mechanisms by which differences are produced, by integrating social and biological factors in a dynamic, historical, and ecological perspective. [64] A possible effect of the first level of social determinants of health (the determinants of socioeconomic and political context) might be considered for future research, since the few results produced thus far consistently suggested this concept. [58,60]

Perceived gender discrimination by women was associated with their poor health outcomes. [65] A greater awareness of gender discrimination behaviors could explain differences between the outcomes of men and women if the slightly greater gender inequalities in prevalence are confirmed for high-income countries versus low-income countries. In addition, gender discrimination patterns affected decision-making authority, which influenced not only access to services but also differences in psychosocial and environmental risk exposure. [66] Further research is necessary. Women also accumulated more working hours than men, and their additional domestic chores negatively affected their health. [67] This could also influence the risk of diabetic retinopathy, glaucoma, and cataract if those health issues were related to stress. [68] More research is needed to identify whether perceived gender discrimination, decision-making authority, and working hours are associated with gender inequalities in blindness and visual impairment. A study performed in Saudi Arabia, a country with significant gender discrimination, found an extremely high gender inequality in visual impairment between men and women attending primary care (lower for women), and there was lower registration for government allowances provided for blindness in Kuwait. [69,70] It is worth noting that with other pathologies, women generally put less therapeutic effort into seeking treatment with regard to organ transplants, coronary problems, emergency treatments, and pharmaceutical spending. [71][72][73][74][75] More research would be needed to assess whether there are gender differences in therapeutic effort regarding ophthalmological procedures, and if so, whether those differences could explain why women, despite having had more access to a specialist in high-income countries, had a higher prevalence of blindness and visual impairment than men. Sex differences in the distribution of pathologies that cause blindness and visual impairment have not been broadly described. [76,77] (It should be clarified that sex refers to the biological construct/characteristics, whereas gender is a social construct concerning behaviors, roles, and interactions between men and women.)
Genetic, hormonal, and other biological factors associated with ocular pathologies could put women at greater risk of blindness and visual impairment. Reporting sex-stratified data in publications could allow more accurate knowledge of gender inequalities. However, biological factors cannot be considered alone. [78] Income, educational level, and social class measure socioeconomic position and act in similar, although not equal, ways to produce visual impairment and blindness. More research should be done to understand how each of these measures performs and how its influence on visual outcomes varies. Although only a few articles addressed the association of blindness or visual impairment with income, the results consistently showed a correlation between lower income and higher risk of visual problems. However, even considering that women were at a significantly greater risk of developing visual impairment or blindness, none of the articles analyzed in this review stratified the association of income and visual outcomes by sex. More articles disaggregated the prevalence of visual impairment and blindness according to level of education than according to income. Educational level was associated with knowledge and awareness of eye conditions and eye care services and with poor eye-care behavior, but education might have a different effect than income on seeking eye care when needed. [79] However, more research is needed to identify whether lack of knowledge and poor behavior explain this association or whether other socioeconomic factors are implicated. Complex behavior in seeking and receiving eye care services may be embedded in socioeconomic determinants, and more research needs to be done to confirm those findings. [80][81][82] The father's social class at the time of an individual's birth had a direct effect on that individual's embodiment of social class and affected the middle-age risk of developing visual impairment. [51] However, more research is required to measure the direct or reverse effect between blindness and socioeconomic position, as well as the role of gender. While many studies demonstrated that ethnic or racial groups have a different prevalence of visual impairment, often due to specific eye diseases, it can be difficult to ascertain how much is intrinsic to the race or ethnicity and how much is associated with socioeconomic position or the lack of eye care for various reasons. More sophisticated research will be needed to settle this question. Eye care inequities exist in a variety of ways around the world. While some studies suggested that eye care access is a major barrier even in the presence of national health care systems, substantial numbers of subjects did not utilize services where they were available. [37,53,83] In these instances, the lack of education and, perhaps more importantly, the lack of basic literacy and/or knowledge of eye diseases provide some explanation. [84] Poverty by itself or combined with educational factors (social deprivation) is another reason why many patients cannot access services. As a final note to the discussion, a limitation of the review was that most of the publications were not tagged with adequate keywords for searches on inequalities. As a recommendation, future publications should be classified in terms of inequities or social determinants in order to facilitate knowledge-sharing of work that has already been produced.
More research and interpretation are needed to better understand the social and biological mechanisms that produce the social inequality patterns in the prevalence of blindness and visual impairment. Publications, even those not focused on inequalities, should stratify and interpret findings separately by sex and socioeconomic status to provide a better understanding of gender inequalities. Associations with determinants of the socioeconomic and political context should be further explored.
2018-04-03T02:38:06.899Z
2012-09-01T00:00:00.000
{ "year": 2012, "sha1": "2fe77968b575e3582b0ebdcfd237f23246d5ab07", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/0301-4738.100529", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "ac741bb4b0b66ae68707037905561039a5899cff", "s2fieldsofstudy": [ "Medicine", "Sociology" ], "extfieldsofstudy": [ "Medicine" ] }
252546278
pes2o/s2orc
v3-fos-license
Immune response after COVID-19 vaccination among patients with chronic kidney disease and kidney transplant Graphical abstract Introduction Patients with chronic kidney disease (CKD), including kidney transplant (KT) recipients and those on dialysis, represent a special subgroup of patients requiring protection during the severe coronavirus disease 2019 (COVID-19) pandemic [1,2]. Patients with CKD usually have a compromised immune response [3,4] and require higher vaccine dosages and more frequent dosing because the vaccine response is short-lived and weaker, especially among patients undergoing dialysis [5,6]. Related reports of vaccination among patients with CKD mainly considered mRNA vaccines [7,8]. Recent reports described seroconversion rates among patients undergoing dialysis receiving two doses of the BNT162b2 vaccine (Pfizer BioNTech) that were lower than those of controls [9,10]. One study reported a weak antibody response of patients on HD to a viral vector COVID-19 vaccine [11]. In Thailand, the main vaccines available are CoronaVac (Sinovac Life Science, Beijing, China), the BBIBP-CorV vaccine (Sinopharm) and ChAdOx1 nCoV-19 (Oxford-AstraZeneca). Zhang et al. conducted a pilot, prospective study to survey the safety and humoral response to inactivated SARS-CoV-2 vaccine among 45 patients with CKD receiving a two-dose immunization with inactivated vaccines (Sinovac and Sinopharm). They showed that the majority (84 %) of patients with CKD acquired detectable neutralizing antibodies, at levels lower than those of controls [12]. Materials and methods This prospective cohort study included four different patient groups: patients with CKD, those on hemodialysis (HD) and continuous ambulatory peritoneal dialysis (CAPD), recipients of KT, and a control group without kidney failure from the Faculty of Medicine, Vajira Hospital, Navamindradhiraj University. Participants were enrolled between July and December 2021. The inclusion criteria were CKD stages 3-5 (eGFR < 60 mL/min/1.73 m²), patients with stage 5 CKD undergoing HD or CAPD, and KT performed > 3 months earlier. The healthy control group consisted of volunteer healthcare workers with eGFR ≥ 60 mL/min/1.73 m². Participants in every group were 18-90 years old. Every participant received the same vaccine type in both the first and second doses. The exclusion criteria included allergy to the components of the vaccines, inability to receive the vaccine according to schedule, fever or concomitant serious illnesses, and side effects from the first dose of vaccination. Trial procedure The enrolled patients received the COVID-19 vaccine according to the vaccination protocol approved in Thailand, that is, two doses of the ChAdOx1 nCoV-19 vaccine at a 12-week interval, CoronaVac at a 3-week interval, or BBIBP-CorV at a 4-week interval. All participants provided a blood sample for antibody and cellular immunity measurement at the following time points: T0 (before the first injection), T1 (before the second injection) and T2 (12 weeks after the second injection). For participants who developed COVID-19, immunogenicity analysis was additionally performed at one and three months post-infection. Determination of antibodies against SARS-CoV-2 All SARS-CoV-2 antibody assays were performed and analyzed using the EUROIMMUN Analyzer I-2P® (Euroimmun Medizinische Labordiagnostika, Lübeck, Germany) at the Central Laboratory and Blood Bank, Faculty of Medicine, Vajira Hospital, Navamindradhiraj University. Controls and calibrators were used in the test kit for each run.
The ratios of diluted serum, optical density and cut-off values in this study were used according to the manufacturer's instructions. Quantitative determination of anti-SARS-CoV-2 S1 (IgG) The anti-SARS-CoV-2 S1 (RBD) IgG QuantiVac ELISA kit (Euroimmun, Lübeck, Germany) was used for quantitative determination of human IgG antibodies against the S1 domain of the SARS-CoV-2 spike protein in serum samples (see Appendix). To detect the presence of neutralizing antibodies (NA) blocking the binding of the S1 receptor-binding domain (RBD) of SARS-CoV-2 to ACE2 receptors in the plasma samples, an ELISA-based surrogate virus neutralization test was used (SARS-CoV-2 NeutraLISA; Euroimmun, Lübeck, Germany) (see Appendix). Assessment of the T cell response by quantitative determination of interferon-γ release by SARS-CoV-2-specific T cells Cellular immunogenicity was measured by quantifying the secretion of interferon gamma (IFN-γ) from peripheral blood mononuclear cells upon SARS-CoV-2 glycoprotein stimulation and subsequent determination of released IFN-γ by ELISA (Euroimmun, Lübeck, Germany) (see Appendix). Participants Demographic information, including age, sex and body mass index, was obtained at first enrolment. The vaccine type, date of vaccination, use of immunosuppressive agents, number and types of comorbidities and history of transplantation were recorded. Primary outcomes included humoral and cellular responses after COVID-19 vaccination at T0, T1 and T2, as measured by SARS-CoV-2 spike S1-specific IgG antibody levels and the surrogate virus neutralization test. The percentages of responders in the different cohorts (CKD, HD, CAPD and KT) were compared with the controls, within and between cohorts, to define the seropositivity rate (individuals who developed detectable anti-SARS-CoV-2 antibodies). The secondary outcomes were rates of adverse events (AEs) after vaccination and the incidence of COVID-19 breakthrough infection after vaccination, including illness severity. Statistical analysis The sample size calculation is provided in detail in the Appendix. Values are presented as median (interquartile range) for continuous variables. Antibody levels were compared between timepoints and analyzed using the paired sample t-test or Wilcoxon matched-pairs signed-ranks test. Categorical variables were reported as frequencies and percentages. Proportions were compared using Fisher's exact test (or the Kruskal-Wallis test as appropriate). Correlation between two continuous parameters was calculated using Spearman's correlation. Logistic regression models were used in both univariate and multivariate analyses, and statistical significance was set at p < 0.05. Monitoring of adverse events AE assessments, including vaccine and drug side effects after the first and second vaccine doses, were monitored. Baseline characteristics Between June 2021 and December 2021, 212 patients with CKD at various stages and controls were vaccinated with the COVID-19 vaccines CoronaVac, BBIBP-CorV, or ChAdOx1 nCoV-19 (AZD1222). In total, 31 patients (15.20 %) had underlying heart problems, and none of the patients had lung or liver diseases. Fourteen patients were lost to follow-up. Eleven patients died during the study period (COVID-19, eight; underlying diseases, two; sepsis, one). Finally, 212 patients (104 men, 49.06 %) with a mean age of 54.8 ± 16.07 years were enrolled in the study (Fig. 1).
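A minimal sketch of the statistical comparisons described in the statistical analysis section above (paired Wilcoxon test between timepoints, Fisher's exact test for proportions, Spearman correlation) follows; all values are invented placeholders rather than study data.

```python
# Minimal sketch of the statistical workflow described above; all numbers
# are invented placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
igg_t1 = rng.lognormal(3.0, 1.0, size=30)   # hypothetical anti-S1 IgG at T1
igg_t2 = rng.lognormal(4.0, 1.0, size=30)   # hypothetical anti-S1 IgG at T2

# Paired comparison of antibody levels between timepoints
# (Wilcoxon matched-pairs signed-ranks test for non-normal titers).
w_stat, w_p = stats.wilcoxon(igg_t1, igg_t2)

# Seropositivity proportions compared with Fisher's exact test
# (rows: group, columns: seropositive / seronegative).
table = [[21, 9], [27, 3]]
odds, f_p = stats.fisher_exact(table)

# Spearman correlation between two continuous immunologic readouts.
rho, s_p = stats.spearmanr(igg_t2, rng.lognormal(3.5, 1.0, size=30))

print(f"Wilcoxon p={w_p:.3f}, Fisher p={f_p:.3f}, Spearman rho={rho:.2f}")
```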
The vaccination distribution was as follows: 190 patients (89.62 %) received ChAdOx1 nCoV-19, 20 (9.43 %) CoronaVac and two (0.94 %) BBIBP-CorV. One hundred and thirty-four (63.20 %) patients were undergoing HD, four (1.88 %) were undergoing CAPD, seven (3.30 %) were KT recipients, twelve (5.66 %) were nondialysis patients with CKD, and 55 (25.94 %) were controls. The median duration of HD was 3.04 years (IQR 1.42-5.29 years). Almost all patients and the control group received the ChAdOx1 nCoV-19 vaccine, which was the main vaccine scheme adopted in our country at the time of the study; the baseline characteristics of the population are detailed in Table 1. The KT recipient group had an average age of 50.86 ± 11.11 years; 42.86 % were women; and the median time since transplantation was 9.83 years (IQR 5.08-20.5). The maintenance immunosuppressant regimens included calcineurin inhibitors (87 %), corticosteroids (45.4 %), antimetabolites (82.4 %) and mTOR inhibitors (10.4 %). The antimetabolite treatments used included mycophenolate mofetil (85.2 %), mycophenolic acid (11.5 %) and azathioprine (3.3 %). The mean age in the HD group was 57.34 ± 14.84 years, and 45.52 % were women. Subjects in the control group were aged 46.69 ± 17.65 years, and 60 % were women. Only four patients were treated with peritoneal dialysis, with a mean age of 58.00 ± 11.66 years. None of the patients had a prior or current diagnosis of COVID-19, and all tested negative for anti-SARS-CoV-2 NCP IgG. Diabetes was the most common cause of end-stage renal disease (ESRD) (Fig. 2). Anti-SARS-CoV-2 antibody response Patients on HD and nondialysis patients with CKD exhibited antibody responses that did not differ significantly from those of the control group. In the CKD group, the median antibody titer was 3.20 (Fig. 3). The antibody levels in the CAPD group at T2 were significantly lower than those in the control and HD groups (p = 0.01, CAPD vs control; p = 0.016, CAPD vs HD). A positive antibody level was detected in only one KT recipient at T2. Vaccine response after the second dose was evaluated for 151 patients by vaccine type. For the ChAdOx1 nCoV-19 vaccine, the response rate was 70.59 % in the control group. The CKD and dialysis groups had similar response rates of 60 % and 59.62 %, respectively. The KT group revealed a weak response of 33.33 % (Fig. 4). The CAPD group also showed a poor immunological response, with none being seropositive at T2. The NA and IFN-γ seropositivity rates followed a pattern similar to the anti-SARS-CoV-2 antibodies, with the lowest response rates in the KT and CAPD groups, and the level of immunity and response rate in the inactivated vaccine groups were satisfactory in the CKD, HD, and KT groups compared with controls (Table 2). NA showed a good correlation with the levels of anti-spike IgG antibodies at T1 and T2 (r = 0.876 at T1, r = 0.819 at T2, p < 0.001) (Fig. 5) (Tables 2 and 4). Breakthrough COVID-19 occurred in 14 participants during follow-up; of these, 13 (92.85 %) had received only one dose of a vaccine, with a median interval of 52 [IQR 44-61] days after the first vaccination. One patient developed COVID-19 after completing the second dose, on day 64. Infection in two controls resolved uneventfully. Overall, 85.71 % of cases were in the HD group. Factors associated with SARS-CoV-2 infection were male sex and blood group (p = 0.005 and p = 0.039, respectively) (Supplementary Table 1). Only one patient received an inactivated vaccine.
Among the patients diagnosed with COVID-19 during follow-up, the median anti-spike IgG, NA and IFN-γ levels significantly increased at one and three months after diagnosis, and natural immunity was robust and significantly higher than vaccine-induced immunity for as long as three months (Table 6). Vaccine type and immune response Of the 22 patients receiving inactivated vaccines, seven, one and 14 were in the HD, CKD and control groups, respectively. Antibodies were detected at a positive level (>35 BAU/mL) at T1 and increased progressively to a median of 217 BAU/mL at T2 among patients on HD. NA levels were detected at low titers at T2 in both the CKD and HD groups. All patients in the control group responded to the inactivated vaccine with an antibody titer above the positivity threshold. Adverse events Among vaccine recipients, mild-to-moderate pain at the injection site was the most commonly reported local reaction, which resolved within 1-2 days. Fever was the second most common symptom. The local reactions did not increase after the second dose. Fever occurred more frequently in the control group (p = 0.025), and no serious AEs were recorded (Table 7). Discussion Patients with CKD, especially those with ESRD undergoing dialysis, are at very high risk of death following COVID-19 [14,15]. Evidence suggests that patients with CKD may have a less robust antibody response after vaccination than healthy controls [16][17][18][19][20]. Our study comprised a diverse group of patients with CKD receiving different therapies. Our major finding was that patients with CKD, including those on maintenance HD, developed a substantial humoral response following the two vaccine doses (inactivated and ChAdOx1 nCoV-19 vaccines). Humoral seroconversion responses were maintained for as long as 12 weeks after completing the second dose, and the responses were equivalent to those of healthy individuals. However, KT recipients developed weaker humoral immune responses than the other groups. Immunosuppression may account for this weak anti-SARS-CoV-2 antibody response. The immune response to inactivated whole-virus SARS-CoV-2 vaccine among patients on HD was demonstrated to be satisfactory; however, fewer patients achieved humoral immune responses compared with healthy individuals [21]. In our study, >50 % of all patients except recipients of KT experienced seroconversion after receiving the second dose of inactivated vaccines. Related studies have reported variable responses to COVID-19 vaccines among patients with CKD, with most studies reporting on mRNA vaccines [22][23][24][25][26]. However, the durability of this immune response and the extent to which it translates into protective immunity remain unclear. A systematic review of 18 studies found that the antibody response to full vaccination with two doses of COVID-19 mRNA vaccines among patients undergoing HD, CAPD and KT was lower than that in the healthy population [27]. In phase 3 trials, BNT162b2, mRNA-1273 and ChAdOx1 nCoV-19 prevented COVID-19 in 95 %, 94.1 % and 70.4 % of participants, respectively [28][29][30], suggesting that the mRNA vaccines might induce protective immunity more reliably than ChAdOx1 nCoV-19. In addition, both mRNA vaccines and viral-vector vaccines induce balanced humoral and T cell immunity [31]. Our study measured cellular immunity, using IFN-γ levels, to better explore the immunogenicity in these specific populations.
We found a significant correlation between IFN-γ, SARS-CoV-2-specific antibodies and NA. Cytotoxic CD8+ T cells help accelerate the clearance of many respiratory viruses [32] and are essential in reducing the risk of SARS-CoV-2 infection. Here, we demonstrated a good T cell response among patients with CKD and those on HD, occurring as early as after the first dose of the ChAdOx1 nCoV-19 vaccine. The level of cellular immunity in this study correlated well with anti-SARS-CoV-2 antibody and NA levels, as in related studies [33]. Good cellular immunity thus accompanied a good humoral immune response. The antibody responses and NA levels did not differ significantly between the two vaccine groups, except in the control group at T1. After the second dose, the level of immunity was similar (Supplementary Tables 3-7). The sample size in the inactivated vaccine group was small, and the dosing interval for the inactivated vaccine was only 3-4 weeks. This implied that antibody levels in the inactivated group declined more rapidly than in the other groups, and this vaccine should not be recommended for patients with CKD and those on HD. The inappropriately high level of anti-spike IgG in the control group at T1 after the inactivated vaccine might have been caused by natural infection, since anti-nucleocapsid testing was performed only before participant recruitment. New cases of COVID-19 were detected after the first doses of vaccination; most of these patients were in the HD group (85.71 %). Our study suggested the need for a more rapid vaccination schedule among patients with CKD and those on dialysis. The results also implied that patients on HD should not be considered for a delayed second vaccination dose. To prevent new cases of COVID-19, the second dose should be scheduled as early as four to eight weeks after the first dose. Most people with symptomatic SARS-CoV-2 infection undergo seroconversion and produce a detectable, specific antibody response in the acute phase (Table 6). However, they should be re-vaccinated, because specific IgM rises in the acute phase, while IgG peaks later but declines after three to four months [34,35]. Most studies have not reported an association between antibody response and other factors, such as age or sex. Our findings showed significant associations for blood group O, which constitutes a novel finding. Related studies have revealed that blood group O is associated with less viral infection and lower illness severity [36][37][38]. Blood group A was found to be associated with an increased risk of infection and mortality but a decreased risk of intubation and death [39]. The molecular mechanisms by which ABO polymorphism impacts the risk of SARS-CoV-2 infection might involve ABO antibodies inhibiting the interaction between the angiotensin-converting enzyme-2 receptor and the virus, related to the presence of the anti-A antibody [40] or anti-A isohemagglutinin titers [41]. Further studies are needed to confirm these findings. This study showed the advantage of the ChAdOx1 nCoV-19 vaccine over inactivated vaccines among patients with CKD. We found no difference in serious AEs between the two vaccine formulations, except for fever and numbness, which resolved in a few days. Our study exhibits several strengths, including a comprehensive overview of the immunogenicity of both humoral and cellular responses to COVID-19 vaccines in a broad sample of patients with CKD. The HD and CKD cohorts were sufficiently large relative to the control group to allow us to identify differences.
The results presented here reflect a long follow-up of three months after vaccination, in contrast to less than four weeks in other studies. This may have implications for treatment and policy, because the ChAdOx1 nCoV-19 vaccine remains one of the main COVID-19 vaccines used in many countries. Nevertheless, our findings were limited by the small sample size and the unequal distribution of the CKD population. The number of patients other than those undergoing HD was insufficient to draw meaningful conclusions concerning the other subgroups. The loss to follow-up rate was also high in the control group. Conclusion Immunity among patients on HD and those with CKD after completing two doses of the candidate vaccines was strong, although the responses among recipients of KT and patients on CAPD were below acceptable levels, reinforcing the idea that this population should be vaccinated as soon as possible and receive a booster dose with the same or a different vaccine platform, such as an mRNA vaccine. A timely second dose of the COVID-19 vaccine seems necessary to ensure protection of patients with kidney disease from SARS-CoV-2. Blood group O and vaccine type were associated with a good immune response. Ethics approval This study protocol was reviewed and approved by the Vajira Institutional Review Board, Faculty of Medicine, Vajira Hospital, Navamindradhiraj University, approval number 94/2564. Consent to participate Written informed consent was obtained from all participants, and the study was performed in accordance with the principles of the Declaration of Helsinki and Good Clinical Practice. Data availability The data supporting the findings of this study are available from the corresponding author and are openly available in "figshare" at http://doi.org/10.6084/m9.figshare.19552222. Declaration of Competing Interest The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Thananda Trakarnvanich reports financial support provided by Navamindradhiraj University. Thananda Trakarnvanich reports a relationship with Navamindradhiraj University that includes funding grants and non-financial support. No other conflicts are declared.
2022-09-28T13:14:52.393Z
2022-09-01T00:00:00.000
{ "year": 2022, "sha1": "9e784b77cc5f4dc300f2bfee241f4e289e615a1a", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1016/j.vaccine.2022.09.067", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "2b4e183137794acf408865b12d8e54c3edbdc412", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
90535226
pes2o/s2orc
v3-fos-license
Metabolomic estimation of the diagnosis of hepatocellular carcinoma based on ultrahigh performance liquid chromatography coupled with time-of-flight mass spectrometry Metabolomics has been shown to be an effective tool for biomarker screening, pathway characterization and disease diagnosis. The metabolic characteristics of hepatocellular carcinoma (HCC) may enable the discovery of novel biomarkers for its diagnosis. In this work, metabolomics was used to investigate the metabolic alterations of HCC patients. Plasma samples from HCC patients and age-matched healthy controls were investigated using high-resolution ultrahigh performance liquid chromatography-mass spectrometry, and metabolic differences were analyzed using pattern recognition methods. 23 distinguishable metabolites were identified. The altered metabolic pathways were associated with arginine and proline metabolism; glycine, serine and threonine metabolism; steroid hormone biosynthesis; starch and sucrose metabolism, etc. To demonstrate the utility of plasma biomarkers for the diagnosis of HCC, five metabolites comprising deoxycholic acid 3-glucuronide, 6-hydroxymelatonin glucuronide, 4-methoxycinnamic acid, 11β-hydroxyprogesterone and 4-hydroxyretinoic acid were selected as candidate biomarkers. These metabolites, which contributed to the combined model, significantly increased the diagnostic performance for HCC. The approach proved to be a powerful tool in the discovery of new biomarkers for disease detection and suggests that panels of metabolites may be valuable for translating our findings into clinically useful diagnostic tests. Introduction Hepatocellular carcinoma (HCC) is one of the most common malignant tumors in the world and has caused a huge economic and social burden. 1 Few reliable biomarkers are available in clinical diagnosis to date. The early screening of HCC is an effective strategy to decrease its high mortality. Tissue-based histopathological assays and blood-based biochemical assays (α-fetoprotein, AFP) are the major screening methods for HCC. 2 Recent studies show that AFP lacks sufficient sensitivity, 3 and the quality of tissue-based histopathological images can be both equipment- and user-dependent. 4 Thus, accurate and early diagnosis of HCC remains a great challenge to date. Therefore, a novel screening strategy is still urgently needed for the early discrimination of patients. Recently, metabolomics has proved to be a highly successful method for detecting metabolic changes in different pathophysiological states. 5 Biomarkers can improve disease diagnosis and treatment. There is a close relationship between metabolism and cancer, and metabolomics is a promising avenue for biomarker discovery. 6 Recent reports suggest that metabolite biomarkers have been identified and provided by metabolomic studies. 7 Metabolomics is an approach that allows the analysis of the entire set of metabolites in a biological system, and it can be used to identify biomarkers for specific pathological states. [8][9][10][11] Its scope has been expanding from primary research use toward a new diagnostic role in the clinic. 12 Some studies of biomarker discovery in HCC have been reported. Alterations in the normal metabolome may be indicative of disease. [13][14][15][16] The identification of potential metabolite biomarkers is very important because it provides diagnostic markers and can help to develop new therapies. HCC diagnosis is a challenging task for the clinician.
In this study, we consider that metabolomics has shown great potential for discovering biomarkers and exploring the metabolic mechanisms of diseases. We aimed to investigate the combined use of metabolites for the detection of HCC patients. Metabolites were measured after analysis of plasma samples with ultrahigh performance liquid chromatography coupled to mass spectrometry (UPLC-MS). Multivariate statistical tests and receiver operating characteristic analysis were performed to evaluate the diagnostic performance of the metabolites. In addition, we elucidate the major pathways, contributing to a deeper understanding of the pathological mechanism of HCC. Chemicals and reagents Methanol (HPLC grade) and acetonitrile (HPLC grade) were purchased from Merck (Darmstadt, Germany). Ultrapure water was provided by a Milli-Q water purification system (Millipore, Billerica, USA). Formic acid and leucine enkephalin were purchased from Sigma-Aldrich (St. Louis, MO, USA). Study subjects The patients with HCC and healthy control subjects were recruited from the First Affiliated Hospital, Heilongjiang University of Chinese Medicine, from April 2015 to May 2016, and all patients signed informed consent before the study began. HCC patients were diagnosed according to histological evidence, and the AFP levels were significantly higher in HCC than in healthy control subjects (AFP level > 400 ng mL−1). Detailed clinical information, such as age, gender, and some important biochemical indexes for HCC patients and control subjects, is presented in ESI Table 1.† A total of 70 patients together with 65 normal control cases were recruited. The study was approved by the Review Board of Heilongjiang University of Chinese Medicine (HUCM-2016-0324) and complied with the provisions of the Good Clinical Practice Guidelines and the Declaration of Helsinki. Sample collection and preparation Whole blood samples were collected in the morning before breakfast from all HCC patients and the control group. The plasma was separated by centrifugation at 5000 rpm for 10 min and then immediately stored at −80 °C until further analysis. To ensure the stability and repeatability of UPLC/MS, blank samples and quality control samples were used in this study. All the plasma samples were thawed at 4 °C, and a volume of 400 μL of cold methanol was added to 100 μL of plasma for deproteinization, followed by centrifugation at 4000 rpm for 10 min. Next, the supernatants were recovered, evaporated using a vacuum rotary dryer, re-suspended in 100 μL acetonitrile/water (1 : 3, v/v), vortex-mixed for 10 min and centrifuged at 4000 rpm for 10 min, and the supernatant was held for UPLC/MS analysis. To assess instrument stability, a quality control sample was used to ensure data quality throughout the sample sequence. The pooled sample, prepared by mixing aliquots of all plasma samples, was injected once after every 10 sample injections during the data acquisition process. Full-scan MS was performed in positive ion mode and negative ion mode using a time-of-flight mass spectrometer (Waters, Milford, USA) coupled to the UPLC system. The mass scanning range was 50-1500 m/z in the full-scan mode.
The optimal conditions of the MS analysis were as follows: in ESI+ mode, a source temperature of 120 °C, capillary voltage of 3.0 kV, sampling cone voltage of 30 V, desolvation temperature of 400 °C and desolvation gas flow of 600 L h−1; in ESI− mode, a source temperature of 110 °C, capillary voltage of 2.5 kV, sampling cone voltage of 20 V, desolvation temperature of 350 °C and desolvation gas flow of 550 L h−1. Nitrogen was used as the collision gas at a collision cell pressure of 2.0 × 10−5 torr. The flow rates of the cone and desolvation gas were set at 60 L h−1 and 400 L h−1, respectively. Data preprocessing and pattern recognition analyses LC-MS data were analyzed to identify potential discriminant biomarkers. Smoothing, denoising, peak filtering, and alignment of the acquired data were conducted using the MarkerLynx 4.1 application manager (Waters, Manchester, UK). The processed data were imported into MetaboAnalyst 3.0 (http://www.metaboanalyst.ca) for multivariate pattern recognition analysis. PCA was applied to classify the plasma samples and to detect outliers and the distributions of the different groups, and OPLS-DA was carried out to obtain an overview of the complete data set after mean-centering and scaling. The VIP plot from OPLS-DA was used to rank the differential metabolites according to their importance to the classification, and these metabolites were selected for metabolic pathway analysis. The differential metabolites were identified based on MS/MS fragment comparison with standard compounds, or via searches for the candidate compounds in databases including HMDB and METLIN. Metabolic pathway and network analysis Pathway topology analysis was performed in MetaboAnalyst (http://www.metaboanalyst.ca) to identify the most relevant pathways involved in HCC. To visualize the metabolic pathways, correlation network analysis using IPA software was performed. Discrimination performance of potential biomarkers Metabolomics was used to explore whether metabolomic signatures had the potential to discriminate between the HCC group and the control group. To obtain a final diagnostic score, receiver operating characteristic (ROC) curves were generated using the rocplot function, which allows characterization of diagnostic accuracy and was used to evaluate predictive accuracy. ROC curves were also used to evaluate the accuracy of the combined-signature model. Statistical analysis Metabolic pathway topology analysis was performed using the online software MetaboAnalyst 3.0 (http://www.metaboanalyst.ca). The visualization models included PCA and PLS-DA. The areas under the curve (AUC) of the ROC curves were used to determine the diagnostic effectiveness of important metabolites using MetaboAnalyst 3.0. Two-tailed Welch's t-tests were performed using SPSS software (version 19.0; SPSS, Inc., Chicago, IL), with p < 0.05 deemed significant. Baseline clinical characteristics Characteristics of the study population are shown in ESI Table 1.† The mean ages, sex, AFP value, HBsAg, ALP, ALT, AST, D-BIL, and T-BIL were not significantly different between HCC cases and age-matched healthy controls. The AFP level indicates a fairly advanced stage of this disease. Overall metabolomics analysis The first aim was to analyze the metabolome differences between HCC and healthy cases.
To do this, we applied global UPLC-MS metabolomics focusing on the profiles of low-molecular-weight metabolites. Representative base peak intensity chromatograms of plasma samples from the HCC cases and controls are given in Fig. S1.† After alignment and normalization of the data sets of the serum spectra, multivariate statistical analyses were carried out to support metabolite identification. PCA and 3-D PCA showed clearly distinct distributions in Fig. 1A and B, respectively, which implied an abnormal metabolic pattern in HCC patients. Orthogonal projection to latent structures with discriminant analysis (OPLS-DA) was subsequently used to determine the differential metabolites responsible for the metabolic differences. The OPLS-DA model, after excluding the outliers observed in the PCA, revealed an obvious separation between groups (Fig. 1C and D). Differential metabolites analysis We employed an OPLS-DA statistical approach, termed the VIP plot, to select metabolites that contributed most to the group separation. The VIP plot of controls vs. HCC patients is shown in Fig. 1E; it is a scatter plot that combines covariance and correlation. Higher VIP values indicate metabolites that are more important to the classification. A t-test was performed, and variables with significant differences between HCC cases and control individuals (P < 0.05) were kept. As a result, a total of 23 discriminant variables (listed in ESI Table 2†), identified as interesting candidates, were consistently altered in HCC. Elemental composition was calculated using the MassLynx 4.1 tool and finally confirmed by comparison with standard samples. These metabolite changes are further visualized in the heatmap, which indicated that the two groups could be separated based on these metabolites (Fig. 2A). Metabolic pathways analysis All of the differential metabolites were subjected to enrichment analysis utilizing MetaboAnalyst 3.0. The biological pathway analysis showed that 4 metabolic pathways, namely arginine and proline metabolism; glycine, serine and threonine metabolism; steroid hormone biosynthesis; and starch and sucrose metabolism, were the most influenced metabolic pathways, set at a pathway impact > 0.01 (Fig. 2B and ESI Table 3†). To visualize the biological connectivity of the significantly changed metabolites, the network-generating algorithm of Ingenuity Pathway Analysis was used to maximize the interconnectedness of molecules in the HCC-related metabolic network (Fig. 3A). Functional pathway category analysis of the differential metabolites related to HCC was carried out. Consequently, cellular compromise, lipid metabolism and small-molecule biochemistry pathways were considered closely related to the development of HCC. This revealed that the HCC patients possessed a highly distinctive metabolic phenotype characterized by these differential metabolites (Fig. 3B and ESI Table 4†). Potential diagnostics estimation We performed ROC analysis to further characterize the predictive value of these individual metabolites independently. We found 7 metabolites, including deoxycholic acid 3-glucuronide, 6-hydroxymelatonin glucuronide, 4-methoxycinnamic acid, 11β-hydroxyprogesterone, 4-hydroxyretinoic acid, N-acetylneuraminic acid and N-acetyltryptophan, with an area under the curve (AUC) < 0.94 (ESI Table 5†). To improve the prediction of HCC, a combination of more than one discriminatory metabolite was developed via logistic regression analysis (Fig. 4A).
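A rough sketch of how such a combined-metabolite classifier can be built and scored follows, assuming a logistic regression over the selected metabolite intensities as described above; the feature matrix is simulated and stands in for the measured plasma metabolites.

```python
# Rough sketch of combining several metabolite intensities into one diagnostic
# score via logistic regression and scoring it with ROC AUC, as described above.
# The feature matrix is simulated; it stands in for the five plasma metabolites.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_hcc, n_ctrl = 70, 65
# Simulated log-intensities for 5 metabolites, shifted upward in the HCC group.
X = np.vstack([rng.normal(0.8, 1.0, (n_hcc, 5)),
               rng.normal(0.0, 1.0, (n_ctrl, 5))])
y = np.array([1] * n_hcc + [0] * n_ctrl)

model = LogisticRegression()
# Cross-validated probabilities avoid scoring the model on its training data.
scores = cross_val_predict(model, X, y, cv=5, method="predict_proba")[:, 1]
print(f"Combined-panel AUC: {roc_auc_score(y, scores):.3f}")
```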
We found that the combination of metabolites was a better discriminator (AUC > 95%) than each metabolite individually (AUC < 94%), which reinforced the improved capacity of biomarker patterns to distinguish different groups. Thus, the distinctive signature with the 5 metabolites achieved the highest AUC value and significantly increased the diagnostic performance for HCC, achieving an AUC value of 0.996. To validate the importance of these metabolites from the ROC analysis, we further screened out a group of metabolites as potential biomarkers for accurate diagnostics. Notably, for the feature ranking method, the top 5 metabolites, including deoxycholic acid 3-glucuronide, 6-hydroxymelatonin glucuronide, 4-methoxycinnamic acid, 11β-hydroxyprogesterone, and 4-hydroxyretinoic acid, contributed to the combined model (Fig. 4B). Thus, these distinctive signatures greatly improved on the diagnostic performance of a single metabolite in HCC. Discussion Many HCC patients are diagnosed at an advanced stage. The early detection of HCC has been a great challenge until now, and new effective methods for the discovery of novel biomarkers for HCC are urgently needed. Reliable biomarkers for the diagnosis of HCC are needed to reduce mortality and therapeutic expenditure. Metabolites can be changed in cancer, because the metabolites produced are indicators of what is happening in the metabolism under disease conditions. 17,18 Metabolomics has been introduced as a way of finding helpful diagnostic biomarkers for the clinician. Recently, some studies have attempted to use metabolomics to discover biomarkers that might be applied to the detection of cancer. 19 Identification of the dysfunctional metabolic pathways of cancer via metabolomics can be used to discover biomarkers. 20 Numerous studies have reported that dysregulated metabolism is associated with survival in cancer patients. 21 A growing number of studies have used plasma-based metabolomics as a method of discovering biomarkers for diagnosing HCC. 22 In the present study, a non-targeted UPLC/MS plasma metabolomics method was utilized to explore the metabolic characteristics related to HCC patients and to screen meaningful predictors. A comprehensive workflow was employed to determine potential biomarkers, including the visualization of samples and metabolites, multivariate screening for the classification of disease status and ROC validation. We compared the plasma metabolic profiles of 70 patients with HCC to identify its metabolic signatures. PCA showed clearly different distributions that were caused by different levels of metabolites. We employed the VIP plot of the OPLS-DA statistical approach to select metabolites that contributed most to the sharp separation. A total of 23 metabolites were defined as biomarker candidates. We identified 12 metabolites in ESI+ mode and 11 metabolites in ESI− mode related to HCC. In this study, we provided evidence that there is a strong relationship between these metabolites and HCC. Pathway analysis revealed that arginine and proline metabolism; glycine, serine and threonine metabolism; steroid hormone biosynthesis; starch and sucrose metabolism and others were the most relevantly disturbed pathways, giving a better understanding of the potential mechanism of HCC. In recent years, numerous findings have been published showing that using the marker candidates could potentially improve the diagnosis of patients.
[23][24][25][26] Establishing a diagnostic model to predict HCC was difficult due to the distinct metabolic profile of HCC, consisting of the 4 altered metabolic pathways and 23 corresponding metabolites. Subsequently, we performed ROC analysis to characterize the predictive value of these metabolites for discriminating HCC. Evaluation of the biomarkers by ROC analysis showed that 13 metabolites had high AUC values, above 0.90. Their AUC values indicated a satisfactory performance in the validation data sets. A combination of more than one discriminatory metabolite was developed via logistic regression analysis to construct an effective diagnostic model for HCC. Interestingly, for the feature ranking method, we observed that the top 5 metabolites, including deoxycholic acid 3-glucuronide, 6-hydroxymelatonin glucuronide, 4-methoxycinnamic acid, 11β-hydroxyprogesterone and 4-hydroxyretinoic acid, contributed to the combined model and achieved the highest AUC value. These metabolites significantly increased the diagnostic performance for HCC. The potential biomarker pattern may have the advantages of improving the diagnostic performance as well as simplifying the practical application. It revealed the potential pathogenesis of HCC and also provided a feasible diagnostic tool for HCC populations through the detection of plasma metabolites. The development of biomarkers to diagnose HCC is meaningful for both patient care and research. In this study, we investigated whether alterations in plasma metabolites can be used for the detection of HCC, based on the UPLC/MS technique supported by advanced chemometric analysis. The aim of this study was to find metabolite biomarkers that would allow the discovery and diagnosis of HCC. We found that the combination of metabolites was a better discriminator than each metabolite individually. In this study, plasma metabonomics analysis for identifying potential biomarkers to diagnose HCC was successfully demonstrated, with the advantages of being reliable, simple, and low-cost. It demonstrates the efficacy and potential of the plasma metabolomics strategy for biomarker discovery, providing novel insights for disease studies. In conclusion, our research highlights that high-throughput metabolomics is an ideal methodology for rapidly identifying the global metabolic alterations associated with HCC, alterations that not only enhance our understanding of the metabolic mechanism but can also improve HCC diagnostics. Conclusions In this study, a non-targeted UPLC-TOFMS metabolomics method, in conjunction with pattern recognition analyses based on 65 plasma samples from healthy controls and 70 plasma samples from HCC patients, was established to explore the metabolic characteristics of HCC. We applied a plasma metabolomics approach based on UPLC-MS combined with pattern recognition for identifying potential novel diagnostic biomarkers of HCC. Interestingly, we observed that 23 potential biomarkers in the HCC subjects were quite different from the control subjects. Additionally, significant metabolic pathway alterations were observed in HCC, including arginine and proline metabolism; glycine, serine and threonine metabolism; steroid hormone biosynthesis; and starch and sucrose metabolism. Subsequently, the diagnostic potential of these biomarkers was carefully evaluated based on the ROC curve and the AUC value.
The combined use of the top five metabolites has promising clinical potential to improve the diagnostic performance for HCC. More studies are still required for further large-scale validation. Overall, however, our study indicates that plasma metabolomics is a powerful resource for exploring disease mechanisms and discovering potential biomarkers for diagnosis in the clinic. Conflicts of interest The authors declare no competing financial interests.
2019-04-02T13:13:49.704Z
2018-02-28T00:00:00.000
{ "year": 2018, "sha1": "6445bb10461c2e4764f928fd2b370181d9f16e56", "oa_license": "CCBY", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2018/ra/c7ra13616a", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "6317bf9a5fae0e5ca16fdeea3927249c74e031c8", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
5099293
pes2o/s2orc
v3-fos-license
A Delayed Choice Quantum Eraser This paper reports a "delayed choice quantum eraser" experiment proposed by Scully and Drühl in 1982. The experimental results demonstrated the possibility of simultaneously observing both particle-like and wave-like behavior of a quantum via quantum entanglement. The which-path or both-path information of a quantum can be erased or marked by its entangled twin even after the registration of the quantum. Complementarity, perhaps the most basic principle of quantum mechanics, distinguishes the world of quantum phenomena from the realm of classical physics. Quantum mechanically, one can never expect to measure both the precise position and momentum of a quantum at the same time. It is prohibited. We say that the quantum observables "position" and "momentum" are "complementary" because precise knowledge of the position (momentum) implies that all possible outcomes of measuring the momentum (position) are equally probable. In 1927, Niels Bohr illustrated complementarity with the "wave-like" and "particle-like" attributes of a quantum mechanical object [1]. Since then, complementarity has often been superficially identified with the "wave-particle duality of matter". Over the years the two-slit interference experiment has been emphasized as a good example of the enforcement of complementarity. Feynman, discussing the two-slit experiment, noted that this wave-particle dual behavior contains the basic mystery of quantum mechanics [2]. The actual mechanisms that enforce complementarity vary from one experimental situation to another. In the two-slit experiment, the common "wisdom" is that the position-momentum uncertainty relation δx δp ≥ ℏ/2 makes it impossible to determine which slit the photon (or electron) passes through without at the same time disturbing the photon (or electron) enough to destroy the interference pattern. However, it has been proven [3] that under certain circumstances this common interpretation may not be true. In 1982, Scully and Drühl found a way around this position-momentum uncertainty obstacle and proposed a quantum eraser to obtain which-path or particle-like information without scattering or otherwise introducing large uncontrolled phase factors to disturb the interference. To be sure, the interference pattern disappears when which-path information is obtained. But it reappears when we erase (quantum erasure) the which-path information [3,4]. Since 1982, quantum eraser behavior has been reported in several experiments [5]; however, the original scheme has not been fully demonstrated. One proposed quantum eraser experiment very close to the 1982 proposal is illustrated in Fig.1. Two atoms labeled A and B are excited by a laser pulse. A pair of entangled photons, photon 1 and photon 2, is then emitted from either atom A or atom B by atomic cascade decay. Photon 1, propagating to the right, is registered by a photon counting detector D0, which can be scanned by a step motor along its x-axis for the observation of interference fringes. Photon 2, propagating to the left, is injected into a beamsplitter. If the pair is generated in atom A, photon 2 will follow the A path, meeting BSA with 50% chance of being reflected or transmitted. If the pair is generated in atom B, photon 2 will follow the B path, meeting BSB with 50% chance of being reflected or transmitted. Given the 50% chance of being transmitted by either BSA or BSB, photon 2 is detected by either detector D3 or D4.
The registration of D3 or D4 provides which-path information (path A or path B) of photon 2 and in turn provides which-path information of photon 1 because of the entangled nature of the two-photon state of atomic cascade decay. Given a reflection at either BSA or BSB, photon 2 will continue to follow its A path or B path to meet another 50-50 beamsplitter, BS, and then be detected by either detector D1 or D2, which are placed at the output ports of the beamsplitter BS. The triggering of detectors D1 or D2 erases the which-path information, so that either the absence or the restoration of the interference can be arranged via an appropriately contrived photon correlation study. The experiment is designed in such a way that L0, the optical distance between atoms A, B and detector D0, is much shorter than Li, which is the optical distance between atoms A, B and detectors D1, D2, D3, and D4, respectively. Thus D0 will be triggered much earlier, by photon 1. After the registration of photon 1, we look at these "delayed" detection events of D1, D2, D3, and D4, which have constant time delays, τi ≃ (Li − L0)/c, relative to the triggering time of D0. It is easy to see that these "joint detection" events must have resulted from the same photon pair. It was predicted that the "joint detection" counting rates R01 (the joint detection rate between D0 and D1) and R02 will show an interference pattern when detector D0 is scanned along its x-axis. This reflects the wave property (both-path) of photon 1. However, no interference will be observed in the "joint detection" counting rates R03 and R04 when detector D0 is scanned along its x-axis. This is clearly expected because we now have indicated the particle property (which-path) of photon 1. It is important to emphasize that all four "joint detection" rates R01, R02, R03, and R04 are recorded at the same time during one scan of D0 along its x-axis. That is, in the present experiment we "see" both wave-like (interference) and particle-like (which-path) behavior with the same apparatus. We wish to report a realization of the above quantum eraser experiment. The schematic diagram of the experimental setup is shown in Fig.2. Instead of atomic cascade decay, spontaneous parametric down conversion (SPDC) is used to prepare the entangled two-photon state. SPDC is a spontaneous nonlinear optical process in which a pair of signal-idler photons is generated when a pump laser beam is incident on a nonlinear optical crystal [6]. In this experiment, the 351.1 nm argon ion pump laser beam is divided by a double-slit and incident onto a type-II phase matching [7] nonlinear optical crystal BBO (β-BaB2O4) at two regions, A and B. A pair of 702.2 nm orthogonally polarized signal-idler photons is generated from either the A or the B region. The width of the SPDC region is about 0.3 mm, and the distance between the centers of A and B is about 0.7 mm. A Glan-Thompson prism is used to split the orthogonally polarized signal and idler. The signal photon (photon 1, either from A or B) passes a lens LS to meet detector D0, which is placed on the Fourier transform plane (the focal plane for a collimated light beam) of the lens. The lens LS is used to achieve the "far field" condition while still keeping a short distance between the slit and the detector D0. Detector D0 can be scanned along its x-axis by a step motor. The idler photon (photon 2) is sent to an interferometer with equal-path optical arms.
The interferometer includes a prism PS, two 50-50 beamsplitters BSA and BSB, two reflecting mirrors MA and MB, and a 50-50 beamsplitter BS. Detectors D1 and D2 are placed at the two output ports of BS, respectively, for erasing the which-path information. The triggering of detectors D3 and D4 provides which-path information of the idler (photon 2) and in turn provides which-path information of the signal (photon 1). The electronic output pulses of detectors D1, D2, D3, and D4 are sent to coincidence circuits together with the output pulse of detector D0, respectively, for the counting of the "joint detection" rates R01, R02, R03, and R04. In this experiment the optical delay (Li − L0) is chosen to be ≃ 2.5 m, where L0 is the optical distance between the output surface of the BBO and detector D0, and Li is the optical distance between the output surface of the BBO and detectors D1, D2, D3, and D4, respectively. This means that any information one can learn from photon 2 must be at least 8 ns later than what one has learned from the registration of photon 1. Compared to the 1 ns response time of the detectors, a 2.5 m delay is good enough for a "delayed erasure". Figs. 3, 4, and 5 report the experimental results, which are all consistent with the prediction. Figs. 3 and 4 show the "joint detection" rates R01 and R02 against the x coordinate of detector D0. It is clear that we have observed the standard Young's double-slit interference pattern. However, there is a π phase shift between the two interference fringes; the π phase shift is explained in the calculation below. Fig. 5 reports a typical R03 (R04), the "joint detection" counting rate between D0 and the "which-path" detector D3 (D4), against the x coordinate of detector D0. An absence of interference is clearly demonstrated. There is no significant difference between the curves of R03 and R04 except a small shift of the center. To explain the experimental results, a standard quantum mechanical calculation is presented in the following. The "joint detection" counting rate, R0j, of detector D0 and detector Dj, over the time interval T, is given by the Glauber formula [8], $R_{0j} = \frac{1}{T}\int_0^T\!\!\int_0^T dT_0\, dT_j\, \langle\Psi| E_0^{(-)} E_j^{(-)} E_j^{(+)} E_0^{(+)} |\Psi\rangle$, where $T_0$ is the detection time of D0, $T_j$ is the detection time of Dj ($j = 1, 2, 3, 4$) and $E^{(\pm)}_{0,j}$ are the positive- and negative-frequency components of the field at detectors D0 and Dj, respectively. $|\Psi\rangle$ is the entangled state of SPDC, $|\Psi\rangle = \sum_{s,i} C(k_s, k_i)\, a_s^{\dagger}(k_s)\, a_i^{\dagger}(k_i)\, |0\rangle$, where $C(k_s, k_i) = \delta(\omega_s + \omega_i - \omega_p)\,\delta(k_s + k_i - k_p)$ for the SPDC, in which $\omega_j$ and $k_j$ ($j = s, i, p$) are the frequency and wavevectors of the signal (s), idler (i), and pump (p), respectively; $\omega_p$ and $k_p$ can be considered constants (a single-mode laser line is used for the pump), and $a_s^{\dagger}$ and $a_i^{\dagger}$ are creation operators for the signal and idler photons, respectively. For the case of two scattering atoms, see ref. [3], and in the case of cascade radiation, see ref. [9]; $C(k_s, k_i)$ has a similar structure but without the momentum delta function. The delta functions in eq. (2) are the results of approximations for an infinite-size SPDC crystal and for an infinite interaction time. We introduce the two-dimensional function $\Psi(t_0, t_j) \equiv \langle 0 | E_j^{(+)} E_0^{(+)} | \Psi \rangle$, so that the rate in eq. (1) becomes $R_{0j} \propto \int dt_0\, dt_j\, |\Psi(t_0, t_j)|^2$; $\Psi(t_0, t_j)$ is the joint count probability amplitude ("wavefunction" for short), where $t_0 \equiv T_0 - L_0/c$, $t_j \equiv T_j - L_j/c$, $j = 1, 2, 3, 4$, and $L_0$ ($L_j$) is the optical distance between the output point on the BBO crystal and D0 (Dj).
It is straightforward to see that the four "wavefunctions" $\Psi(t_0, t_j)$, corresponding to the four different "joint detection" measurements, have the following different forms: $\Psi(t_0, t_1) = A(t_0, t_1^A) + A(t_0, t_1^B)$, $\Psi(t_0, t_2) = A(t_0, t_2^A) - A(t_0, t_2^B)$, $\Psi(t_0, t_3) = A(t_0, t_3^A)$, and $\Psi(t_0, t_4) = A(t_0, t_4^B)$, where, as in Fig.1, the upper index of $t$ (A or B) labels the scattering region (A or B) and the lower index of $t$ indicates the different detectors. The different sign between the two amplitudes of $\Psi(t_0, t_1)$ and $\Psi(t_0, t_2)$ is caused by the transmission-reflection unitary transformation of the beamsplitter BS; see Fig.1 and Fig.2. It is also straightforward to calculate each of the $A(t_i, t_j)$ [10]. To simplify the calculations, we consider the longitudinal integral only and write the two-photon state in terms of an integral over $k_e$ and $k_o$, $|\Psi\rangle = \int dk_e\, dk_o\, \Phi(\Delta_k L)\, a_e^{\dagger}(k_e)\, a_o^{\dagger}(k_o)\, |0\rangle$, where a type-II phase matching crystal with a finite length $L$ is assumed. $\Phi(\Delta_k L)$ is a sinc-like function, $\Phi(\Delta_k L) = (e^{i\Delta_k L} - 1)/(i\Delta_k L)$. Using eqs. (3) and (6) we find the amplitude as a spectral integral weighted by $f_{i,j}(\omega)$, the spectral transmission function of an assumed filter placed in front of each detector, which is assumed Gaussian to simplify the calculation. To complete the integral, we define $\omega_e = \Omega_e + \nu$ and $\omega_o = \Omega_o - \nu$, where $\Omega_e$ and $\Omega_o$ are the center frequencies of the SPDC, $\Omega_e + \Omega_o = \Omega_p$, and $\nu$ is a small tuning frequency, so that $\omega_e + \omega_o = \Omega_p$ still holds. Consequently, we can expand $k_e$ and $k_o$ around $K_e(\Omega_e)$ and $K_o(\Omega_o)$ to first order in $\nu$: $k_e \simeq K_e + \nu/u_e$ and $k_o \simeq K_o - \nu/u_o$, where $u_e$ and $u_o$ are recognized as the group velocities of the e-ray and o-ray at frequencies $\Omega_e$ and $\Omega_o$, respectively. Completing the integral, the biphoton wavepacket of type-II SPDC is thus $\Psi(t_1, t_2) \propto e^{-i(\Omega_1 t_1 + \Omega_2 t_2)}\, \Pi(t_1 - t_2)$, where we have dropped the e, o indices. The shape of $\Pi(t_1 - t_2)$ is determined by the bandwidth of the spectral filters and the parameter $DL$ of the SPDC crystal, where $D \equiv 1/u_o - 1/u_e$. If the filters are removed or have a large enough bandwidth, $\Pi(t_1 - t_2)$ is a rectangular pulse function. It is easy to find that the two amplitudes in $\Psi(t_0, t_1)$ and $\Psi(t_0, t_2)$ are indistinguishable (they overlap in both $t_0 - t_j$ and $t_0 + t_j$), respectively, so that interference is expected in both coincidence counting rates, R01 and R02, however with a π phase shift due to the different sign: $R_{01} \propto \cos^2(x\pi d/\lambda f)$ and $R_{02} \propto \sin^2(x\pi d/\lambda f)$. If we consider that "slits" A and B both have a finite width (they are not infinitely narrow), an integral is necessary to sum all possible amplitudes along slit A and slit B. We then have a standard interference-diffraction pattern for R01 and R02, $R_{01} \propto \mathrm{sinc}^2(x\pi a/\lambda f)\cos^2(x\pi d/\lambda f)$ and $R_{02} \propto \mathrm{sinc}^2(x\pi a/\lambda f)\sin^2(x\pi d/\lambda f)$, where $a$ is the width of slits A and B (equal width), $d$ is the distance between the centers of slits A and B, $\lambda = \lambda_s = \lambda_i$ is the wavelength of the signal and idler, and $f$ is the focal length of lens LS. We have also applied the "far field approximation" for the signal and equal optical distances of the interferometer for the idler. After considering the finite size of the detectors and the divergence of the pump beam in further integrals, the interference visibility is reduced to a level close to that observed. For the "joint detection" rates R03 and R04, the "wavefunction" in eq. (5) (which clearly provides "which-path" information) has only one amplitude, and no interference is expected.
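As a numerical illustration of the interference-diffraction pattern just quoted, the sketch below evaluates R01, R02 and the envelope-only which-path rate. The slit width, slit separation and wavelength are the values quoted in the text; the focal length of lens LS is an assumed placeholder.

```python
# Numerical sketch of the predicted joint-detection rates: R01 and R02 carry
# pi-shifted fringes under a common diffraction envelope, while R03/R04 show
# only the single-slit envelope. Parameter values for a, d, and lambda follow
# the text; f is illustrative, not the experimental value.
import numpy as np

lam = 702.2e-9      # signal wavelength (m)
d = 0.7e-3          # slit (region) separation (m)
a = 0.3e-3          # slit (region) width (m)
f = 0.5             # focal length of lens LS (m, assumed)

x = np.linspace(-3e-3, 3e-3, 1001)          # detector D0 position (m)
envelope = np.sinc(a * x / (lam * f)) ** 2  # np.sinc(u) = sin(pi u)/(pi u)

R01 = envelope * np.cos(np.pi * d * x / (lam * f)) ** 2
R02 = envelope * np.sin(np.pi * d * x / (lam * f)) ** 2   # pi-shifted fringes
R03 = envelope                                            # which-path: no fringes

print(R01[:3], R02[:3], R03[:3])
```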
In conclusion, we have realized a quantum eraser experiment of the type proposed in ref. [3]. The experimental results demonstrate the possibility of observing both particle-like and wave-like behavior of a light quantum via quantum mechanical entanglement. The which-path or both-path information of a quantum can be erased or marked by its entangled twin even after the registration of the quantum.
Model of creation of a productive agrocenosis of Echinacea. The results of many years of field research on the effect of the planting density of different Echinacea species on their productivity are discussed. It was found that when Echinacea purpurea crops were thickened, the collected raw material had a high percentage (more than 50%) of stems, which negatively affected its quality. An increase in the density of Echinacea pallida crops has less effect on the formation of generative shoots. Long-term research has revealed the patterns that determine the productivity of Echinacea spp. at different planting densities. This made it possible to calculate and recommend for production the plant density at which the optimal yield of the agrocenosis will be achieved: for Echinacea purpurea, 100-110 thousand plants/ha, and for Echinacea pallida, 120-140 thousand plants/ha.

Introduction

Representatives of the Echinacea genus (Echinacea Moench) are known in the world primarily as medicinal plants with pronounced immunostimulating properties [1]. This is due to their unique phytochemical characteristics, among which the main components are chicoric acid and its derivatives, polysaccharides, and alkylamides [1,2]. Owing to these, drugs with anti-inflammatory, adaptogenic, and immunocorrecting activity are produced from echinacea all over the world [1,2]. Today, during a pandemic, these properties of echinacea deserve special attention. It should not be forgotten that echinacea is also an excellent honey plant and an ornamental perennial [3,4]. Due to the increased demand for raw materials, echinacea has been successfully grown in America and Europe for over a century [2,3]. However, only three species are found in cultivation: Echinacea purpurea (L.) Moench, Echinacea pallida (Nutt.) Nutt., and Echinacea angustifolia DC. Echinacea purpurea is the most studied and the most widely used. Although there are a few pharmaceutical varieties in the world, it has mainly decorative value [3]. Echinacea angustifolia is a raw material for many drugs, but it is mostly harvested in the wild, in its places of natural growth in North America [4]. In our opinion, Echinacea pallida is the most interesting but underestimated species, both in terms of its chemical composition and its biological characteristics [2,4,5]. At the Poltava State Agrarian Academy, as a result of many years of purposeful work, varieties of Echinacea purpurea (Zirka Mykoly Vavylova) and Echinacea pallida (Krasunya of Prairie) were developed, along with technologies for growing and using echinacea for the needs of pharmacy, animal husbandry, beekeeping, and ornamental gardening [5]. One of the bottleneck issues of cultivation is the creation of productive plantations [6]. If we consider echinacea mainly as a medicinal plant, then it is impractical to grow it for more than two to three years. Therefore, it is necessary to create optimal placement schemes and to provide intensive care during the first year of life in order to get the maximum return in the future [7-12]. It is to these questions that our article is devoted.

Materials and methods

The research was carried out in the botanical garden of the Poltava National Pedagogical University named after V.G. Korolenko. During 2006-2015, field experiments were carried out, and the obtained data were processed and interpreted.
Investigations of placement schemes (plant densities) of 45 x 10, 45 x 20, 45 x 30, 70 x 10, 70 x 20, and 70 x 30 centimeters were laid out for three production cycles in a row, established from seedlings, on typical medium-humus chernozems of heavy texture. For this, the seedlings were first grown in cassettes until they had two to four true leaves. Owing to this, the plants did not get sick and took root 100%. The productivity of the aboveground mass was assessed during mass flowering, and the productivity of the root system was assessed after the end of the growing season. The counts were performed in the third year of vegetation. Correlation analysis and the construction of regression equations were performed using MS Excel.

Results and discussion

The results of the studies showed that the intensity of the formation of generative shoots in Echinacea purpurea depended on the feeding area. Compacted crops produced fewer shoots: 3.3-4.0 shoots were formed in the variants where the distance between the plants was 10 cm, and 4.6-7.3 shoots were formed when this interval was increased. There was also a definite tendency towards an increase in the height of the shoots and the size of the inflorescences, and a decrease in the number of inflorescences and leaves, with the layouts of 45 x 10 cm and 70 x 10 cm. The productivity (wet weight) and the structure of the yield of the aboveground mass of Echinacea purpurea depending on the placement schemes are presented in Table 1. With the densest placement of plants (45 x 10 cm), stems comprised the largest percentage of the raw material, 51.4%, which, as is known, reduces its quality. The content of leaves and inflorescences in the raw material was 34.4% and 14.2%, respectively, and the productivity of one plant was 158.7 grams. The weight of the stems in the variants with the placement of 45 x 30 cm and 70 x 30 cm was the largest, but their share in the raw material did not exceed 48.7%. In these variants, there was also a significant increase in the mass of leaves of the plants.

In general, analyzing the data on the mass of the entire plant of Echinacea pallida, it can be noted that the area of nutrition did not significantly affect its productivity. In the experiments, the smallest value of 140.2 grams was observed with the 45 x 10 cm planting pattern. In the other variants, the weight of one plant was 236.7-251.7 grams. It should be noted that in the 70 x 30 cm variant, the raw material was of the highest quality in terms of yield structure, which is explained by the low content of stems and the high content of inflorescences. Figure 1 shows the results of the studies of the effects of the placement schemes on the productivity of the echinacea root system (for Echinacea purpurea, LSD0.05 = 6.2; for Echinacea pallida, LSD0.05 = 3.8). They indicate a higher productivity of Echinacea purpurea compared to Echinacea pallida. This is especially noticeable in the variants with a row spacing of 45 centimeters. Wider row spacings eliminate the difference, mainly due to an increase in the productivity of the rhizomes of Echinacea pallida. When the plants were placed according to the 70 x 30 cm scheme, the productivity of the aboveground part and of the root system of both Echinacea species was practically at the same level, which indicates the significant role of the spatial placement of the culture. The correlation analysis performed on the above data served as the basis for building the correlation pleiades (Figs.
2 and 3), which reflect the relationships between the most significant indicators. For Echinacea purpurea, the mass of the aerial part had the most significant correlations with the mass of stems (r = 0.879), the number of inflorescences (r = 0.735), and the mass of the root system (r = 0.815). The mass of the root system also correlated strongly with the mass of inflorescences and with the width and length of the leaf blade. Echinacea pallida had much lower values of these indicators. The weight of its aerial part had significant correlations with the weight of stems (r = 0.929), the weight of inflorescences (r = 0.859), the weight of leaves (r = 0.787), and the leaf area (r = 0.787). The mass of the root system had no significant correlations with the other parameters of the plant.

On the basis of these data, a model of the optimal agrocenosis of Echinacea was developed. For this, graphical representations of the most accurate regression equations, obtained by approximating the experimental data, were used (Figs. 4 and 5; panel A shows the aboveground part, panel B the root system). One of the graphs shows the dependence of productivity and yield on the area of plant nutrition, the other on the density of plants. After combining the two trends, whose coordinate systems are related to each other, we obtained two lines whose intersection indicates the optimal yield per square meter, or the productivity of one plant and, accordingly, the number of plants per square meter. It should be noted that the algorithms presented below determine the optimal possible values. According to the calculations, it is possible to program a larger yield by increasing the number of plants per unit area, but this will inevitably decrease the mass of individual plants.

Figure 4 shows a graphical representation of the algorithm for Echinacea purpurea. According to the calculations, to obtain the optimal productivity of the aboveground mass of Echinacea purpurea (180-200 g per plant in our experiments), it is necessary to form an agrocenosis with a density of 10 plants per 1 m². When sown with a 45 cm row spacing, this corresponds to 4.5 plants per linear meter. At the same time, an increase in yield can be achieved by compacting the agrocenosis up to a certain level. The optimal nutritional area for the root system also corresponds to 10 plants per 1 m², but the figure shows that 18-20 plants/m² is the upper limit for the productivity of the root system, and further compaction will not increase the yield of roots per unit area. As can be seen in Figure 5, the productivity of the aboveground mass of Echinacea pallida of 240-250 g is optimal when creating a seeding density of 12-14 plants/m², and this is practically the maximum possible level. Further compaction of the sowing causes a decrease in the yield of the aboveground mass. The optimal productivity of the root system is 34-36 g at a density of 12-14 plants/m², and an increase in density also negatively affects its development. Our conclusions are close to the results of the studies [7], which showed that a high yield of raw material with a high content of chicoric acid was achieved at an optimal density of Echinacea crops of 9-10 plants per 1 m². Thus, the yield of Echinacea purpurea can be regulated within certain limits by the density of the agrocenosis. At the same time, the optimal density can be considered to be 10-11 plants/m², which corresponds to the schemes of 45 x 20 to 45 x 25 cm. For Echinacea pallida, the maximum yield is observed when creating an agrocenosis with a density of 12-14 plants/m², which is close to the optimal placement and corresponds to the schemes of 45 x 13 to 45 x 15 cm. A minimal computational sketch of the trend-intersection procedure is given below.
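The sketch below illustrates the graphical algorithm in Python; the density-mass pairs are hypothetical placeholders, not the measured field values. One trend is fitted for the per-plant mass, the yield per square meter is derived from it, both are normalized onto a common axis, and their crossing point is taken as the optimal density.

```python
# Minimal sketch of the trend-intersection procedure; the data points are
# hypothetical placeholders, not the measured field values.
import numpy as np
from scipy.optimize import brentq

density = np.array([3.2, 4.8, 7.4, 10.0, 14.3, 22.2])            # plants/m^2, assumed
per_plant = np.array([265.0, 240.0, 215.0, 195.0, 175.0, 150.0])  # g/plant, assumed

a, b = np.polyfit(density, per_plant, 1)   # per-plant mass ~ a*density + b
mass = lambda dns: a * dns + b             # declining per-plant trend (g)
yld = lambda dns: dns * mass(dns)          # yield per m^2 (g), rising then saturating

# Normalize both trends to [0, 1] over the studied density range so that they
# can be drawn in one coordinate system, as in the combined plots of Figs. 4-5.
lo, hi = density.min(), density.max()
grid = np.linspace(lo, hi, 200)
norm = lambda fn: lambda dns: (fn(dns) - fn(grid).min()) / (fn(grid).max() - fn(grid).min())
m_n, y_n = norm(mass), norm(yld)

d_opt = brentq(lambda dns: m_n(dns) - y_n(dns), lo, hi)   # crossing point
print(f"optimal density ~ {d_opt:.1f} plants/m^2, "
      f"per-plant mass ~ {mass(d_opt):.0f} g, yield ~ {yld(d_opt) / 1e3:.2f} kg/m^2")
```

In the paper's figures the same crossing is read off graphically; the sketch merely automates that reading.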
Planting with row spacings of 70 cm makes it possible to obtain higher-quality raw material by reducing the share of stems and increasing the share of inflorescences. This feature must be taken into account when setting up industrial plantations of Echinacea pallida.

Conclusions

1. As a result of the studies carried out in 2006-2015, the main regularities of the development of Echinacea purpurea and Echinacea pallida plants, depending on the placement schemes, were established. It was found that these Echinacea species reacted differently to spatial distribution in the cenosis due to their biological characteristics. In thickened crops, Echinacea purpurea formed a limited number of generative shoots (3.3-4.0) and could increase this number by 1.8-2.2 times with an increase in the distance between plants, while Echinacea pallida formed 2.7-5.7 stems regardless of the feeding area.

2. The obtained data were processed by the method of correlation analysis, which made it possible to construct correlation pleiades of echinacea productivity depending on the density of placement. It was found that for Echinacea purpurea, the mass of the aboveground part correlated mostly with the mass of the stems (r = 0.879), the number of inflorescences (r = 0.735), and the mass of the root system (r = 0.815); the mass of the root system correlated with the mass of inflorescences and with the width and length of the leaf blade. For Echinacea pallida, the mass of the aboveground part was determined by the mass of the stems (r = 0.929), the mass of the inflorescences (r = 0.859), and the mass of the leaves (r = 0.787); the mass of the root system had no significant correlations with the studied parameters.

3. Mathematical models were developed that made it possible to determine the patterns of plant placement in the agrocenosis. According to the calculations, the optimal productivity of Echinacea purpurea comprises an aboveground mass of 180-200 g/plant and a root-system weight of 20-25 g/plant; this is achieved at an agrocenosis density of 100-110 thousand plants per hectare. For Echinacea pallida, the optimal productivity of the aboveground mass is 240-250 g/plant, and of the rhizomes with roots, 34-36 g/plant; it can be achieved at an agrocenosis density of 120-140 thousand plants per hectare.
Aflatoxins in Maize: Can Their Occurrence Be Effectively Managed in Africa in the Face of Climate Change and Food Insecurity?

The dangers of population-level mycotoxin exposure have been well documented. Climate-sensitive aflatoxins (AFs) are important food hazards. The continual effects of climate change are projected to impact primary agricultural systems, and consequently food security, due to a reduction in yield with a negative influence on food safety. The African climate and subsistence farming techniques favour the growth of AF-producing fungal genera, particularly in maize, which is a food staple commonly associated with mycotoxin contamination. Predictive models are useful tools in the management of mycotoxin risk. Mycotoxin climate risk predictive models have been successfully developed in Australia, the USA, and Europe, but are still in their infancy in Africa. This review aims to investigate whether AFs' occurrence in African maize can be effectively mitigated, in the face of increasing climate change and food insecurity, using climate risk predictive studies. A systematic search is conducted using Google Scholar. The complexity of the prediction models developed to date varies from statistical tools such as simple regression equations to complex systems such as artificial intelligence models. Africa's inability to simulate a climate mycotoxin risk model in the past has been attributed to insufficient climate or AF contamination data. Recently, however, advancements in technology, including artificial intelligence modelling, have bridged this gap, as climate risk scenarios can now be correctly predicted from missing and unbalanced data.

Introduction

According to the Food and Agricultural Organization (FAO), globally, more than 197 million hectares of land are cultivated with maize, with a yield of 1.13 billion tons [1]. Therefore, the quality and safety assurance of maize for human and animal consumption is very important, especially with the growing concern over global food insecurity. One major quality and safety concern is the infection of maize kernels with mycotoxin-producing fungi, which are known to be climate-sensitive. Maize contamination is of global concern because of maize's significant role in the food and feed supply chain and its vulnerability to AF contamination [2]. Increased attention has been paid to AFs due to their role in reducing yields in agriculture, resulting in huge global economic losses [3,4], and due to their threat to food safety arising from their highly toxigenic and carcinogenic nature [5-8]. Aspergillus flavus has both virulent and non-virulent strains and, under different climatic conditions, may produce particular AFs, with aflatoxin B1 (AFB1) being the most carcinogenic AF [5]. Climate change (CC) has altered the distribution of fungal strains, and of their associated mycotoxins, in maize cultivations across different growing seasons. Kos et al. [9] concluded that increased AF levels in maize are mainly due to climatic extremes such as severe drought and high summer temperatures. Predicting the mycotoxin contamination of maize during its developmental stages, or close to harvest, allows for proper AF risk management in the industry by partners such as farmers, distributors, and feed producers. Preventive measures against AF contamination can be informed at the pre-harvest stage by field information.
Post-harvest practices, including prompt and proper drying methods and storage in appropriate conditions, will minimize fungal growth and mycotoxin contamination. Global warming is significantly driving altered temperature distributions and extreme precipitation patterns. Agreement exists on the important role of drought, high temperatures, and extreme precipitation patterns in increased AF production in maize [10-13]. Other studies have found a significant correlation between increased AF levels and insect-damaged crops [14,15]. The perception that a warmer year would automatically lead to an increased AF contamination risk has been debunked by the results obtained by Kaminiaris et al. [16], increasing the complexity of the problem. Other factors that contribute to the problem's complexity include the mycofloral profile and its interactions, differences in each crop's pathosystem, and interaction with an ever-changing climate. It is therefore essential to carry out predictive studies (using climate models with variables such as rainfall, humidity, and temperature) on the effects that CC may have on the presence of AFs in maize. This will address future uncertainties and highlight AF risk situations, in order to handle escalated mycotoxin incidence in agricultural products and, in the long run, to ensure food safety under increasing CC. Researchers have long anticipated that predictive models would be useful tools in the management of plant pathogens and mycotoxins. The rise of mathematical forecasting models allows for the prediction of AF contamination risks, and such models are widely used by stakeholders in the maize supply chain. The complexity of these prediction models varies from statistical tools, such as simple regression equations, to complex systems, such as artificial intelligence models. Many of these prediction models have been developed in the USA, Europe, and Australia. Predictive model development in Africa is not commonplace, and those models that are in development are still in their infancy. This review seeks to explore existing AF contamination risk predictive models that have the potential to be extrapolated to help control AFs in maize cultivated in Africa. This may aid in the quest to develop similar novel models in Africa.

Climate Change and Aflatoxin Contamination of Maize

Climate change influences the interactions among distinct mycotoxigenic species and the toxins they produce in foods and feeds [17,18]. Countries within the temperate climatic zone that seem safe today may become more vulnerable to the risk of disease and loss of crop production through contamination, as a result of changing climatic conditions [9,19]. The impact of CC on agricultural production is greatest in the tropics and subtropics, with sub-Saharan Africa showing high vulnerability to these impacts because of changing stresses and low adaptive capacity [20]. Africa is warming faster than the global average, and maize growing-season temperatures are typically increasing [21]. Changes in climatic variables such as precipitation, the increase in seasonal and extreme temperature events, and the intensity of droughts during maize growing seasons vary greatly and might result in changes in maize yields [20]. In sub-Saharan Africa, maize is mainly cultivated in subsistence farming systems under rain-fed (non-irrigated) conditions; this reliance on rainfall increases the susceptibility of maize crops to CC effects [22].
Low yields in this region are mainly attributable to drought stress, low soil fertility, weeds, pests, diseases, low input availability, and inappropriate seeds. These conditions enable fungal growth and mycotoxin production, making sub-Saharan Africa a region vulnerable to the mycotoxin contamination of crops. The Mediterranean basin is experiencing noteworthy changes in rainfall, giving rise to drought, increased temperatures, and elevated CO2, allowing for the occurrence of many adverse effects that influence food production and AF contamination in maize [23]. Rainfall variability and increased temperatures are the most significant variables of CC with severe effects on agriculture and, by extension, on maize production. In particular, high temperatures, greater CO2 concentrations, drought stress, and altered rainfall directly affect maize and the prevalence of A. flavus, favouring fungal growth, conidiation, and spore dispersal, and thereby affecting the growth of maize [24,25]. The recurrent and persistent occurrence of drought stimulates AF production by A. flavus in both pre-harvest and post-harvest conditions [11-13,26]. For example, in 2015, hot and dry climatic conditions led to the contamination of 6% of maize fields in France with AFs, and 69% of the isolated strains were known A. flavus strains [27]. Similar results have been reported in African countries [12,28,29]. Fungal development and AF production in agricultural products depend primarily on temperature, moisture, soil type, and storage conditions [9,30]. These fungi colonize many crops and adapt to different environmental conditions, having specific and overlapping ecological niches [31]. Understanding the different climatic factors influencing fungal survival, development, metabolic activity, and interaction with other organisms, such as host plants, is vital for determining their overall behaviour leading to toxin contamination [26]. In a study by Zuma-Netshiukhwi et al. [32], it was determined that a temperature rise of 1 °C or 2 °C will result in a roughly 20% to 25% decrease in grain yield as a result of CC. Kachapulula et al. [12] reported high levels of AFs in maize and groundnuts in a drier, low-rainfall zone as compared to cooler, high-rainfall zones. Likewise, Sirma et al. [33] reported that crops cultivated in semiarid tropical regions were more prone to AF contamination than those in temperate regions. Indirect effects of CC on mycotoxin contamination include increased drought stress and insect damage to the plant, and the phenology of the crop can be altered. The reproductive stages (germination, silking, pollen shedding, and grain filling) are sensitive stages of crop development; for this reason, the extent and severity of drought during these stages can decrease crop yield by approximately 50% [34,35]. Chauhan et al. [36] postulated that the grain-filling period is critical for agronomic practices aimed at decreasing the effects of drought and high temperatures on yield and at lowering the risk of AF contamination. Ding and Wang [37] reported high AF levels in groundnuts grown in regions with limited rainfall and high daily temperatures, or in groundnuts exposed to heat stress during the last month of the growing season. Overall, CC drives alterations in factors that have a critical impact on maize growth and yield, including rainfall, pests, diseases, and temperature [38]. These same conditions are favourable for fungal development and mycotoxin production.
Table 1 presents the AF contamination levels in maize found in some African nations between 2017 and 2022. All these countries have AF levels above their respective set standards, with high contamination rates. This is a clear food safety concern.

Aflatoxin Regulation and Food Security in Africa

Due to the carcinogenic and toxigenic nature of AFs, which includes hepatic toxicity, regulatory limits are placed on the quantity of AFs permitted in food and feed in several countries [51,52]. The intake of AF-contaminated staple foods is a serious health risk, as consumers will be exposed to the effects throughout their lives. Agriculture remains the main contributor to the livelihoods of the rural populations of developing countries. Subsistence farmers and their households consume high quantities of homegrown maize, and the rest is sold to their immediate community, creating a milieu conducive to an increased risk of mycotoxin exposure. The regulation of mycotoxins in African countries is lacking, and where regulations exist, they are typically only applied to export crops. Only fifteen African countries currently have mycotoxin regulatory standards [22]. Most subsistence farmers are not aware of mycotoxin regulations, and their judgement of crops as useless because of mould infestation is mostly based solely on visual analysis, which is highly subjective. The enforcement of these regulations in an informal environment such as subsistence farming is unclear and possibly impractical. Regulatory standards in African countries will therefore be difficult to enforce for economic and food security reasons. For example, Ambler et al. [53] stated that farmers who report that their crops have lost quality usually do not dispose of them, but use the crops for household consumption. The movement of food and feed across the world, including mycotoxin-contaminated products, highlights the importance of global and country-specific mycotoxin occurrence surveys of foods. Regulatory standards are a barrier to trade, particularly in regions with high levels of AF contamination, as in the case of Malawi, where it is only possible to export 4% of the maize produced to countries with stricter AF legal limits, such as the European Union and South Africa [54]. Senerwa et al. [55] estimated losses of millions of US dollars in the Kenyan dairy industry because of AF levels exceeding legal limits. Hence, AF regulations hinder trade in these countries, as contamination levels are often above legal regulatory levels (Table 1). As mentioned before, mycotoxin regulations in African nations are mostly enforced for crops destined for export; however, many African traders are mainly concerned with domestic and regional trade rather than exports [56]. Since maize quality is reduced by mycotoxin contamination, its monetary value is diminished: maize that was destined for food is downgraded and directed toward feed. Hence, the anticipated financial returns based on the quantity of maize produced cannot be realised due to mycotoxin contamination. This low return on investment negatively affects the livelihood of the seller, resulting in poverty. Food security is a serious global issue topping the development agendas of most countries, especially those in Africa. Severe food insecurity is prevalent in sub-Saharan Africa; for instance, one in four households in the region cannot access adequate food [57].
This worsening of the food security of this region has been attributed to climate shocks, conflicts, and economic slowdowns [1,58]. Millions of Africans could be stripped of their food supply if mycotoxin regulations were effectively enforced [28]. It has been projected that, by 2027, maize consumption will increase by 16%, especially in sub-Saharan Africa, where human and livestock populations are growing rapidly. Whether this growth will increase the exposure of maize to mycotoxins is a matter of ongoing research [59]. This increase in maize consumption will increase the demand for maize. Factors such as CC and farming systems could directly influence the mycotoxin contamination of maize. If the 2027 projections hold true, and if conscious decisions are not made now to control the mycotoxin contamination of crops, for example through supplementary irrigation, the use of fungicides, and improvements in storage facilities, among others, the mycotoxin contamination risk will keep increasing, resulting in severe food insecurity. Hoffmann and Moser [60] showed that products with a higher price are less contaminated than products sold at a lower price. Thus, in the face of food insecurity, food safety measures should not be disregarded. When managing AF levels in foods, food producers charge a higher price than firms without this management action. Ayyat et al. [61] concluded that, when feed is treated with AF-absorbent materials, the treatment reduces toxicity in Nile tilapia, resulting in increased body mass and higher monetary value. The pertinent question is: in the current atmosphere of food insecurity, how many people are willing to pay higher prices for food where cheaper substitutes exist? A lack of awareness and farmers' past experiences are considered to be the underlying factors that contribute to their unwillingness to pay for AflaSafe-treated food [62]. Studies carried out in centres of AF endemism in Africa showed that close to 90% of the population understood that mould poses a risk to human health. Few, however, understood what that risk is, and half believed that any toxins would be destroyed by cooking. It is also common practice for farmers who report crops as damaged by AFs to redirect the crop from the market to their personal consumption [53,63,64].

Aflatoxin Predictive Models in Africa

Since AF contamination occurs at different stages (pre-harvest and post-harvest) of the food production chain, control measures are based on these contamination stages. Some of these preventative techniques are knowingly or unknowingly implemented by subsistence farmers in Africa, either to reduce the effects of mycotoxins or as routine agronomic practices at the different crop growth stages. Proper disposal of aflatoxin-contaminated feed and crops is uncommon, even though East African Community policy advises that incineration or burial of contaminated crops or feed be practiced [65]. Care has to be taken, because previously infected residues can be re-introduced into the system if not buried properly. Much research has concentrated on the pre-harvest and post-harvest control of AF contamination in crops, especially in Africa. Despite all the existing mitigation methods available, AF contamination continues to be a global food safety issue, with high incidences continually being reported in Africa. Climate change remains the primary factor that drives altered fungal proliferation and mycotoxin contamination [22].
The climate is rapidly changing, which makes it more difficult to rely on mycotoxin research data from a particular season, owing to high interseasonal variability. Anticipatory studies therefore seem promising for addressing and highlighting AF risk situations on a regional basis within the African continent in the face of CC. Mycotoxin contamination risk predictive models, incorporating AF field data and climate data, will offer future solutions. Based on their application techniques, mycotoxin predictive models may be grouped as mechanistic, empirical, or hybrid. Mechanistic models replicate the fundamental systems of crop and fungal developmental stages. Such models require an advanced understanding of each living system and substantial experimental research under different environments to obtain the needed input data, including temperature, rainfall, and soil properties [23]. Empirical modelling, on the other hand, uses mathematical functions to describe field conditions and the connected response variables [23]. Such a model is tied to the particular environment, weather, and seasons involved in its development and requires recalibration when applied in another area. Empirical models are used for conditions in an area represented by observational data; thus, these models are unable to forecast situations that never occurred in the model development dataset, such as extreme weather events resulting from CC or new cultivation techniques. Hybrid models apply the principles of both mechanistic and empirical models. Since assumptions are always made about biophysical information and some amount of statistical analysis is applied, all models are, to some extent, hybrid in nature; their categorization is therefore based mainly on their primary method of prediction. Keller et al. [66] proposed a hybrid model for AF risk prediction that combines the advantages of empirical and mechanistic modelling by extending these models to different spatial and temporal domains. Cross-validation of the three modelling methods would be useful for better comprehending the merits and drawbacks associated with each, and for arriving at a simplified model that can easily be used in the prediction of mycotoxin contamination in a given crop. Recently, modelling techniques have been upgraded. For instance, machine learning algorithms have lately been introduced into food safety domains [67,68]. Such techniques can learn from data inputs and make data-driven forecasts. One such machine learning approach is Bayesian network (BN) modelling, which has been used to predict mycotoxin contamination in cereals in Serbia [69]. Bayesian network models can blend statistical relationships and expert knowledge. A BN model is a probabilistic model that is based on Bayesian statistics and decision theory, in addition to graph theory. BN models can deal well with irregular data [69]. This gives BN modelling an edge over other models in predicting mycotoxins, because it allows the model to run in the early maize growing period, when details of the entire growth period are unavailable. It is, therefore, useful for early-warning purposes. Unlike linear regression models, BN models effortlessly analyse dependencies between variables: they handle non-linear relationships and blend numerous types of data, such as measurement data, expert skill, and consumer feedback [70]. BN models can incorporate expert knowledge and are flexible in adding new data to the prediction process. Additionally, BN models can provide results with incomplete data on the model input variables, with the caveat that this may influence the accuracy of the outputs. A toy sketch of such a network is given below.
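The following sketch illustrates the kind of network described above, assuming the Python library pgmpy; the variables, their binary states, and all probability values are invented placeholders, not the Serbian model of ref. [69] nor calibrated field values.

```python
# Toy Bayesian network for aflatoxin risk, assuming the pgmpy library.
# Structure and all probabilities are invented placeholders.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two climate drivers feed one risk node (states: 0 = low/no, 1 = high/yes).
model = BayesianNetwork([("Drought", "AFRisk"), ("HeatStress", "AFRisk")])

cpd_drought = TabularCPD("Drought", 2, [[0.7], [0.3]])
cpd_heat = TabularCPD("HeatStress", 2, [[0.6], [0.4]])
cpd_risk = TabularCPD(
    "AFRisk", 2,
    [[0.95, 0.70, 0.60, 0.20],   # P(AFRisk = low  | Drought, HeatStress)
     [0.05, 0.30, 0.40, 0.80]],  # P(AFRisk = high | Drought, HeatStress)
    evidence=["Drought", "HeatStress"], evidence_card=[2, 2],
)
model.add_cpds(cpd_drought, cpd_heat, cpd_risk)
assert model.check_model()

# Early-season query: only the drought status is known so far; the network
# still returns a (less certain) risk estimate from this incomplete evidence.
infer = VariableElimination(model)
print(infer.query(["AFRisk"], evidence={"Drought": 1}))
```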
On the other hand, a simple logistic regression equation can provide an accuracy above 60% in the prediction of AF concentration in crops from a particular region [71]. Logistic regression is a well-established modelling approach that has been applied in numerous research fields, including food safety. Logistic regression estimates the parameters of the log odds of the probability of a binary event (e.g., the presence or absence of mycotoxins) [72]. BN models, by contrast, can be formulated without the assumptions of linearity in the logit or of additivity [69]. The logistic regression method is highly data-dependent, and the fitted model cannot be used for agricultural conditions other than those introduced in the model development process without prior validation.
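A minimal sketch of such a logistic model, assuming scikit-learn, is given below; the weather features and the exceedance labels are fabricated placeholders rather than survey data, so the fitted probabilities carry no agronomic meaning.

```python
# Toy logistic regression for the binary event "aflatoxin above the legal
# limit"; all data below are fabricated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
temp = rng.normal(28.0, 4.0, n)      # mean growing-season temperature (deg C)
rain = rng.normal(450.0, 120.0, n)   # seasonal rainfall (mm)
X = np.column_stack([temp, rain])

# Fabricated generating rule: hot, dry seasons raise the log odds of exceedance.
logit = 0.35 * (temp - 28.0) - 0.012 * (rain - 450.0) - 0.5
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

clf = LogisticRegression().fit(X, y)
p_hot_dry = clf.predict_proba([[33.0, 300.0]])[0, 1]
print(f"P(exceedance | 33 degC, 300 mm) = {p_hot_dry:.2f}")
```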
The APSIM model, a hybrid model first developed in Australia, has the potential to be used in Africa, as its authors are extending their research to Kenya [36,73]. Peanut was used as the substrate for the cardinal temperatures; the model therefore needs to be verified for maize to increase its accuracy. The design of a practical prediction model for pre-harvest AF contamination from the APSIM model is a challenge because of its inability to reconcile the water activity parameter with field conditions [73]. The AFLA-maize model, a mechanistic model, was developed for maize grown in Italy [74]. It has been successful in predicting aflatoxin contamination in crops and is very adaptable for use in other regions and for other crops. The ability to assess the impact of climate change on mycotoxin risk is not restricted to Europe [74], but is being extended to other areas. Recently, the AFLA-maize model was effectively adapted to predict AFB1 occurrence in maize in Malawi [75]. In Greece, the AFLA-maize model was replicated in another pathosystem, known as AFLA-pistachio. The AFLA-maize model is "based on two sub-models, one accounting for the host crop phenology and the other for the A. flavus infection cycle". Each model has its own associated advantages and disadvantages and requires different degrees of calibration and validation. Table 2 presents different models that have been developed for mycotoxin prediction in crops; most of these models were developed in the USA, Asia, and Europe. Table 2. Aflatoxin predictive risk models in maize with the potential of application in Africa.

Conclusions

The rationale behind the present review was to evaluate whether AF contamination of maize can be controlled or monitored in Africa in the face of CC and food insecurity. From the literature, the AFLA-maize model is appropriate, since the same pathosystem, i.e., the A. flavus infection cycle and maize phenology, is being dealt with, albeit in a different geographic location. Maize phenology in Africa differs from that in Europe, where the model was initially developed, because of factors such as the degree of growth, maize variety, prevailing weather conditions, and the use of different farming techniques; hence the need for recalibration in a new location. High levels of interaction between agricultural practices complicate the task of developing the mathematical functions to be included in the creation of a predictive model [77]. With advancements in technology, other machine learning models or well-designed simple classic logistic regressions can be used on African soil. Since all of the different modelling procedures have advantages and disadvantages, a single model that blends all the model types could be a possible solution. This would merge the models in a distinctive way and strengthen their merits.

Methodology

A literature review was conducted using the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [84] to gather information on the contamination of maize, foods, and feeds with mycotoxins in Southern Africa. A literature search was performed using Google Scholar, with key words and phrases used to extract peer-reviewed studies on mycotoxin predictive models. The key words and phrases used to access the information were: mycotoxin; aflatoxin; maize; model; prevention; and cereals. Sixty-nine articles with information related to this review were downloaded and evaluated.

Institutional Review Board Statement: This is a review paper without animal use, and thus institutional review is not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: Not applicable. Conflicts of Interest: The authors declare no conflict of interest.
Measurement of the WW + WZ cross section and limits on anomalous triple gauge couplings using final states with one lepton, missing transverse momentum, and two jets with the ATLAS detector at √s = 7 TeV

The production of a W boson decaying to eν or µν in association with a W or Z boson decaying to two jets is studied using 4.6 fb−1 of proton-proton collision data at √s = 7 TeV recorded with the ATLAS detector at the LHC. The combined WW + WZ cross section is measured with a significance of 3.4σ and is found to be 68 ± 7 (stat.) ± 19 (syst.) pb, in agreement with the Standard Model expectation of 61.1 ± 2.2 pb. The distribution of the transverse momentum of the dijet system is used to set limits on anomalous contributions to the triple gauge coupling vertices and on parameters of an effective-field-theory model.

Introduction

The study of vector boson pair production at the Large Hadron Collider (LHC) provides an important test of the electroweak sector of the Standard Model (SM) at the highest available energies. Deviations of the total or differential cross sections from the SM predictions may arise from anomalous triple gauge boson interactions [1] or from new particles decaying into vector bosons [2]. Vector boson pair production is also an important source of background in studies of the Higgs boson and in searches for signals of physics beyond the SM.

The cross sections for WW and WZ production at the LHC have previously been measured in fully leptonic final states [3-6]. The semileptonic final states suffer from larger backgrounds from W or Z boson production in association with jets, but benefit from significantly larger branching fractions than the fully leptonic states and thus represent important complementary measurements. In this paper the WW + WZ cross section is measured in the ℓνjj (ℓ = e, µ) final state using a data sample of proton-proton (pp) collisions with an integrated luminosity of 4.6 fb−1 collected by the ATLAS detector at the LHC. In addition, the reconstructed dijet transverse momentum distribution is used to set limits on anomalous contributions to the triple gauge coupling vertices (aTGCs), after requiring that the dijet invariant mass is close to the mass of the W or Z boson.

The combined WW + WZ production cross section (hereafter, the WV cross section, where V = W, Z) has been measured in the ℓνjj final state in proton-antiproton collisions at the Tevatron collider by both the CDF [7] and D0 [8] collaborations, and more recently in pp collisions by the CMS [9] collaboration. Limits on anomalous triple gauge couplings in WV → ℓνjj production have also been presented by CDF [10], D0 [11], and CMS [9].

This paper is organised as follows. The overall analysis strategy is described in section 2, and a short description of the ATLAS detector is given in section 3. The Monte Carlo (MC) simulation used for the signal and background modelling is summarized in section 4. Details of the object and event reconstruction and of the event selection are given in sections 5 and 6, respectively. The method used to estimate the signal and background processes is discussed in section 7. The cross-section measurement is detailed in section 8, and the systematic uncertainties are described in section 9. The results of the cross-section measurement are summarized in section 10, and the extraction of the anomalous triple gauge coupling limits is discussed in section 11. Finally, conclusions are drawn in section 12.
Analysis strategy

Candidate WV → ℓνjj events are required to contain exactly one lepton (electron or muon), large missing transverse momentum E_T^miss, and exactly two jets. The selected events are accepted if they pass a set of kinematic cuts chosen to enhance the signal-to-background ratio. The invariant mass distribution of the two jets (m_jj), representing the candidate decay products of the hadronically decaying boson, is obtained from all the selected events. The WW + WZ signal yield (N_WV) is obtained by performing a binned maximum-likelihood fit to the m_jj distribution using templates based on MC simulations. The fit is performed on events in an m_jj range much larger than the range where the signal peaks, allowing the nearly signal-free m_jj regions to constrain the rate of the W + jets events, which are the largest background. Because of the finite dijet mass resolution, there is considerable overlap between the m_jj peaks from WW → ℓνqq̄′ and WZ → ℓνqq̄ decays. Given the expected uncertainties in this measurement, and the relatively small contribution from the WZ process (about 20% of the total signal yield), no attempt is made to distinguish between the WW and WZ contributions in this analysis. Instead, the signal yield is obtained under the assumption that the ratio of the WW and WZ cross sections is equal to the SM prediction.

The fiducial cross section (σ_fid) is evaluated from the measured signal yield. The fiducial phase space is defined to be as close as possible to the phase space defined by the reconstructed event selection. The fiducial cross-section measurement is obtained as

\sigma_{\mathrm{fid}} = \frac{N_{WV}}{L \sum_{\ell=e,\mu} D_{\mathrm{fid},\ell}} ,

where L is the integrated luminosity and the D_fid,ℓ are factors that correct for the difference between the number of WV → ℓνjj events produced in the fiducial phase space and the number of reconstructed events passing the event selection. The total cross section (σ_tot) is obtained by extrapolating the fiducial cross section to the full phase space using theoretical predictions:

\sigma_{\mathrm{tot}} = \frac{N_{WV}}{L \sum_{\ell=e,\mu} D_{\mathrm{tot},\ell}} ,

where the D_tot,ℓ are factors that depend on the acceptances, the reconstruction efficiencies, and the branching fractions for WW → ℓνjj and WZ → ℓνjj. Details of the maximum-likelihood fit and the precise definitions of the fiducial volume and of the factors D_fid,ℓ and D_tot,ℓ are given in section 8. Lastly, the transverse momentum distribution of the hadronically decaying V candidates (p_Tjj) is used to set limits on the aTGCs affecting the WWZ and WWγ vertices. The event selection is the same as the one used for the cross-section measurement, except that the dijet mass is required to be close to the masses of the W/Z bosons in order to increase the signal-to-background ratio. The aTGC limits are calculated by performing a binned maximum-likelihood fit to the p_Tjj distributions. The ratio of the WW and WZ cross sections at each aTGC point is assumed to be that predicted by theory, including the aTGC contribution. Details of the limit extraction are given in section 11.
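As an illustration of the fitting strategy just described, the following is a minimal sketch of a binned maximum-likelihood template fit. The binning, the template shapes, and the yields are invented toy numbers, not the ATLAS templates, and only one floating background is kept.

```python
# Toy binned maximum-likelihood template fit in the style described above;
# all templates and yields are invented placeholders.
import numpy as np
from scipy.optimize import minimize

# Hypothetical m_jj templates (expected events per bin)
sig = np.array([4.0, 25.0, 80.0, 45.0, 8.0, 2.0])           # peaks near m_W / m_Z
bkg = np.array([900.0, 820.0, 740.0, 660.0, 580.0, 500.0])  # smooth W+jets shape
data = np.random.default_rng(7).poisson(sig + bkg)

def nll(pars):
    mu, b = pars                         # signal strength and background scale
    lam = mu * sig + b * bkg             # expected events per bin
    if np.any(lam <= 0):                 # guard against unphysical yields
        return np.inf
    return float(np.sum(lam - data * np.log(lam)))  # Poisson NLL up to a constant

fit = minimize(nll, x0=[1.0, 1.0], method="Nelder-Mead")
mu_hat, b_hat = fit.x
print(f"fitted signal strength = {mu_hat:.2f}, background scale = {b_hat:.2f}")
# The nearly signal-free sideband bins pin down b, so the fit can separate the
# small peaking signal from the large smooth background.
```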
The ATLAS detector

The ATLAS detector [12] is a general-purpose particle detector with cylindrical geometry which consists of several sub-detectors surrounding the interaction point and covering almost the full solid angle. The trajectories and momenta of charged particles are measured within the pseudorapidity region |η| < 2.5 by multi-layer silicon pixel and microstrip detectors and a transition radiation tracker. The tracking system is located in a superconducting solenoid producing a 2 T magnetic field and is surrounded by a high-granularity liquid-argon (LAr) sampling electromagnetic (EM) calorimeter with coverage up to |η| = 3.2. The EM calorimeter is split into a barrel section (|η| < 1.475) and endcaps (1.375 < |η| < 3.2). A scintillating-tile hadronic calorimeter using steel as the absorber provides coverage in the range |η| < 1.7. In the forward region, LAr calorimeters provide electromagnetic and hadronic measurements and extend the coverage to |η| < 4.9. The muon spectrometer surrounds the ATLAS calorimeter system; it operates in a toroidal magnetic field provided by air-core superconducting magnets and includes tracking chambers for precise muon momentum measurements up to |η| = 2.7 and trigger chambers covering the range |η| < 2.4. The online event selection is based on a three-level trigger system. The hardware-based Level-1 trigger uses a subset of the detector data to reduce the event rate from 20 MHz to below 75 kHz. Two subsequent software-level triggers further reduce the rate to about 300 Hz using the complete detector information.

Simulated event samples

Simulated event samples are used to model both the signal and all background processes except for the multijet background, which is estimated using a data-driven procedure. The signal and background MC samples are processed using the ATLAS detector simulation [13] based on Geant4 [14] and the same reconstruction algorithms as used for collision data. The simulation includes the modelling of additional pp interactions in the same and neighbouring bunch crossings (pile-up).

Diboson signal events are generated using mc@nlo v4.07 [15] interfaced to herwig [16,17] for the parton showering and hadronisation and to jimmy [18] for the modelling of the underlying event. The on-shell gauge bosons are generated in mc@nlo and are subsequently decayed by herwig. This leads to a zero width for the decayed W/Z bosons and to the loss of the spin-correlation information for the decay products. The effects arising from this generation procedure are studied and considered, where needed, as systematic uncertainties. The ct10 [19] parton distribution function (PDF) set is used. The diboson samples are normalised to the next-to-leading-order (NLO) cross sections of 43.7 ± 1.9 pb and 17.4 ± 1.1 pb for WW and WZ, respectively. The central values of the diboson cross sections are estimated using mc@nlo with appropriately chosen factorisation and renormalisation scales. The uncertainties are evaluated by varying the scales, the PDF, and α_s. The combined PDF+α_s uncertainties are estimated by varying them within their 68% confidence-level (CL) limits, following the procedure in ref.
[20]. The gg → WW and H → WW processes are not included in the signal samples nor in the cross-section prediction, since their contributions are small compared to the expected sensitivity of this measurement. The gg → WW process would increase the total predicted WV cross section by about 2-4%. The H → WW process would increase the WV cross section by about 5%, but after applying all event selection criteria (see section 6), it would only increase the expected number of signal events by about 2%. The γγ → WW process [21] is also neglected. Additional signal samples generated with pythia [22] are used for systematic studies.

The dominant background to the WV → ℓνjj process is W/Z boson production in association with jets, which is modelled using alpgen v2.13 [23] with cteq6l1 [24] for the PDF, interfaced to herwig and jimmy. The W/Z + jets cross sections predicted by alpgen are scaled to the QCD next-to-next-to-leading-order (NNLO) inclusive cross sections [25] times branching fractions for a single lepton species: σ(W → ℓν) = 10.46 ± 0.42 nb and σ(Z/γ* → ℓℓ) = 1.070 ± 0.054 nb for invariant masses of the two leptons (m_ℓℓ) > 40 GeV. Production of a W or Z boson plus heavy-flavour jets is also modelled using the alpgen+herwig+jimmy generator combination described above, and the overlap with the inclusive W/Z + jets samples is removed to avoid double-counting. Samples generated using sherpa v1.4.1 [26-29] with ct10 PDFs are used for cross-checks.

Samples of tt̄ events are produced using mc@nlo v4.01 [30] with the ct10 PDF set, interfaced to herwig and jimmy. The tt̄ cross section is σ_tt̄ = 177 +10/−11 pb for a top quark mass of 172.5 GeV. It has been calculated at NNLO in QCD, including resummation of next-to-next-to-leading-logarithmic (NNLL) soft gluon terms, with top++2.0 [31-36]. Samples of tt̄ events generated with acermc v3.8 [37] interfaced to pythia are also considered for systematic uncertainty studies.

Single-top events from the Wt and s-channel processes are generated using mc@nlo v4.01 [38,39] interfaced to herwig and jimmy, with cross sections of 15.7 ± 1.2 pb [40] and 4.6 ± 0.2 pb [41], respectively. The ct10 PDF set is used. The single-top t-channel process is generated using acermc v3.8 + pythia with the mrst LO** [42] PDF set, using a cross section of 64.6 +2.6/−1.7 pb [43]. The ZZ diboson background process is generated using herwig with mrst LO** PDFs. It is normalised to the NLO cross section of 5.96 ± 0.3 pb (m_ℓℓ > 60 GeV), estimated with mcfm [44]. The uncertainty is evaluated using the same procedure as for the diboson signal. The Wγ process is generated with madgraph v4 [45] interfaced to pythia. After the selection criteria are applied, the contribution of this process to the background is very small (less than 0.5% of the total), and so it is neglected.

Object and event reconstruction

Events were selected by a single-lepton (electron or muon) trigger with a threshold on the transverse energy (E_T) in the electron case or on the transverse momentum (p_T) in the muon case. The p_T threshold for the single-muon trigger was 18 GeV, while for electrons the requirement was E_T > 20 GeV for the early part of data-taking and E_T > 22 GeV after the instantaneous luminosity of the LHC increased.
Proton-proton collision events are identified by requiring that the events have at least one reconstructed vertex with at least three associated tracks with transverse momentum p_T,track > 0.4 GeV. If two or more such vertices are found, the one with the largest sum of p²_T,track is considered to be the primary vertex. Electron candidates are formed by associating clusters of cells in the EM calorimeter with tracks reconstructed in the inner detector [46]. The transverse energy (E_T), calculated from the cluster energy and the track direction, must be greater than 25 GeV, in order to be in the region of maximum trigger efficiency. Candidates are accepted if they lie in the region |η| < 2.47, excluding the transition region between the barrel and endcap EM calorimeters, 1.37 < |η| < 1.52. The candidate must satisfy the "tight" identification criteria described in ref. [46]. For the electron-candidate track, the ratio of the transverse impact parameter, d_0, to its uncertainty, σ(d_0), must satisfy |d_0/σ(d_0)| < 10. The longitudinal impact parameter, z_0, must have an absolute value less than 1 mm. Both d_0 and z_0 are measured with respect to the primary vertex. To ensure isolation from surrounding particles, calorimetric and tracking criteria are applied. The total calorimeter E_T in a cone of size ∆R = √(∆φ² + ∆η²) = 0.3 around the electron candidate, excluding any E_T associated with the candidate itself, must be less than 14% of the electron E_T value. The calorimeter response is corrected for the additional energy deposited by pile-up. In addition, the scalar sum of the p_T of the tracks within ∆R = 0.3 of the electron candidate (not including the electron track) must be less than 13% of the electron p_T value. Muon candidates are identified [47] by associating tracks reconstructed in the muon spectrometer with tracks reconstructed in the inner detector. The momentum of the combined muon track is calculated from the momenta of the two tracks, correcting for the energy loss in the calorimeter. Muon candidates must satisfy p_T > 25 GeV and |η| < 2.4. The p_T threshold is chosen to be well within the plateau of the trigger efficiency. Muon candidates must also be consistent with originating from the primary vertex, in order to reject muons from cosmic-ray interactions and to reduce the background from heavy-flavour decays. Specifically, the d_0 significance must satisfy |d_0/σ(d_0)| < 3, and |z_0| must be less than 1 mm. To reduce misidentification and improve the muon momentum resolution, requirements on the minimum number of hits in the various detectors are applied to the muon tracks. Isolated muons are selected by requiring that the scalar sum of the p_T of the tracks within ∆R = 0.3 of the muon (not including the muon track) be less than 15% of the muon p_T, and that the total calorimeter E_T in a cone of ∆R = 0.3 around the muon candidate (excluding the E_T associated with the muon) be less than 14% of the muon p_T. The electron and muon isolation requirements are the same as those used in ref. [3].
Corrections are applied to MC events in order to account for differences between data and MC simulation in the trigger and identification efficiencies, and in the lepton momentum and energy scale and resolution. The trigger and reconstruction efficiency scale factors are measured using the tag-and-probe method on events with Z-boson candidates [46,47]. The lepton momenta are calibrated with scale factors obtained by comparing the reconstructed mass distribution of Z boson candidate events in data with that of simulated events [47,48].

Jets are reconstructed from calorimeter energy clusters by using the anti-k_t algorithm [49,50] with a radius parameter of 0.4. The selected jets must satisfy E_T > 25 GeV and |η| < 2.8. Reconstructed jets are corrected for the non-compensating calorimeter response, upstream material, and other effects using p_T- and η-dependent correction factors derived from MC simulation and validated with test-beam and collision data [50]. Jets consistent with being produced from pile-up interactions are identified using the Jet Vertex Fraction (JVF) variable. This variable is calculated using the tracks that are associated with the jet, and is defined as the ratio of the scalar p_T sum of the associated tracks that originate from the primary vertex to the scalar p_T sum of all associated tracks. Jets within |η| < 2.5 are retained if they have a JVF larger than 75% or if they have no associated track. The efficiency of this cut is ∼95% up to |η| < 2.5 and is well modelled by the MC simulation. Jets are required to satisfy quality criteria and to lie at a distance ∆R > 0.5 from well-identified leptons.

The E_T^miss is estimated from reconstructed electrons with |η| < 2.47, muons with |η| < 2.7, jets with |η| < 4.9, and clusters of energy in the calorimeter not associated with reconstructed objects within |η| < 4.5 [51]. The energy clusters are calibrated to the EM scale or the hadronic energy scale according to the cluster characteristics. The expected energy deposit of identified muons in the calorimeter is subtracted.
Event selection

The WV candidates are selected by requiring exactly one high-p_T lepton, missing transverse momentum, and exactly two jets. Events are required to contain exactly one reconstructed lepton candidate with p_T > 25 GeV; events with more than one identified lepton are rejected in order to suppress the Z+jets and tt̄ backgrounds. The lepton candidate must be the one that triggered the event. Furthermore, events are required to have E_T^miss > 30 GeV in order to account for the presence of the unobserved neutrino from the W → ℓν decay. The transverse mass of the leptonically decaying W-boson candidate is

m_T = √( 2 p_T^ℓ E_T^miss (1 − cos ∆φ) ),

where ∆φ is the azimuthal angle between the lepton momentum and missing transverse momentum vectors, and it is required to satisfy m_T > 40 GeV. The E_T^miss and m_T criteria strongly suppress the multijet background. To further suppress the multijet background, the azimuthal angular separation between the leading-jet transverse momentum and the missing transverse momentum vectors must fulfil |∆φ(E_T^miss, j_1)| > 0.8. Backgrounds containing top-quark decays are strongly reduced by vetoing events that contain more than two jets with p_T > 25 GeV and |η| < 2.8. Events are required to contain exactly two jets with |η| < 2.0 and p_T > 25 GeV, with a p_T > 30 GeV requirement for the leading jet. In order to improve the signal-to-background ratio, the two jets are required to satisfy |∆η(j_1, j_2)| < 1.5. The angular distance between the two jets must satisfy ∆R(j_1, j_2) > 0.7 if the p_T of the dijet system is less than 250 GeV. Finally, the dijet invariant mass must be in the range 25 < m_jj < 250 GeV. The selection criteria were optimised to both increase the signal-to-background ratio and select a phase-space region well described by the Monte Carlo simulation. After applying all event selection criteria, 127 650 events are found in the electron channel and 134 846 in the muon channel.

Signal and background estimation

The shapes of the expected m_jj and p_T,jj distributions are used as templates for the cross-section fit and for the aTGC limit calculation, respectively. The expected shapes and rates of the distributions for the W+jets, Z+jets, tt̄, single-top, and signal processes are obtained from the MC simulation samples. The W+jets and Z+jets predicted rates are corrected using scale factors obtained with a data-driven method, as explained below.

Multijet background events can pass the event selection if one of the jets is reconstructed as a lepton. The rate and shape of the multijet background are estimated with data-driven methods, since the MC simulation does not reliably predict the rate of jets passing the lepton identification. The data-driven method consists of two steps: the first is designed to estimate the m_jj, p_T,jj, and E_T^miss shapes of the multijet background, and the second to measure its rate. The first step exploits suitably modified lepton identification criteria to define data samples enriched in multijet background and with kinematic characteristics as close as possible to those of the standard selection. The lepton identification criteria are modified differently for the muon and electron channels.
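The kinematic selection of section 6 can be expressed directly in terms of the reconstructed quantities. The sketch below is illustrative only; the event-record field names are assumptions, and the transverse-mass function implements the formula quoted above.

```python
import math

def transverse_mass(lep_pt, met, dphi):
    """m_T = sqrt(2 * pT(lepton) * ETmiss * (1 - cos(dphi)))."""
    return math.sqrt(2.0 * lep_pt * met * (1.0 - math.cos(dphi)))

def pass_event_selection(ev):
    """Kinematic event selection of section 6; `ev` is an assumed event record."""
    if ev["n_leptons"] != 1 or ev["met"] <= 30.0:
        return False
    if transverse_mass(ev["lep_pt"], ev["met"], ev["dphi_lep_met"]) <= 40.0:
        return False
    if abs(ev["dphi_met_j1"]) <= 0.8:                  # multijet suppression
        return False
    if ev["n_jets"] != 2:                              # exactly two selected jets
        return False
    j1, j2 = ev["jets"]
    if j1["pt"] <= 30.0 or abs(j1["eta"] - j2["eta"]) >= 1.5:
        return False
    if ev["pt_jj"] < 250.0 and ev["dR_jj"] <= 0.7:     # dijet angular cut
        return False
    return 25.0 < ev["m_jj"] < 250.0
```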
For the muon channel, the multijet-enriched sample is obtained by applying the full selection but inverting the transverse impact parameter requirement (|d_0/σ(d_0)| > 3). The selected sample is composed of muons that do not originate from the primary vertex, as expected for muons produced from heavy-flavour decays in jets. For the electron channel, the multijet-enriched sample is obtained by applying the full nominal selection but requiring the electron candidate to satisfy the "medium" [52] identification criteria but not the "tight" ones. This results in a sample enriched in events with a jet that mimics an electron. Finally, the shape of the multijet background is obtained from the data in these multijet-enriched samples, after subtracting the MC-based prediction for non-multijet processes.

The second step uses the E_T^miss shape of the multijet background, determined in the previous step, to obtain the multijet rate and a correction to the W/Z+jets normalisation. This is done by fitting the E_T^miss spectra obtained with the nominal selection but with the E_T^miss requirement removed. The fit, performed in the range 0 < E_T^miss < 400 GeV, extracts separate scale factors used to normalise the multijet and W/Z+jets samples. From this fit, the multijet contribution is extrapolated to the signal region (E_T^miss > 30 GeV) and is found to represent 5.3% and 3.7% of the events for the electron and muon channels, respectively. The W/Z+jets scale factors obtained from this fit are close to one and well within the systematic uncertainty of the theoretical prediction for both the electron and muon channels.

Table 1 shows the expected number of events for the signal and for each background process after the full selection is applied. The numbers of events observed in data are also listed. The signal-to-background ratio in the subrange 60 < m_jj < 120 GeV is about 2%. Figure 1 shows the m_jj distributions for data and the SM prediction for the electron and muon channels prior to performing the maximum-likelihood fit to extract the signal WV yield. The bottom plots in figure 1 show the ratios of data to the SM predictions overlaid with systematic uncertainty bands. The sources of systematic uncertainties and the strategy used to evaluate them are discussed in section 9. The data distributions are well within the systematic uncertainty bands for all values of the dijet mass in both channels.

Cross-section definition and fit method

As discussed in section 2, WV → ℓνjj candidates are selected in a fiducial phase space designed to increase the signal-to-background ratio. The fiducial phase space, which is identical for the electron and muon channels, is defined for Monte Carlo events by applying to the particle-level objects a selection as close as possible to the analysis selection described in section 6. This selection requires a W boson decaying leptonically and a W or Z boson decaying hadronically. W → τν decays are not included in the definition of the fiducial cross section.
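The second step of the multijet estimate described in section 7 amounts to a two-template fit of the E_T^miss spectrum. A minimal NumPy/SciPy sketch under simplifying assumptions (fixed non-multijet MC, Poisson-like bin errors) is:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_met_templates(data, multijet, wz_jets, other_mc):
    """Fit data ~ s_qcd * multijet + s_wz * wz_jets + other_mc (fixed),
    all arrays binned over 0 < ETmiss < 400 GeV."""
    def model(_x, s_qcd, s_wz):
        return s_qcd * multijet + s_wz * wz_jets + other_mc
    x = np.arange(len(data))
    popt, _ = curve_fit(model, x, data, p0=[1.0, 1.0],
                        sigma=np.sqrt(np.maximum(data, 1.0)))
    return popt  # (multijet scale factor, W/Z+jets scale factor)
```

The fitted multijet scale, integrated over E_T^miss > 30 GeV, then gives the multijet rate in the signal region.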
The leptonically decaying W boson is required to decay to an electron or a muon with p_T > 25 GeV and |η| < 2.47. The lepton p_T is obtained by summing the lepton transverse momentum and the transverse momenta of all photons within ∆R = 0.1 of the selected lepton. The transverse mass of the leptonically decaying W boson is required to be m_T > 40 GeV. Events must contain a hadronically decaying W or Z boson and two particle-level jets separated by ∆R > 0.5 from the selected leptons. Particle-level jets are reconstructed from particles with a mean decay length cτ > 10 mm using the anti-k_t algorithm with radius parameter R = 0.4. Decay products from leptonically decaying W/Z bosons (including photons within ∆R = 0.1 of the charged leptons) are excluded from the particle-level jets. The two selected jets must lie within |η| < 2.0 and have p_T > 25 GeV, with at least one of them having p_T > 30 GeV. Events containing more than two particle-level jets with p_T > 25 GeV and |η| < 2.8 are rejected. Moreover, the two selected jets must satisfy |∆η(j_1, j_2)| < 1.5, 25 < m_jj < 250 GeV and ∆R(j_1, j_2) > 0.7; the last condition is applied only if the transverse momentum of the dijet system is p_T,jj < 250 GeV. Finally, the E_T^miss, defined as the transverse momentum of the neutrino from the leptonically decaying W boson, is required to satisfy E_T^miss > 30 GeV and |∆φ(E_T^miss, j_1)| > 0.8.

The signal event yield in the fiducial volume is determined from a simultaneous maximum-likelihood fit to the m_jj distributions in the electron and muon channels. This method takes advantage of the difference between the shapes of the m_jj distributions of the various processes to separate the signal from the large underlying background. The m_jj templates, normalised to unit area, for the various processes contributing to the total expected m_jj distribution are shown in figure 2.

Table 1. Total number of events in data and expected yields for each process in the e and µ channels. The multijet and W/Z+jets yields are obtained from the fit to the E_T^miss distribution as explained in section 7. Uncertainties for the expected signal yields are based on the corresponding cross-section uncertainties, while for multijet and the other backgrounds the uncertainties correspond to the total rate uncertainty.
Systematic uncertainties (described in section 9) on the signal and background normalisation, as well as on the m_jj shapes, are included by introducing nuisance parameters (α) into the fit. The combined likelihood function L is expressed as

L(β, α) = ∏_{ℓ=e,µ} ∏_b Pois( n_b^ℓ | β ν_b^{sig,ℓ}(α) + ν_b^{bkg,ℓ}(α) ) × ∏_p f_p(α_p),   (8.1)

where β is the parameter of interest extracted from the fit and is a multiplicative factor applied to the signal normalisation; n_b^ℓ is the number of data events in bin b and channel ℓ, with ℓ = e, µ; ν_b^{bkg,ℓ} and ν_b^{sig,ℓ} are the numbers of expected events for the background and signal processes, respectively, in bin b and channel ℓ; and the f_p are Gaussian constraints on the nuisance parameters α_p. The expected number of signal events ν_b^{sig,ℓ} contains contributions from both the WW and WZ processes. The measured signal yield N^{WV}_ℓ is obtained from the product of the fitted β value and the expected number of signal events as

N^{WV}_ℓ = β Σ_b ν_b^{sig,ℓ}.

The diboson fiducial cross section (σ_fid) is extracted from N^{WV}_ℓ using eq. (2.1). The factors D_fid,ℓ account for the fact that two processes, WW → ℓνjj and WZ → ℓνjj, contribute to the signal yield with different cross sections, acceptances and correction factors, and are defined as

D_fid,ℓ = f^{WW}_fid C^{WW}_ℓ + (1 − f^{WW}_fid) C^{WZ}_ℓ,

where the C^{WV}_ℓ are the ratios of the detector-level signal yield after all analysis cuts to the signal yield in the fiducial phase space for the respective processes and lepton flavour. The values of C^{WW}_ℓ and C^{WZ}_ℓ vary between 0.61 and 0.74 and depend on the process and on the channel (electron, muon) considered. The factor f^{WW}_fid represents the ratio of the WW to the WW + WZ fiducial cross sections. The two processes are not separated by this analysis, so f^{WW}_fid is fixed to the SM value of 0.82, calculated with mc@nlo. The total cross section is obtained by extrapolating the fiducial event yield to the full phase space using eq. (2.2). The factors D_tot,ℓ are obtained from theoretical predictions and are defined as

D_tot,ℓ = f^{WW} A^{WW} B^{WW} C^{WW}_ℓ + (1 − f^{WW}) A^{WZ} B^{WZ} C^{WZ}_ℓ,

where the acceptances A^{WW} and A^{WZ} are calculated as the fractions of signal events satisfying the fiducial-volume selection criteria; they vary in the range 0.08–0.09 depending on the process and are independent of the lepton flavour. B^{WW} and B^{WZ} are the branching fractions for the decays WW → ℓνjj and WZ → ℓνjj, respectively [53].

Systematic uncertainties

The total systematic uncertainties on the fiducial and total cross sections are obtained by summing in quadrature the uncertainties on the signal yield, on the factors D_fid or D_tot, and on the integrated luminosity. Systematic uncertainties that affect the fitted signal yield are accounted for by including nuisance parameters with Gaussian constraints in the maximum-likelihood fit ("profiled" systematic uncertainties), with a few exceptions that are described below. The nuisance parameters describe the estimated rate or shape variations of the templates for the various processes. Systematic uncertainties arising from the same source are assumed to be 100% correlated between the electron and muon channels. Uncertainties from different sources are assumed to be independent.

Two of the largest systematic uncertainties are the jet energy scale (JES) and jet energy resolution (JER) uncertainties, determined as described in refs. [50] and [54]. The JES uncertainty also includes the effect of energy deposits due to pile-up, and the uncertainties on the JES and JER are propagated to the E_T^miss. The main impact of the JES and JER uncertainties on the measurement of the signal yield is due to the effect of these uncertainties on the shapes of the background distributions.
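Ignoring the nuisance parameters, the fit for β reduces to maximising a binned Poisson likelihood over the two channels. The sketch below shows this simplified version only; the full fit additionally profiles the nuisance parameters α with Gaussian constraints f_p.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def fit_beta(n_obs, nu_sig, nu_bkg):
    """Maximise the binned Poisson likelihood over the signal strength beta.
    n_obs, nu_sig, nu_bkg: dicts mapping channel ('e', 'mu') to bin arrays."""
    def nll(beta):
        total = 0.0
        for ch in n_obs:
            mu = beta * nu_sig[ch] + nu_bkg[ch]
            total -= np.sum(n_obs[ch] * np.log(mu) - mu)  # log Poisson, up to a constant
        return total
    res = minimize_scalar(nll, bounds=(0.0, 5.0), method="bounded")
    return res.x

# The fiducial cross section then follows from sigma_fid = N_WV / (L_int * D_fid).
```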
The largest contribution to the background is from the production of a W or Z boson in association with jets; this background was modelled using alpgen. Variations of the factorisation and renormalisation scales are considered in evaluating the systematic uncertainty, and the parameters that describe the matching of matrix-element partons to initial/final-state radiation (ISR/FSR) are also varied. Alternative W/Z+jets samples generated with sherpa [26] were also analysed; the m_jj and p_T,jj distributions from these samples are consistent with the alpgen samples within the aforementioned alpgen generator uncertainties, so no additional systematic uncertainty is assigned for alpgen-sherpa differences. The total rate uncertainty assigned to the W/Z+jets processes is 20%, and it includes rate changes due to cross-section, MC-modelling, JES, and JER uncertainties.

The uncertainties on the modelling of the tt̄ and single-top processes include shape and rate uncertainties due to variation of the ISR/FSR description. These are calculated with dedicated samples generated with acermc. The total rate uncertainty assigned to the single-top and tt̄ processes is 15% and includes contributions from cross-section, MC-modelling, JES, and JER uncertainties.

The estimates of the multijet rate and shape uncertainties are based on a data-driven method using a multijet control region. Shape and rate uncertainties for the electron and muon channels are assumed to be uncorrelated. The shape uncertainties are described in the likelihood fit by means of two independent nuisance parameters, one for the electron channel and one for the muon channel. The uncertainty assigned to the multijet rate is 15%, and its effect on the extracted signal yield is estimated using pseudo-experiments, as mentioned below.

The signal shape-modelling uncertainty (including sources such as fragmentation, parton-shower, underlying-event and hadronisation modelling) is assessed by considering alternative templates obtained with samples produced with the pythia generator. Varying the PDF is found to have a negligible impact on the shape of the m_jj and p_T,jj distributions. Some uncertainties on the fitted signal yield are not described through nuisance parameters, either in order to limit the number of parameters in the fit, or because of the difficulty of fully parameterising the possible systematic variation in terms of a nuisance parameter. In such cases, the impact of these uncertainties on the signal yield is estimated using an ensemble of pseudo-experiments. These uncertainties include the multijet rate uncertainty and the uncertainty due to the size of the MC event samples. The finite size of the MC event samples produces an uncertainty since it limits the precision with which the m_jj templates are known. This systematic uncertainty is one of the largest, and is dominated by the size of the event sample for the W+jets process.

The total uncertainty on the signal yield is obtained by summing contributions from the profiled and non-profiled sources in quadrature.
The fiducial and total cross sections are also affected by uncertainties on the values of D_fid and D_tot, respectively. The following sources of uncertainty are considered for these factors: JES, JER, PDF, signal modelling (fragmentation, underlying-event, parton-shower, hadronisation, loss of spin-correlation information), lepton trigger and reconstruction efficiencies, and lepton energy scale. The largest contributions to the D_fid and D_tot uncertainties come from the JES and JER uncertainties, while the uncertainties affecting the leptons give very small contributions.

Table 2 summarises the percentage contributions to the systematic uncertainties on the cross sections from the different sources. In the case of profiled systematic uncertainties, the contribution of each individual source to the total uncertainty on N^{WV} is estimated by repeating the fit while fixing the nuisance parameter associated with the source under consideration to its best-fit value. The uncertainty on N^{WV} from this modified fit is subtracted in quadrature from the uncertainty on N^{WV} given by the nominal fit, and the result is taken to be the systematic uncertainty due to the source in question. The data-statistics uncertainty is calculated as the fit uncertainty on N^{WV} when all nuisance parameters are fixed to their best-fit values. The largest source of uncertainty is the W/Z+jets rate, dominated by the W+jets rate uncertainty.

Cross-section results

The m_jj maximum-likelihood fit, including all the nuisance parameters, is performed on the data and yields a value of β = 1.11 ± 0.26, where β is defined in eq. (8.1). The uncertainty includes all the systematic uncertainties from the profiled sources; the purely statistical uncertainty on β is 10%. The total systematic uncertainty on the signal yield, including unprofiled systematic uncertainties, is 26%. The measured signal yields are N^{WV}_e = 1970 ± 200 (stat.) ± 500 (syst.) and N^{WV}_µ = 2190 ± 220 (stat.) ± 560 (syst.) in the electron and muon channels, respectively. This signal yield translates into a fiducial cross section of

σ_fid = 1.37 ± 0.14 (stat.) ± 0.37 (syst.) pb   (10.1)

for the WW and WZ production processes summed over the muon and electron channels, and a total cross section of

σ_tot = 68 ± 7 (stat.) ± 19 (syst.) pb,

in good agreement with the Standard Model prediction obtained with mc@nlo of σ_tot = 61.1 ± 2.2 pb.

The signal-yield significance is estimated using the likelihood ratio, defined as the ratio of the maximum likelihood with the signal fixed to zero to the maximum likelihood including the signal component in the fit [55, 56]. The expected significance is estimated to be 3.2σ by performing fits with and without the signal component to pseudo-data generated from MC samples with and without the signal component. The observed significance is 3.4σ. The effect of systematic uncertainties is included in the significance calculations. The m_jj distribution of the data overlaid with the fit result is shown in figure 3 for the sum of the electron and muon channels. In addition, the background-subtracted data are shown overlaid with the fitted signal distribution.

As a cross-check, separate fits to the electron and muon channels were performed to extract the most probable β values for the two channels. The values obtained, 1.00 ± 0.37 for the electron channel and 1.13 ± 0.36 for the muon channel, are in agreement with the value obtained with the simultaneous fit.
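The per-source decomposition used for table 2 can be written compactly: the contribution of a profiled source is the quadrature difference between the nominal fit uncertainty and the uncertainty with that nuisance parameter fixed. A minimal sketch with hypothetical numbers:

```python
import math

def profiled_source_uncertainty(sigma_nominal, sigma_fixed):
    """Uncertainty attributed to one profiled source: quadrature difference
    between the nominal fit uncertainty on N_WV and the uncertainty with
    that source's nuisance parameter fixed to its best-fit value."""
    return math.sqrt(max(sigma_nominal**2 - sigma_fixed**2, 0.0))

# Example with made-up numbers: a 26% nominal uncertainty and 24% with one
# source fixed attributes sqrt(0.26^2 - 0.24^2) = 10% to that source.
print(profiled_source_uncertainty(0.26, 0.24))
```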
Anomalous triple gauge couplings

The measured WV cross section agrees well with the SM predictions; in this section, limits are set on anomalous triple gauge couplings affecting the WWZ and WWγ vertices. Anomalous couplings tend to enhance the diboson cross section at high boson p_T. Limits on the anomalous couplings are set by fitting the distribution of the transverse momentum of the reconstructed hadronically decaying V, p_T,jj. The event selection is the same as used for the cross-section measurement, except that m_jj is additionally required to be between and 95 GeV to improve the signal-to-background ratio. The m_jj range and the binning of the p_T,jj histogram are chosen to optimise the expected aTGC limits.

To quantify possible deviations from the SM affecting triple gauge boson vertices, the couplings of the WWZ and WWγ vertices are described in terms of five dimensionless parameters: λ_γ, λ_Z, κ_γ, κ_Z, and g_1^Z, considering only couplings that conserve C and P and satisfy electromagnetic gauge invariance [57]. No form factors are applied to these parameters in this analysis. In the SM, λ_γ = λ_Z = 0 and κ_γ = κ_Z = g_1^Z = 1. Various assumptions can be made to decrease the number of free parameters. In this analysis, limits are given using the so-called LEP scenario [58], in which the following additional constraints, derived from SU(2) × U(1) gauge invariance, are imposed:

∆κ_Z = ∆g_1^Z − ∆κ_γ tan²θ_W,  λ_Z = λ_γ ≡ λ,

where ∆κ_γ ≡ κ_γ − 1, ∆κ_Z ≡ κ_Z − 1, and ∆g_1^Z ≡ g_1^Z − 1. In this scenario, there are three free parameters: λ, ∆κ_γ, and ∆g_1^Z. An alternative approach to the aTGC parametrisation describes deviations from the SM in terms of an effective field theory (EFT), valid only up to some mass scale Λ. This EFT [1, 59] contains three C- and P-conserving dimension-6 operators. The coefficients of these operators are denoted by c_W, c_B, and c_WWW, and can be related to the LEP-scenario parameters by the following equations:

∆g_1^Z = c_W m_Z² / (2Λ²),  ∆κ_γ = (c_W + c_B) m_W² / (2Λ²),  λ = c_WWW (3 g² m_W²) / (2Λ²),

where g is the electroweak coupling constant. The diboson signal with anomalous couplings is modelled using the same generator (mc@nlo+herwig) as for the SM signal. The dijet p_T distribution is shown in figure 4 for data and MC simulation, along with the signal prediction for an aTGC of λ = 0.05. The limits on the anomalous couplings are calculated by performing a binned maximum-likelihood fit to the p_T,jj spectrum. To determine whether a point α in the anomalous-coupling parameter space is excluded by the data, the likelihood ratio L(α)/L(α_max) is computed, where α_max is the value of the anomalous coupling(s) that maximises the likelihood. The probability of observing such a small likelihood ratio is then determined through pseudo-experiments, in which pseudo-data are generated by randomly sampling the probability density function. Systematic uncertainties are incorporated in the fit via nuisance parameters which affect the rates and p_T,jj distribution shapes of the signal and background processes. The same sources of systematic uncertainty are included as are described for the m_jj fit in section 9, except for those found to be negligible, such as the effect of PDF uncertainties on the signal. In addition, an uncertainty is included on the p_T,jj distribution shape of the signal due to increasing and decreasing the scales by a factor of two. The factorisation and renormalisation scales are varied simultaneously by the same amount. As can be seen in figure 4, at very high p_T,jj the statistical uncertainties dominate, whereas at lower values of p_T,jj the systematic uncertainties are
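For illustration, the conversion between the EFT coefficients and the LEP-scenario parameters can be coded directly from the relations above; the boson masses and coupling are standard inputs, and the example value of c_WWW/Λ² is an assumption chosen to match the order of the quoted limits.

```python
MW, MZ = 0.080385, 0.0911876   # boson masses in TeV
G2 = 0.426                     # g^2 = 8 m_W^2 G_F / sqrt(2), EW coupling squared

def eft_to_lep(cw, cb, cwww):
    """Map EFT coefficients c/Lambda^2 (in TeV^-2) to the LEP-scenario
    parameters via the relations quoted in the text."""
    dg1z = cw * MZ**2 / 2.0
    dkg = (cw + cb) * MW**2 / 2.0
    lam = cwww * 3.0 * G2 * MW**2 / 2.0
    return dg1z, dkg, lam

def delta_kappa_z(dg1z, dkg, sin2w=0.2312):
    """Dependent parameter in the LEP scenario: dkz = dg1z - dkg tan^2(theta_W)."""
    return dg1z - dkg * sin2w / (1.0 - sin2w)

# Example: c_WWW/Lambda^2 = 10 TeV^-2 gives lambda ~ 0.04, the order of the
# observed lambda limits quoted in the results section.
print(eft_to_lep(0.0, 0.0, 10.0))
```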
more important.

The expected and observed 95% CL limits for λ, ∆κ_γ, and ∆g_1^Z in the LEP scenario are given in table 3. If there were no systematic uncertainties at all, the expected aTGC limits would improve by about 25%.

In figure 5, the observed limits are compared with previous limits from ATLAS [3, 4, 60], CMS [6, 9, 61], D0 [11], and LEP [62], in a variety of channels including WW → ℓνℓν, WZ → ℓνℓℓ, WV → ℓνjj, and Wγ → ℓνγ. All limits are given at 95% CL and calculated within the LEP scenario. The form factor Λ_FF used for each limit calculation is specified in figure 5; Λ_FF = ∞ is equivalent to no form factor. The limits for each parameter are obtained while fixing the other two parameters to zero. In the CMS ℓνjj analysis and in the ATLAS and CMS Wγ analyses, no limits on ∆g_1^Z were given. The ATLAS WW and WZ analyses gave limits on ∆g_1^Z, but with ∆κ_Z = 0 rather than ∆κ_γ = 0, so they are not comparable with these results and are thus excluded. For the ATLAS WW result, the published limits on ∆κ_Z are converted to limits on ∆κ_γ using the formula ∆κ_Z = −∆κ_γ tan²θ_W. The ATLAS WZ analysis published limits on ∆κ_Z, which can also be converted to ∆κ_γ, but those limits are not shown, since they are much larger than the other limits in this figure. The limits obtained in this analysis are competitive with the limits from the other analyses. Compared to the fully leptonic WW analyses from hadron colliders, the limits shown here are slightly more stringent for λ and ∆g_1^Z and slightly worse for ∆κ_γ.

In table 4, the limits are shown for each of the five aTGC parameters when no relationship between the different parameters is imposed. In this scenario, ∆g_1^Z has very little effect on the WW process, whereas ∆κ_Z has very little effect on the WZ process. Thus, analyses that restrict themselves to either the WW process or the WZ process have limited sensitivity to at least one of the aTGC parameters. In contrast, this analysis combines the two processes and therefore has good sensitivity to all five aTGC parameters. As an illustration, this analysis has four times better expected limits on ∆g_1^Z than the ATLAS WW → ℓνℓν analysis [3], and four times better expected limits on ∆κ_Z than the ATLAS WZ → ℓνℓℓ analysis [4].

Finally, table 5 gives limits on the EFT parameters. The limits on the EFT parameters c_W, c_B, and c_WWW are in the range (10–70) × (Λ/TeV)². In all cases, when computing the limits on one parameter, all the other parameters are fixed to zero. The observed two-dimensional 95% CL limits are shown in figure 6 for the LEP scenario. The limits on ∆κ_γ and ∆g_1^Z are significantly correlated, but the limits on the other pairs of parameters do not have large correlations. In addition, the observed two-dimensional 95% CL limits on the EFT parameters are shown in figure 7.
None of the EFT parameter pairs exhibit strong correlations.

Conclusions

A measurement of the pp → WV cross section (V = W, Z) at √s = 7 TeV is performed with 4.6 ± 0.1 fb⁻¹ of data collected by ATLAS at the LHC, using the WV → ℓνjj (ℓ = e, µ) decay channels. The total WW + WZ cross section is measured to be σ(WW + WZ) = 68 ± 7 (stat.) ± 19 (syst.) pb, where the observed significance of the signal is 3.4σ. This measurement is consistent with the mc@nlo cross-section prediction of 61.1 ± 2.2 pb. In addition, a fiducial cross section is measured in a phase space corresponding closely to the event selection used in the analysis, and is found to be σ_fid = 1.37 ± 0.14 (stat.) ± 0.37 (syst.) pb. The same process is also used to place 95% CL limits on anomalous triple gauge couplings (aTGCs) and on the coefficients of dimension-6 operators of an effective field theory. Within the LEP scenario, the observed 95% CL limits on the anomalous triple gauge parameters are −0.039 < λ < 0.040, −0.21 < ∆κ_γ < 0.22, and −0.055 < ∆g_1^Z < 0.071. The limits on anomalous couplings are similar to those obtained by other diboson analyses.

Figure 1. Distributions of the dijet invariant mass for (a) the electron and (b) the muon channels before the likelihood fit. The error bars represent statistical uncertainties, and the stacked histograms are SM predictions. The lower panel displays the ratio of the data to the MC expectation. The systematic band contains only systematic uncertainties that affect the shape of the background and signal processes.

Figure 2. The nominal templates for the reconstructed dijet invariant mass for (a) the electron and (b) the muon channels. The templates for WW/WZ, W/Z+jets and top quarks, including single-top production, are obtained from MC, while the multijet template is obtained using a data-driven method. All templates are normalised to unit area.

Figure 3.
(a) Distributions of the dijet invariant mass for the sum of the electron and muon channels after the likelihood fit. The error bars represent statistical uncertainties, and the stacked histograms are the signal and background contributions. The normalisations and shapes of the histograms are obtained from the best fit to the data, after being allowed to vary within their systematic uncertainties. The lower panel displays the ratio between the data and the total fit result, including both signal and backgrounds. The hatched band shows the systematic uncertainty on the fitted signal plus background. (b) Distribution of the background-subtracted data for the sum of the electron and muon channels. The error bars represent the statistical error on the data. The superimposed histogram shows the fitted signal and the hatched band shows the systematic uncertainty on the background after profiling the nuisance parameters.

Figure 4. The observed distribution of the transverse momentum of the two jets, compared to the expectation for SM signal plus background, for (a) the electron channel and (b) the muon channel. The error bars represent statistical uncertainties, and the stacked histograms are background and signal predictions as described in the legend. The effect of an aTGC of λ_Z = λ_γ = 0.05 is shown for comparison (white histogram) on top of the SM predictions (coloured histograms). The rightmost bin includes overflow. The bottom panels show the ratio between the data and the SM prediction overlaid with the systematic uncertainty on the shape of the p_T,jj distribution. The binning in the plots is the same as that used to perform the calculation of the limits. The red vertical line indicates that the event selection is different for p_T,jj less than and greater than 250 GeV, as described in section 6.

Figure 5. Comparison of limits on anomalous triple gauge coupling parameters obtained in this analysis with limits quoted by other experiments and/or in different channels (see text for details).
Table 2. Statistical and systematic uncertainties, in %, on the measured fiducial and total cross sections. The uncertainties are split according to the quantity (N^{WV}, D_fid, D_tot, L) they affect.

Table 3. The observed and expected 95% CL limits on the anomalous triple gauge coupling parameters λ, ∆κ_γ, and ∆g_1^Z in the LEP scenario with no form factor applied. The limits on each parameter are calculated while fixing the other two parameters to zero.

Table 4. The observed and expected 95% CL limits on the anomalous triple gauge parameters λ_Z, ∆κ_Z, ∆g_1^Z, λ_γ, and ∆κ_γ, not subjected to any constraints between them. No form factors are applied to the aTGC parameters. The limits on each parameter are calculated while fixing the other four parameters to zero.

Table 5. The observed and expected 95% CL limits on the effective field theory parameters c_WWW/Λ², c_B/Λ², and c_W/Λ². The limits on each parameter are calculated while fixing the other two parameters to zero.
Effects of Depth, Width, and Initialization: A Convergence Analysis of Layer-wise Training for Deep Linear Neural Networks

Deep neural networks have been used in various machine learning applications and achieved tremendous empirical successes. However, training deep neural networks is a challenging task. Many alternatives have been proposed in place of end-to-end back-propagation. Layer-wise training is one of them, which trains a single layer at a time, rather than training all layers simultaneously. In this paper, we study layer-wise training using a block coordinate gradient descent (BCGD) for deep linear networks. We establish a general convergence analysis of BCGD and find the optimal learning rate, which results in the fastest decrease in the loss. More importantly, the optimal learning rate can directly be applied in practice, as it does not require any prior knowledge. Thus, tuning the learning rate is not needed at all. Also, we identify the effects of depth, width, and initialization in the training process. We show that when the orthogonal-like initialization is employed, the width of the intermediate layers plays no role in gradient-based training, as long as the width is greater than or equal to both the input and output dimensions. We show that under some conditions, the deeper the network is, the faster the convergence is guaranteed. This implies that in an extreme case, the global optimum is achieved after updating each weight matrix only once. Besides, we found that the use of deep networks could drastically accelerate convergence compared to a depth-1 network, even when the computational cost is considered. Numerical examples are provided to justify our theoretical findings and demonstrate the performance of layer-wise training by BCGD.

Introduction

Deep learning has drawn a lot of attention from both academia and industry due to its tremendous empirical success in various applications (Krizhevsky et al., 2012; Silver et al., 2016; Wu et al., 2016). One of the key components in the success of deep learning is the intriguing ability of gradient-based optimization methods. Despite the non-convex and non-smooth nature of the loss function, they somehow find a local (or global) minimum which performs well in practice. Mathematical analysis of this phenomenon has been undertaken. Several theoretical works show that under the assumption of over-parameterization, more precisely, very wide networks, the (stochastic) gradient descent algorithm finds a global minimum (Allen-Zhu et al., 2018; Du et al., 2018a,b; Zou et al., 2018; Oymak and Soltanolkotabi, 2019). These theoretical advances have their own importance; however, they do not directly help practitioners obtain better training results. This is mainly because there are still many parameters to be determined a priori: the learning rate, the depth of the network, the width of the intermediate layers, and optimization algorithms with their own internal parameters, to name just a few. The learning rates from existing theoretical works are not applicable in practice. For example, when a fully-connected ReLU network of depth 10 is trained over 1,000 training data, the theoretically guaranteed learning rate is either η ≈ 1/(1000² · 2¹⁰) ≈ 10⁻⁹ (Du et al., 2018a) or η ≈ 1/(1000⁴ · 10²) ≈ 10⁻¹⁴ (Allen-Zhu et al., 2018). Thus, practitioners typically choose these aforementioned parameters by either a grid search or trial and error. Despite its expressive power, training deep neural networks is not an easy task.
It is widely known that the deeper the network is, the harder it is to train (Srivastava et al., 2015). The empirical success of deep learning relies heavily on numerous engineering tricks used in the training process. These include, but are not limited to, dropout (Srivastava et al., 2014), dropconnect (Wan et al., 2013), batch normalization (Ioffe and Szegedy, 2015), weight normalization (Salimans and Kingma, 2016), pre-training (Dahl et al., 2011), and data augmentation (Cireşan et al., 2012). Although these techniques are shown to be effective in many machine learning applications, they lack rigorous justification and hinder a thorough mathematical understanding of the training process of deep learning.

Layer-wise training is an alternative to the standard end-to-end back-propagation, especially for training deep neural networks. The underlying principle is to train only a few layers (or a single layer) at a time, rather than training all layers simultaneously. This approach is not new and has been proposed in several different contexts. One stream of layer-wise training is adaptive training. At each stage, only a few layers (or a single layer) are trained. Once training is done, new layers are added. By fixing all the previously trained layers for the rest of the training, only the newly added layers are trained, and this procedure is repeated. Works in this direction include (Fahlman and Lebiere, 1990; Lengellé and Denoeux, 1996; Kulkarni and Karande, 2017; Belilovsky et al., 2018; Marquez et al., 2018; Malach and Shalev-Shwartz, 2018; Mosca and Magoulas, 2017; Huang et al., 2017). Another stream of layer-wise training is the block coordinate descent (BCD) method (Zhang and Brand, 2017; Carreira-Perpinan and Wang, 2014; Taylor et al., 2016). The BCD is a Gauss–Seidel type of gradient-free method, which trains one layer at a time by freezing all other layers, in a sequential order. Thus, all layers are updated once in every sweep of training. This paper is concerned with layer-wise training in this line of approach. In (Hinton and Salakhutdinov, 2006; Bengio et al., 2007), layer-wise training is employed as a pre-training strategy.

A deep linear network (DLN) is a neural network that uses linear activation functions. Although the DLN is not a popular choice in practice, it is an active research subject, as it is a class of decent simplified models for understanding deep neural networks with non-linear activation functions (Saxe et al., 2013; Hardt and Ma, 2016; Arora et al., 2018b,a; Bartlett et al., 2019). A DLN has trivial representation power (a product of weight matrices); however, its training process is not trivial at all. The loss surface of DLNs has been studied (Lu and Kawaguchi, 2017; Kawaguchi, 2016; Laurent and Brecht, 2018), and it has been shown that although the loss surface is not convex, there are no spurious local minima. That is, all local minima of DLNs are global minima. The work of Arora et al. (2018a) gave a convergence analysis of gradient descent for DLNs. It shows that under the assumptions of whitened data, balanced initialization, a deficiency margin, and sufficiently small learning rates, vanilla gradient descent finds a global optimum. However, the learning rate from the analysis is not applicable in practice, as it requires prior knowledge of the global minimizer.
Specifically, the theoretically guaranteed learning rate of Arora et al. (2018a) must satisfy

η ≤ c^{(4L−2)/L} / ( 6144 L³ ‖W*‖_F^{(6L−4)/L} ),

where W* is the global minimizer, c is a constant related to the initial error, and L is the depth. The examples of Arora et al. (2018a) use a learning rate found by a grid search.

In this paper, we study layer-wise training for DLNs using a block coordinate gradient descent (BCGD) (Tseng and Yun, 2009b,a). Similar to BCD, the BCGD trains one layer at a time in a sequential order by freezing all other layers at their last updated values. A key difference, however, is the use of gradient descent in every update; thus, the BCGD is a gradient-based optimization method. We establish a general convergence analysis and find the optimal learning rate, which leads to the fastest decrease in the loss. More importantly, the optimal learning rate can directly be applied in practice. We also identify the effects of depth, width, and initialization in the training process. When the orthogonal-like initialization is employed, as long as the width of the intermediate layers is greater than or equal to both the input and output dimensions, the width plays no role in any gradient-based training. Also, we rigorously show that when (i) the orthogonal-like initialization is used and (ii) the initial loss is sufficiently small, the deeper the network is, the faster the convergence is guaranteed. Here, the speed of convergence is measured in the number of sweeps, not the amount of computation; we remark that this criterion is commonly adopted in the literature (Saxe et al., 2013; Arora et al., 2018b). In an extreme case where the depth is sufficiently large, convergence to the global optimum is guaranteed after updating each weight matrix only once. Similar behavior was empirically reported in (Arora et al., 2018b) as implicit acceleration. We emphasize that our analysis reveals the optimal learning rate and the effects of depth, width, and initialization in the training process; therefore, neither trial and error nor a grid search for tuning parameters (especially the learning rate) is required. Furthermore, we found that a well-chosen depth can result in a significant acceleration in convergence compared to a depth-1 network, even when the computational cost is considered. This clearly demonstrates the benefit of using deep networks (over-parameterization via depth). We also establish a convergence analysis of the block coordinate stochastic gradient descent (BCSGD). Our analysis indicates that the BCSGD cannot reach the global optimum; however, the converged loss stays close to the global optimum. This can be understood as an implicit regularization, which avoids over-fitting, due to the stochasticity introduced by the random selection of mini-batches. Numerical examples are provided to justify our theoretical findings and demonstrate the performance of layer-wise training by BCGD.

The rest of the paper is organized as follows. In Section 2, we present the mathematical setup and introduce the block coordinate (stochastic) gradient descent. We then present a general convergence analysis and the optimal learning rate in Section 3. In Section 4, several numerical examples using both synthetic and real data sets are presented to demonstrate the effectiveness of layer-wise training by BCGD and to justify our theoretical findings.

Setup and Preliminary

Let N_L : ℝ^{d_in} → ℝ^{d_out} be a feed-forward linear neural network with L layers, having n_ℓ neurons in the ℓ-th layer.
We denote the weight matrix in the ℓ-th layer by W_ℓ ∈ ℝ^{n_ℓ × n_{ℓ−1}}. Here n_0 = d_in and n_L = d_out. Let θ = {W_ℓ}_{ℓ=1}^L be the set of all weight matrices. Then the L-layer linear neural network can be written as

N_L(x; θ) = W_L W_{L−1} ⋯ W_1 x,

and we seek the weight matrices which minimize the loss function L(θ) defined by

L(θ) = Σ_{i=1}^m ℓ(N_L(x_i; θ); y_i).

Here ℓ(a; b) is a metric which measures the discrepancy between the prediction and the output data. For example, the choice ℓ(a; b) = (a − b)^p / p results in the standard L^p-loss function. For a matrix A ∈ ℝ^{m×n}, the spectral norm, the condition number and the scaled condition number are defined to be

‖A‖₂ = σ_max(A),  κ_r(A) = σ_max(A)/σ_r(A),  κ̃_r(A) = ‖A‖_F/σ_r(A),

respectively. Here ‖·‖₂ is the Euclidean norm, ‖·‖_F is the Frobenius norm, σ_max(·) is the largest singular value, and σ_r(·) is the r-th largest singular value. Also, we denote the min{m,n}-th largest singular value by σ_min(·). When r = min{m,n}, we simply write the condition number as κ(·). The matrix L_{p,q} norm is defined by

‖A‖_{p,q} = ( Σ_j ( Σ_i |a_ij|^p )^{q/p} )^{1/q},  p, q ≥ 1,

and the max norm is ‖A‖_max = max_{i,j} |a_ij|.

Global minimum of the L² loss

Since this paper mainly concerns the standard L²-loss, here we discuss its global minimum, which depends on the network architecture being used. Let X = [x_1, ⋯, x_m] ∈ ℝ^{n_0×m} be the input data matrix and Y = [y_1, ⋯, y_m] ∈ ℝ^{n_L×m} be the output data matrix. Then, the problem of minimizing the L²-loss function is

min_{W_j ∈ ℝ^{n_j×n_{j−1}}, 1≤j≤L} ‖W_L ⋯ W_1 X − Y‖²_F.   (2)

This problem is closely related to

min_{W ∈ ℝ^{n_L×n_0}} ‖W X − Y‖²_F, subject to rank(W) ≤ min{n_0, ⋯, n_L}.   (3)

Since the rank of W_{L:1} is at most n* := min{n_0, ⋯, n_L}, the minimized losses from (2) and (3) must be the same. Thus, if {W*_ℓ}_{ℓ=1}^L is a solution of (2), W*_{L:1} must be a global minimizer of (3). Therefore, a global minimizer of (2) and its corresponding minimized loss can be understood through (3). In what follows, we briefly discuss the solutions of (3).

Without the rank constraint, the solution of (3) is

W* = Y X† + C (I_{n_0} − X X†), C arbitrary,   (4)

where I_n is the identity matrix of size n × n and X† is the Moore–Penrose pseudo-inverse of X. Assuming X is a full row-rank matrix, we have W* = Y X†, which admits the explicit formula W*_LSQ = Y X^T (X X^T)^{−1}. If X is not a full row-rank matrix, (3) admits infinitely many solutions. In this case, the least-norm solution is often sought, and it is W* = Y X†. Also, for any W, the following holds:

‖W X − Y‖²_F = ‖W X − W* X‖²_F + ‖W* X − Y‖²_F.

Thus, minimizing the L²-loss is equivalent to minimizing ‖W X − W* X‖²_F. Furthermore, for whitened data, the least-norm solution is simply W* = Y X^T.

With the rank constraint, we consider two cases. If rank(Y X†) ≤ n*, the rank constraint plays no role in the minimization, and the global minimizer is (4). Let us then consider the case of rank(Y X†) > n*. Let r_x = rank(X), and let X = U_x Σ_x V_x^T be a compact singular value decomposition (SVD) of X, where only the r_x left-singular vectors and r_x right-singular vectors corresponding to the non-zero singular values are calculated. Then,

‖W X − Y‖²_F = ‖W U_x Σ_x − Y V_x‖²_F + ‖Y (I_m − V_x V_x^T)‖²_F.

It then can be shown that the problem (3) is equivalent to

min_{Z : rank(Z) ≤ n*} ‖Z − Y V_x‖²_F.

To be more precise, if Z* is a solution (the best rank-n* approximation to Y V_x) to the above, then W* = Z* Σ_x^{−1} U_x^T is a solution of (3). Writing an SVD of Y V_x as Y V_x = Ũ_y Σ̃_y Ṽ_y^T, this solution can be explicitly written as

W* = Ũ_y [D_s 0; 0 0] Ṽ_y^T Σ_x^{−1} U_x^T,   (5)

where s = min{n*, r*}, r* = rank(Y X†), and D_s is the principal submatrix consisting of the first s rows and columns of Σ̃_y. We remark that, in general, (5) and the best rank-n* approximation to Y X† are not the same.
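The closed-form minimisers above are straightforward to compute numerically. The following NumPy sketch implements the construction just described; the tolerance used for the numerical rank is an assumption.

```python
import numpy as np

def global_minimizer(X, Y, n_star, tol=1e-12):
    """Least-squares W minimising ||W X - Y||_F^2 subject to rank(W) <= n_star,
    following the SVD construction in the text."""
    W_ls = Y @ np.linalg.pinv(X)               # least-norm solution Y X^+
    if np.linalg.matrix_rank(W_ls) <= n_star:
        return W_ls                            # rank constraint inactive, eq. (4)
    # Compact SVD of X, then best rank-n_star approximation of Y V_x
    Ux, sx, VxT = np.linalg.svd(X, full_matrices=False)
    rx = int(np.sum(sx > tol))
    Ux, sx, Vx = Ux[:, :rx], sx[:rx], VxT[:rx].T
    Uy, sy, VyT = np.linalg.svd(Y @ Vx, full_matrices=False)
    Z_star = Uy[:, :n_star] @ np.diag(sy[:n_star]) @ VyT[:n_star]
    return Z_star @ np.diag(1.0 / sx) @ Ux.T   # W* = Z* Sigma_x^{-1} U_x^T, eq. (5)
```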
Gradient-based Optimizations

Gradient-based optimization methods require the gradient of the loss function at every iteration. For the reader's convenience, here we present the calculation of the gradient. First, let us define the Jacobian matrix J ∈ ℝ^{m×d_out} by J_{ij} = ∂ℓ(z; y_i)/∂z_j evaluated at z = N_L(x_i; θ). The gradient of the loss with respect to the ℓ-th weight matrix is then

∇_{W_ℓ} L(θ) = W_{L:ℓ+1}^T J^T X^T W_{ℓ−1:1}^T.

Proof. Let us consider the case of L = 2. Let θ = {W_2, W_1}, i.e., N_2(x) = W_2 W_1 x, where W_1 ∈ ℝ^{n×d_in} and W_2 ∈ ℝ^{d_out×n}. For a matrix M, let us denote the j-th row of M by M_{(j,:)} and the i-th column of M by M_{(:,i)}. Since L = 2, the loss function is L(θ) = Σ_{i=1}^m ℓ(W_2 W_1 x_i; y_i), and differentiating entrywise with respect to W_1 and W_2 gives the claimed expression. For general L, the result readily follows from the case of L = 2 by letting X → W_{ℓ−1} ⋯ W_1 X, W_1 → W_ℓ, and W_2 → W_L ⋯ W_{ℓ+1}.

We present four different gradient-based optimization methods: the standard gradient descent (GD), the block coordinate gradient descent (BCGD), the stochastic gradient descent (SGD), and the block coordinate stochastic gradient descent (BCSGD). All methods commence with an initialization θ^{k_0} = {W_ℓ^{(0)}}_{ℓ=1}^L. Let k = (k_1, ⋯, k_L) be a multi-index, where each k_ℓ indicates the number of updates of the ℓ-th layer weight matrix W_ℓ. After the k-th iteration, we obtain a multi-index k^k. For notational completeness, we set W_{i:j} = I whenever i < j. Also, we simply write W^k_{L:1} as W^k.

• Gradient Descent (GD): the weight matrices are iteratively updated according to

W_ℓ^{(k+1)} = W_ℓ^{(k)} − η ∇_{W_ℓ} L(θ^{k_k}),  1 ≤ ℓ ≤ L,

where k_k = (k, ⋯, k). We remark that a single iteration of GD updates all weight matrices once.

• Block Coordinate Gradient Descent (BCGD): let i(ℓ) = ℓ if the ascending (bottom-to-top) ordering is employed and i(ℓ) = L − ℓ + 1 if the descending (top-to-bottom) ordering is employed. Let k^{(k,0)} = k_k = (k, ⋯, k), k^{(k,L)} = k_{k+1} = (k+1, ⋯, k+1), and let e_j = (0, ⋯, 0, 1, 0, ⋯, 0) with the 1 in the j-th position. At the (Lk + ℓ)-th iteration, the i(ℓ)-th layer weight matrix is updated according to

W_{i(ℓ)}^{(k+1)} = W_{i(ℓ)}^{(k)} − η ∇_{W_{i(ℓ)}} L(θ^{k^{(k,ℓ−1)}}),

where k^{(k,ℓ)} = k^{(k,ℓ−1)} + e_{i(ℓ)}. Here the partial products entering the gradient are evaluated at the most recently updated weight matrices, whose indices depend on whether the ascending or the descending ordering is employed. We refer to the BCGD with the bottom-to-top (top-to-bottom) ordering as the ascending (descending) BCGD. Given a linear neural network of depth L, a single sweep of the ascending (descending) BCGD consists of L iterations starting from the first layer (the last layer) to the last layer (the first layer). That is, after a single sweep, all weight matrices are updated only once, in the order from W_1 to W_L (W_L to W_1). When L = 1, the BCGD is identical to GD. We also remark that in order to update every weight matrix once, GD requires a single iteration and the BCGD requires a single sweep (L iterations).

Let A be a matrix of size m × n and B be of size k × s, where m ≥ k and n ≥ s. We say A is equivalent to B up to zero-valued padding if

A = [B 0; 0 0],

and write A ≃ B. Suppose min{m,n} > k = s. We then write A ≃₁ B if A ≃ B̃, where B̃ is the square matrix of size min{m,n} given by

B̃ = [B 0; 0 I_{min{m,n}−k}].

Here I_n is the identity matrix of size n. We consider the following weight initialization schemes.

• Orthogonal Initialization (Saxe et al., 2013): W_ℓ^{(0)} ≃ Q_n, where Q_n is an orthogonal matrix of size n.

– Identity Initialization (Hardt and Ma, 2016; Bartlett et al., 2019): W_ℓ^{(0)} ≃₁ I.

• Balanced Initialization (Arora et al., 2018a): given a randomly drawn matrix W^{(0)} ∈ ℝ^{n_L×n_0}, take a singular value decomposition of W^{(0)} and distribute its singular values evenly across the layers, so that the initial weight matrices are balanced in the sense W_{ℓ+1}^{(0)T} W_{ℓ+1}^{(0)} = W_ℓ^{(0)} W_ℓ^{(0)T}.

• Random Initialization: each entry of W_ℓ^{(0)} is drawn i.i.d. from a Gaussian distribution N(0, σ_j²). Often σ_j² is chosen to be 1/n_{j−1} so that the expected value of the squared norm of each row is 1.

The orth-identity initialization can be viewed as a hybrid initialization between the orthogonal and the identity initialization schemes: each weight matrix is an orthogonal matrix up to identity padding. This paper primarily concerns the orth-identity initialization.
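To make the update rules concrete, the following minimal NumPy sketch performs one sweep of BCGD for the L²-loss, using the gradient formula of the lemma above. The step-size rule lr_fn is left as a user-supplied function (an assumption), since the optimal rate derived in the next section depends on the current weight matrices.

```python
import numpy as np

def chain(Ws, dim):
    """Matrix product Ws[-1] @ ... @ Ws[0]; identity of size dim if the list is empty."""
    P = np.eye(dim)
    for W in Ws:
        P = W @ P
    return P

def bcgd_sweep(Ws, X, Y, lr_fn, descending=True):
    """One sweep of block coordinate gradient descent for the L2 loss
    L(theta) = 0.5 * ||W_L ... W_1 X - Y||_F^2, with Ws = [W_1, ..., W_L]."""
    L = len(Ws)
    order = range(L - 1, -1, -1) if descending else range(L)
    for l in order:
        right = chain(Ws[:l], X.shape[0])             # W_{l-1:1}
        left = chain(Ws[l + 1:], Ws[l].shape[0])      # W_{L:l+1}
        residual = left @ Ws[l] @ right @ X - Y       # W_{L:1} X - Y at latest values
        grad = left.T @ residual @ (right @ X).T      # gradient w.r.t. W_l (lemma above)
        Ws[l] = Ws[l] - lr_fn(Ws, X, Y, l) * grad
    return Ws
```

The partial products are recomputed from the most recently updated matrices at every step, which is exactly the Gauss–Seidel structure of the BCGD update.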
The optimality is defined to be the learning rate which results in the fastest decrease in the loss at the current parameters. The standard L 2 -loss will be mainly discussed. However, we also present a convergence result for general differentiable convex loss functions whose gradient are Lipshitz continuous in a bounded domain, such as L p -loss where p is even. We measure the approximation error in terms of the distance to the global optimum. For example, when the L 2 -loss is employed, the error is F . We first identify the effects of width in DLNs in gradient-based training under either the orth-identity or the balanced (Arora et al., 2018a) initialization (Section 2.3). Theorem 2 Suppose the weight matrices are initialized according to either the orth-identity or the balanced initialization, described in Section 2.3. Let n be the width of the -th layer. Then, the training process of any gradient-based optimization methods (including GD, SGD, BCGD, BCSGD) is independent of the choice of n 's as long as it satisfies Proof The proof can be found in Appendix A. Theorem 2 implies that the width does not play any role in gradient-based training if the condition of (8) is met and the weight matrices are initialized in a certain manner. However, the same conclusion does not follow if the random initialization is employed. This indicates that the role of width highly depends on how the weight matrices are initialized. Also, with a proper initialization, over-parameterization by the width can be avoided. Convergence of BCGD We first focus on the standard L 2 loss function and present a general convergence analysis of BCGD. We do not make any assumptions other than range(Y X † ) ⊂ range(W (0) L ). For example, the input data matrix X needs not be full rank. We follow the convention of where i( ) = if the ascending BCGD is employed and i( ) = L − + 1 if the descending BCGD is employed, satisfies where W * = Y X † , r x = rank(X), r = dim(K), and Furthermore, the optimal learning rate is and with the optimal learning rate of (11), we obtain The proof can be found in Appendix B. The assumption of all columns of W where Q is orthogonal and range(Q) = range(Y X T ). We remark that in many practical applications, the number of training data is typically larger than both the input and the output dimensions, i.e., m > max{n 0 , n L }. Also, the input dimension is greater than the output dimension, i.e., n 0 > n L . For example, the MNIST handwritten digit dataset contains 60, 000 training data whose input and output dimensions are 784 and 10, respectively. Theorem 3 indicates that as long as n ≥ min{r x , r}, the approximation error is strictly decreasing after a single sweep of BCGD if either κ 2 Also, our analysis shows the ineffectiveness of training a network which has a layer whose width is less than max{r x , r}. This is because if n < max{r x , r}, either is zero and thus, γ k (k, −1) = 1. This indicates that in order for the faster convergence, one should employ a network whose architecture satisfying n ≥ max{r x , r} for all 1 ≤ < L. Also, if W (0) 1 is initialize in a way that all rows are in range(X), one can expect to find the least norm solution. 
In order for an iteration of BCGD to strictly decrease the approximation error, it is important to guarantee the condition of In what follows, we show that if the initial approximation error is sufficiently close to the global optimum under the orth-identity initialization (Section 2.3), the convergence to the global optimum is guaranteed at a linear rate by the layer-wise training (BCGD). Theorem 4 Under the same conditions of Theorem 3, let X be a full-row rank matrix and n ≥ max{n 0 , n L } for all 1 ≤ < L. Suppose the weight matrices are initialized from the orth-identity initialization (Section 2.3) and the initial loss W k 0 − W * F is less than or equal toσ min /c, whereσ min = σ min (W * X)/ X , and . Then, with the learning rates of (9), the k-th sweep of where γ = 1 − η 5κ 2 (X) and 0 < η ≤ 1. Proof By Lemma 5, the proof readily follows from Theorem 3. Lemma 5 Under the same conditions of Theorem 4, we have . Proof The proof can be found in Appendix C. We remark that the rate of convergence for a single sweep is γ 2L . When the speed of convergence is measured against the number of sweeps, this implies that the deeper the network is, the faster convergence is obtained. Thus, if the depth of a linear network is sufficiently large, the global optimum can be reached by the layer-wise training (BCGD) after updating each weight matrix only once. Theorem 4 relies on the assumption that the initial approximation is sufficiently close to the global optimum W * X in terms of X, σ min (W * X) and the depth L. As a special case of d out = 1, a similar result can be obtained without this restriction. Theorem 6 Under the same conditions of Theorem 3, let n L = 1, n ≥ n 0 for all 1 ≤ < L and X is a full-row rank matrix. Suppose the weight matrices are initialized from the orth-identity initialization (Section 2.3), and the global minimizer is not where c is defined in (13) and 0 < η ≤ 1. Then, the k-th sweep of descending BCGD with the learning rate of (9) satisfies where γ = 1 − η 5κ 2 (X) . Proof The proof can be found in Appendix D. We now present a general convergence analysis of the layer-wise training (BCGD) for convex differentiable loss functions. For general loss functions, let W * be the solution to min W L(W ). Theorem 7 Suppose (z; b) is convex and twice differentiable (as a function of z), and that its second derivative satisfies | (z; b)| ≤ C(z). If the learning rates satisfy where C is applied element-wise and where • The (near) optimal learning rate is is a stationary point. IfŴ L:1 is a local minimum, then it is the global minimum. Proof The proof can be found in Appendix E. Theorem 7 shows that as long as the learning rates satisfying (15) are bounded below away from 0 and above by 1 for all k but finitely many, the BCGD finds a stationary point at the rate of O(1/kL) where k is the number of sweeps and L is the depth of DLN. Also, since the loss is known a prior, the (near) optimal learning rate can directly be applied in practice. For example, when the p-norm is used for the loss, i.e., (z; b) = |z − b| p /p where 1 < p < ∞ and p is even, the (near) optimal learning rate is Note that when p = 2, the above is identical to the optimal learning rate of (11). Convergence of BCSGD In this subsection, a convergence analysis of BCSGD (7) is presented with the standard L 2 -loss, i.e., (a; b) = (a − b) 2 /2. Given a discrete random variable i ∼ π on [m], we denote the expectation with respect to i conditioned on all other previous random variables by E i . 
Theorem 8 Let {W^{(0)}_ℓ}^L_{ℓ=1} be the initial weight matrices. At the (Lk + ℓ)-th iteration, a data point x_{i_{Lk+ℓ}} is randomly and independently chosen, where i_{Lk+ℓ} is a random variable with probability distribution π_{k,ℓ}. Then, the approximation by BCSGD (7) with the prescribed learning rates satisfies the expected-error bounds discussed below.

Proof The proof can be found in Appendix F.

Under the assumption that κ̃⁴(W_{ℓ:1}X) is uniformly bounded above by M_upp and γ^{low}_{k,ℓ} is uniformly bounded below away from zero by γ_low > 0, one can conclude that the expected loss converges, up to a term proportional to L(W*), at a linear rate. Similarly, under the assumption that κ̃⁴(W_{(i(ℓ)−1):1}X) is uniformly bounded below by M_low and γ^{upp}_{k,ℓ} is uniformly bounded above by γ_upp < 1, a matching lower bound on the expected loss holds. This indicates that, unlike the BCGD, if a randomly chosen datum is used to update a weight matrix, an extra term, which is proportional to L(W*), is introduced in both upper and lower bounds of the expected error. Therefore, the BCSGD would not achieve the global optimum unless L(W*) = 0. However, the expected loss by BCSGD will stay within a distance proportional to L(W*) from L(W*). In practice, L(W*) will almost never be zero. This indicates that the stochasticity introduced by the random selection of a mini-batch (of size 1) results in an implicit regularization effect, which avoids over-fitting. We defer further characterization of BCSGD to future work.

Numerical Examples

We provide numerical examples to demonstrate the performance of layer-wise training by BCGD and justify our theoretical findings. We employ three different initialization schemes, described in Section 2.3. In all examples, the network architectures meet the condition n_ℓ ≥ max{d_in, d_out} unless otherwise stated. According to Theorem 2, when either the orth-identity or the balanced initialization is employed, we simply set n_ℓ = max{n_0, n_L} for all 1 ≤ ℓ < L. The approximation error is measured by the normalized distance to the global optimum, i.e., (1/m)L(W^{(k)}) − (1/m)L(W*). When the L2-loss is employed, the error after the k-th sweep is (1/m)‖W^{(k)}X − Y‖²_F − (1/m)‖W*X − Y‖²_F.

We note that the speed of convergence can be measured by either the number of sweeps or the number of iterations. Note also that updating each weight matrix once in a deep network will require more time than doing so in a shallow network. When it comes to comparing the speed of convergence in deep neural networks, the number of times each weight matrix is updated is a commonly employed criterion (Saxe et al., 2013; Arora et al., 2018b). In what follows, we employ the layer-wise training by BCGD for deep linear neural networks. The learning rate is chosen to be (near) optimal according to (17). We emphasize that the (near) optimal learning rate of (17) does not require any prior knowledge, and can completely be determined by the loss function, the current weight matrices and the input data matrix. This allows us to avoid a cumbersome grid-search over the learning rate.

Random Data Experiments

Unless otherwise stated, we generate the input data matrix X ∈ R^{d_in×m} whose entries are i.i.d. samples from a Gaussian distribution N(0, 1/n_0) and the output data matrix Y ∈ R^{d_out×m} whose entries are i.i.d. samples from a uniform distribution on (−1, 2). The number of training data is set to m = 600. On the left of Figure 1, the approximation errors are plotted with respect to the number of sweeps of the descending BCGD at different depths L. The input and output dimensions are d_in = n_0 = 128 and d_out = n_L = 10, respectively.
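A minimal sketch of the data generation just described, including the SVD-based construction of the ill-conditioned input matrix used in the next experiment (the seed and variable names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, m = 128, 10, 600
X = rng.normal(0.0, np.sqrt(1.0 / d_in), size=(d_in, m))  # entries ~ N(0, 1/n_0)
Y = rng.uniform(-1.0, 2.0, size=(d_out, m))               # entries ~ U(-1, 2)

# Ill-conditioned variant: reassign the singular values to 1e-5 + U(0, 1)
U, _, Vt = np.linalg.svd(X, full_matrices=False)
s = 1e-5 + rng.uniform(0.0, 1.0, size=min(d_in, m))
X_bad = (U * s) @ Vt
print(np.linalg.cond(X), np.linalg.cond(X_bad))
```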
The width of the ℓ-th layer is n_ℓ = 128 = max{n_0, n_L} and the orth-identity initialization (Section 2.3) is employed. We see that faster convergence is obtained as the depth grows. In an extreme case of the depth L = 400, the global optimum is achieved after updating each weight matrix only once. These results are expected from Theorem 3. It is typical that the speed of convergence is measured in terms of the number of updates of weight matrices, not the amount of computation (Saxe et al., 2013; Arora et al., 2018b). However, to fairly compare the effects of depth in the acceleration of convergence, the approximation errors need to be plotted with respect to the number of iterations. On the right of Figure 1, the errors are shown with respect to the number of iterations. We now see that training a depth 1 network multiple times results in the fastest decrease in the loss. This implies that, for faster convergence, it is better to train a depth 1 network L times than to train a depth L network once in this case. We remark that the condition number of the input data matrix was 2.6614. In this case, we do not have any advantages of using deep networks over a depth 1 network.

[Figure 1: Approximation errors with respect to the number of (left) sweeps and (right) iterations of the descending BCGD with the optimal learning rate (11) at different depths L = 1, 10, 50, 100, 200, 400. The width is set to max{n_0, n_L} = 128 and the orth-identity initialization is employed. When the depth is 400, the global optimum is achieved after updating each weight matrix only once. However, when the errors are compared against the number of iterations, updating a single layer L times results in faster loss decay than updating an L-layer network once.]

We now consider an input data matrix X whose condition number is rather big. To do this, we first generate X as in the above and conduct the singular value decomposition. We then assign randomly generated numbers from 10^{−5} + U(0, 1) to the singular values. In our experiment, the condition number of X was 236. The output data matrix Y is generated in the same way as before. In Figure 2, the approximation errors are plotted with respect to the number of (left) sweeps and (right) iterations of the descending BCGD at different depths L = 1, 3, 5, 7, 9, 11. When the speed of convergence is measured against the number of sweeps, we see that the deeper the network is, the faster the convergence is obtained. When the amount of computation is considered, unlike the case where X has a good condition number, we now see that the errors by deep linear networks decay drastically faster than those by a shallow network of depth 1. This demonstrates that over-parameterization by the depth can indeed accelerate convergence, even when the computational cost is considered. We note that, from Theorem 2, the width plays no role in gradient-based training, as the width of the intermediate layers is max{d_in, d_out}. Furthermore, the optimal learning rate is employed and adding more layers does not increase any representational power. Therefore, this acceleration is solely contributed by the depth, and this clearly demonstrates the benefit of using deep networks. We also observe that the error decrease per iteration does not grow proportionally to the depth. In this case, either depth 5 or 7 performs the best among others.
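A sketch of the kind of driver used for these depth comparisons, reusing the bcgd_sweep function above. The identity-padded initializer is a stand-in for the paper's orth-identity scheme (Section 2.3 is not reproduced here; Appendix C notes only that the initial matrices are orthogonal up to zero-valued padding), and the number of sweeps is arbitrary.

```python
import numpy as np

def identity_init(dims):
    """Identity blocks padded with zeros: orthogonal up to zero-valued
    padding, a plausible stand-in for the orth-identity initialization."""
    return [np.eye(n_out, n_in) for n_in, n_out in zip(dims[:-1], dims[1:])]

def run(depth, X, Y, sweeps=50):
    width = max(X.shape[0], Y.shape[0])            # n_l = max{n_0, n_L}
    dims = [X.shape[0]] + [width] * (depth - 1) + [Y.shape[0]]
    Ws, errs = identity_init(dims), []
    for _ in range(sweeps):
        Ws = bcgd_sweep(Ws, X, Y)                  # defined earlier
        P = X
        for W in Ws:
            P = W @ P
        errs.append(np.linalg.norm(P - Y, "fro") ** 2 / X.shape[1])  # (1/m)||WX - Y||_F^2
    return errs
```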
[Figure 2: Approximation errors with respect to the number of (left) sweeps and (right) iterations of the descending BCGD with the optimal learning rate (11) at different depths. The width is set to max{n_0, n_L} = 128 and the orth-identity initialization is employed. The condition number of the input data matrix is 236. In terms of the number of sweeps, the deeper the network is, the faster convergence is obtained. In terms of the number of iterations (i.e., when the computational cost is considered), unlike Figure 1 where cond(X) ≈ 2, the use of deep networks drastically accelerates convergence of the loss when compared to a depth 1 network.]

Next, we show the ineffectiveness of training a network which has a layer whose width is less than max{d_in, d_out}. Figure 3 shows the approximation errors with respect to the number of iterations of the descending BCGD. The input and output dimensions are d_in = 128 and d_out = 20, respectively. Two deep linear networks of depth L = 100 are compared. One has the architecture (Arch 1) of n_ℓ = 20 for all 1 ≤ ℓ < L. The other has the architecture (Arch 2) of n_ℓ = 128 for all 1 ≤ ℓ < L, but n_50 = 20. Note that at the k-th iteration, where k ≡ L − ℓ + 1 (mod L), the (L − ℓ + 1)-th layer weight matrix is the only matrix updated. For the network of Arch 1, we see that the errors decrease mostly only after updating the first layer weight matrix. The errors before and after updating the first layer are marked with circle symbols (•). For the network of Arch 2, we see that the errors decrease mostly after updating from the 50th to the 1st layer weight matrices. The errors before and after updating the 50th and the 1st layer matrices are marked with asterisk symbols (*). These are expected from Theorem 3, as one of the smallest singular values entering the contraction factor vanishes at the narrow layer. This demonstrates the ineffectiveness of training a network which has a layer whose width is less than max{n_0, n_L}.

We now compare the performance of layer-wise training by BCGD with two update orderings (top to bottom and bottom to top). Figure 4 shows the approximation errors with respect to the number of iterations of both the ascending and descending BCGD at three different initialization schemes (Section 2.3). We employ the DLNs of depth L = 50 and set the width of the ℓ-th layer to n_ℓ = max{n_0, n_L}. On the left, the input and output dimensions are d_in = 50 and d_out = 300, respectively. It can be seen that for the orth-identity initialization, the errors by the ascending BCGD decay faster than those by the descending BCGD. For the balanced initialization, the opposite is observed. For the random initialization, the errors by both the ascending and descending orderings behave similarly. We see that the ascending BCGD with the orth-identity initialization results in the fastest convergence among others. On the right, the input and output dimensions are d_in = 300 and d_out = 50, respectively. It can be seen that for the balanced and the random initialization, the errors by the ascending BCGD decay faster than those by the descending BCGD. For the orth-identity initialization, the opposite is observed. In this case, the descending BCGD with the orth-identity initialization results in the fastest convergence among others. In all cases, we observe that the orth-identity initialization outperforms the other initialization schemes, regardless of the update ordering. Also, we found that when the orth-identity initialization is employed, the ascending BCGD performs better than the descending BCGD if the output dimension is larger than the input dimension, and vice versa.

[Figure 4: (Left) n_0 = 50, n_j = 300 for 0 < j ≤ L; (Right) n_j = 300, n_L = 50 for 0 ≤ j < L. When n_0 = 50, n_j = 300, the ascending BCGD with the orth-identity initialization results in the fastest convergence among others; when n_j = 300, n_L = 50, the descending BCGD with the orth-identity initialization does.]
Real Data Experiments

We employ the dataset from the UCI Machine Learning Repository's Gas Sensor Array Drift at Different Concentrations (Vergara et al., 2012; Rodriguez-Lujan et al., 2014). Specifically, we used the dataset's Ethanol problem, a scalar regression task with 2,565 examples, each comprising 128 features (one of the largest numeric regression tasks in the repository). The input and output data sets are normalized to have zero mean and unit variance. After the normalization, the condition number of the input data matrix is 70,980. We note that this is the same data set used in (Arora et al., 2018b). The width of intermediate layers is set to max{d_in, d_out} and the identity initialization (Section 2.3) is employed. On the left of Figure 5, we show the errors by the descending BCGD with respect to the number of sweeps at five different depths L = 1, 2, 3, 4, 5. We use the optimal learning rate (11), which does not require any prior knowledge. It is clear that the errors by deep networks decay faster than those by a depth 1 network. In order to take the computational cost into account, on the right, we re-plot the figure with respect to the number of iterations. We clearly see that, even considering the amount of computation, the over-parameterization by depth significantly accelerates convergence. We remark that in the work of (Arora et al., 2018b), although a different optimization method is used, the same problem is considered and the learning rate is chosen by a grid search. Similar implicit acceleration was demonstrated only for the L4-loss, not the L2-loss. In our experiment, by exploiting the layer-wise training and the optimal learning rate, we demonstrate implicit acceleration for the L2-loss even when the computational cost is considered. We remark that (Arora et al., 2018b) measured the speed of convergence in terms of the number of updates on each weight matrix, rather than the amount of computation (the computational cost for updating a depth 1 network L times is comparable to that for updating a depth L network once). At all depths, even when the amount of computation is considered, the errors by deep linear networks decay faster than those by a single-layer one.

In Figure 6, we show the results for the L4-loss, i.e., (1/m)(‖W^{(k)}X − Y‖⁴_{4,4} − ‖W*X − Y‖⁴_{4,4}). The near optimal learning rate of (18) is employed. On the left and the right, the errors are plotted with respect to the number of sweeps and iterations, respectively. If the speed of convergence is measured in terms of the number of sweeps, i.e., the number of updates of each weight matrix, we see that faster convergence is achieved by adding more layers. However, in terms of the number of iterations, i.e., when the amount of computation is considered, updating a single layer multiple times results in faster error convergence than updating multiple layers once. For reference, we also plot the best error shown in (Arora et al., 2018b) after 1,000,000 iterations as the dashed line. We remark that when L = 1, the training procedure is identical to our setting and the only difference is in the selection of the learning rate. It can be clearly seen that the (near) optimal learning rate results in a drastically faster loss decay than those by a grid search.

[Figure 6: The width is set to n_ℓ = 128. In terms of the number of sweeps, the deeper the network is, the faster the convergence is observed. However, when the amount of computation is considered, updating a depth 1 network L times results in a faster loss decay than updating a depth L network once.]
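A small sketch of the preprocessing used for the real-data experiments above (the normalization axis is our assumption, since the text does not specify it):

```python
import numpy as np

def normalize(A):
    """Feature-wise normalization to zero mean and unit variance across
    the m examples (axis choice is an assumption)."""
    return (A - A.mean(axis=1, keepdims=True)) / A.std(axis=1, keepdims=True)

# The condition number after normalization (70,980 for the Ethanol data)
# is the quantity that, per Figures 1 and 2, governs how much depth helps:
# np.linalg.cond(normalize(X))
```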
We now train DLNs on the MNIST handwritten digit classification dataset. For an input image, its corresponding output vector contains a 1 in the index for the correct class and zeros elsewhere. The input and output dimensions are d_in = 784 and d_out = 10, respectively. In order to strictly compare the effect of depth, we employ the identity initialization to completely remove the randomness from the initialization. Also, we set the width to 784 = max{d_in, d_out} according to Theorem 2. The networks are trained over the entire MNIST training dataset of 60,000 samples. The input data matrix X is not full rank. Figure 7 shows the distances to the global optimum in the L2-loss with respect to the number of iterations of the descending BCGD at ten different depths L = 1, ..., 10. Thus, the speed of convergence is measured against the amount of computation. We observe accelerated convergence for networks whose depth is even, but not odd. We also see that the results by DLNs of odd depth are very similar, so that the lines overlap each other. In this case, the depth 2 network performs the best among others. We suspect that there is a connection between the parity of depth and the acceleration in convergence. We defer such further investigation to future work.

Conclusion

In this paper, we studied a layer-wise training for deep linear networks using the block coordinate gradient descent (BCGD). We established a convergence analysis and found the optimal learning rate which results in the fastest decrease in the loss. More importantly, the optimal learning rate can directly be applied in practice, as no prior knowledge is required. Also, we identified the effects of depth, width, and initialization in the training process. Firstly, we showed that when the orthogonal-like initialization is employed and the width of the intermediate layers is greater than or equal to both the input and output dimensions, the width plays no role in gradient-based training. Secondly, under some assumptions, we proved that the deeper the network is, the faster the convergence is guaranteed. In an extreme case, the global optimum is achieved after updating each weight matrix only once. Here, the speed of convergence is measured against the number of updates of each weight matrix, not the amount of computation. Thirdly, we empirically demonstrated that adding more layers could drastically accelerate convergence, when compared to a single layer, even when the computational cost is considered. Lastly, we established a convergence analysis of the block coordinate stochastic gradient descent (BCSGD). Our analysis indicates that the BCSGD cannot reach the global optimum; however, the converged loss stays close to the global optimum. This can be understood as an implicit regularization, which avoids over-fitting, due to the stochasticity introduced by the random selection of a mini-batch (of size 1). Numerical examples were provided to justify our theoretical findings and demonstrate the performance of the layer-wise training by BCGD.

Appendix A.
Proof of Theorem 2

Proof For a matrix A of size m × n and a matrix B of size k × s, where m ≥ k and n ≥ s, we say A is equivalent to B up to zero-valued padding if A agrees with B on its leading k × s block and is zero elsewhere. Since W_L is equivalent to a matrix in R^{n_L×d_max} and W_j^T W_j ∈ R^{d_max×d_max} for 1 < j < L (as n_j ≥ d_max for 1 < j < L), the products W_{L:(j+1)} and W_{(j−1):1} are each equivalent, up to zero-valued padding, to the corresponding reduced products W̃_{L:(j+1)} and W̃_{(j−1):1} for any 1 < j < L. Specifically, it then follows from the gradient descent update, where i(ℓ) = ℓ if the ascending BCGD is employed and i(ℓ) = L − ℓ + 1 if the descending BCGD is employed, that the updated weight matrices remain equivalent to the corresponding reduced matrices. By the assumption on W_ℓ, the proof is completed. If the initial weight matrices satisfy the balanced condition, it follows from Lemma 9 that for any s and j there exists a reduced matrix W̃ generating the same training trajectory, which completes the proof for the balanced initialization.

Suppose min{m, n} > k = s. We then write A ≃₁ B if A ≃ B̃, where B̃ is a square matrix of size min{m, n} obtained from B by zero-valued padding. Let W_j be a matrix of size n_j × n_{j−1} with n_j ≥ max{n_0, n_L} for all 1 ≤ j ≤ L, and suppose W_L = [W̃_L 0], where W̃_L ∈ R^{n_L×max{n_0,n_L}}. Then, writing n^i_min(j) = min_{j−1≤ℓ≤i} n_ℓ for 1 ≤ j ≤ i + 1, we have W_{L:(j+1)} ≃ W̃_{L:(j+1)} and, similarly, W_{(j−1):1} ≃ W̃_{(j−1):1}. It then follows from an argument similar to that used in Lemma 9 that if the initial weight matrices satisfy (21), then the weight matrices updated by any gradient-based optimization also satisfy (21). This completes the proof for the identity initialization.

Appendix B. Proof of Theorem 3

Proof For notational convenience, for j > i, let W_j W_{j−1} · · · W_i = W_{j:i}. By definition, it follows from the update rule, after multiplying by W_{L:(ℓ+1)} on the left and by W_{(ℓ−1):1}X on the right, that the residual evolves affinely in the learning rate. Since A^{(k)} = W_{L:(ℓ+1)} W^T_{L:(ℓ+1)} and B^{(k)} = (W_{(ℓ−1):1}X)^T W_{(ℓ−1):1}X are symmetric, they admit orthogonal diagonalizations with orthogonal matrices V^{(k)} and U^{(k)} and eigenvalues λ^{(k)}_{ℓ,1} ≥ · · · ≥ λ^{(k)}_{ℓ,d_out} and µ^{(k)}_{ℓ,1} ≥ · · · ≥ µ^{(k)}_{ℓ,m}. We remark that the µ's are nonnegative. Writing the (i, j)-entry of the transformed residual ∆̃_{k,ℓ} in this basis, we then choose the learning rate which minimizes F(η_{k,ℓ}); this yields (24). Thus, with the optimal learning rate of (24), we obtain the per-layer contraction.

For a matrix M, the j-th column and the i-th row of M are denoted by (M)_j and (M)^i, respectively. We note that all rows of ∆_{k,ℓ} are in range(X^T), which is spanned by the first r_x columns of the corresponding orthogonal factor, so that the transformed entries vanish outside the first r_x coordinates and are zero otherwise. Suppose that (W^{(0)}_L)_j ∈ K for all 1 ≤ j ≤ n_{L−1}, where range(YX†) ⊂ K ⊂ R^{n_L}. It can then be checked that (W^{(k)}_L)_j ∈ K for all k and j, and thus (∆_{k,ℓ})_j ∈ K. Also, by the argument used above, we have span{(U^{(k)})_j | j = 1, · · · , r} = K, with r = dim K. Thus, ((U^{(k)})^T ∆_{k,ℓ})^i = 0 for i > r, and the corresponding transformed entries are zero otherwise. If the learning rate η_{k,ℓ} is chosen to satisfy (9), the per-layer estimate follows. Note that from the relation ‖M‖²_F = Tr(MM^T), the entrywise estimates combine into a bound on the Frobenius norm of the residual. Therefore, by recursively applying the above over the layers of a sweep, we obtain the claimed bound, which completes the proof.

Appendix C. Proof of Lemma 5

Proof Suppose ‖W^{k_0} − W*‖_F ≤ σ̃_min − c/‖X‖, where σ̃_min = σ_min(W*X)/‖X‖ and c will be chosen later. It then follows from this assumption that the singular values of the products are controlled at initialization. Then, for any W satisfying ‖WX − W*X‖_F ≤ ‖W^{k_0}X − W*X‖_F, we have that σ_s(C) > c_{AB}. Similarly, σ_s(A) > c_{BC}. By applying induction on the number of iterations of the BCGD, we claim that there exists 0 < R < 1 such that R(k) ≤ R for all k. Since R(0) = 0, the base case holds trivially. Suppose R(sL + ℓ − 1) ≤ R. We want to show that R(sL + ℓ) ≤ R. Note that since W_j is unchanged for j ≠ ℓ, it suffices to consider the updated block W^{(s,ℓ)}_{i(ℓ)}. Suppose the learning rates satisfy (9). It follows from the BCGD updates and (29) that the deviation of the updated block from its initialization is controlled. Also, note that by the induction hypothesis and (28), we have σ_max(W^{(s,ℓ−1)}_j) < 1 + R and a corresponding lower bound at a unit vector z with ‖z‖ = 1. Here, we set z to be the right singular vector of W^{(s,ℓ−1)}_j which corresponds to σ_min(W^{(s,ℓ−1)}_j).
Then, z has zero values from its (max{n_0, n_L} + 1)-th to its n_{j−1}-th entries. Recall that W^{(0)}_j is equivalent to an orthogonal matrix up to zero-valued padding. This allows us to conclude that ‖W^{(0)}_j z‖ = 1, which makes the fourth equality of (33) hold. Thus, we have the required lower bound on the smallest singular value. It then follows from (10) that the bounds hold for 0 ≤ k < s with 1 ≤ j ≤ L, and for k = s with 1 ≤ j < ℓ. From (32), (34) and Theorem 3, we obtain a recursive relation with respect to s. This can be checked as follows. First, we note that the maximum of the relevant function of x on 0 < x < 1 is attained at x = R_L. The claim also follows from the assumption on the initial loss. Thus, by induction, we conclude that R(k) < R for all k. Furthermore, it follows that lim_{L→∞} L·R_L = 1/5 and lim_{L→∞} R_L = 0. Also, since L·R_L and R_L are decreasing functions of L, the bounds hold uniformly in L. Hence, we can conclude the claimed estimate, which completes the proof.

Appendix D. Proof of Theorem 6

Proof Since n_ℓ ≥ max{n_0, n_L} and the initial weight matrices are from the orth-identity initialization (Section 2.3), it follows from Theorem 2 that the widths may be taken equal to max{n_0, n_L} without loss of generality. Note that since X is a full-row-rank matrix, XX^T is invertible. In what follows, we will show that W^{(1)}_{L:(L−ℓ+1)} ≠ 0 for all ℓ. Suppose W* does not satisfy the condition of (36) for some ℓ. For ℓ = 1, the resulting expression for η contradicts the assumption on W*. Hence, W_{L:(L−ℓ+1)} ≠ 0. By induction, we conclude that W^{(1)}_{L:(L−ℓ+1)} ≠ 0 for all ℓ. Thus, it follows from Theorem 3 that the error contracts after the first sweep. Since L is chosen to satisfy the depth condition in which c is defined by (13), it follows from Lemma 5 and Theorem 4 that W^{(s)}_{L:j} ≠ 0 for all j and s, and

‖W^{k_s}X − W*X‖_F ≤ ‖W^{k_1}X − W*X‖_F (γ^{L−1})^{s−1} (1 − η/κ²(X))^{s−1}.

For notational convenience, for j > i, let W_j W_{j−1} · · · W_i = W_{j:i}. That is, the BCGD finds a critical point. Since all local minima are global (see Laurent and Brecht (2018)), {W*_ℓ}^L_{ℓ=1} is a global minimizer. Now let M_{k,j} := −η_k λ_{k,j} I + 2B_k. Then, since λ_min(M_{k,j}) = 2λ_min(B_k) − η_k λ_{k,j} > 0, M_{k,j} is a positive definite symmetric matrix for all j. Thus, and similarly, the corresponding quadratic forms are controlled; for 0 < η < 2, we have

−η_k λ_{k,i} λ_min(M_{k,i}) ≤ −λ²_min(B_k) (1 − (1 − η/κ(A_k))²) := −γ_k.

Thus, summing up with respect to j, we obtain the claimed bound, where κ̃(·) is the scaled condition number defined by κ̃(X) = ‖X‖_F / σ_min(X).
Industry-Led Standards, Relational Contracts and Good Faith: Are the UK and Australia Setting the Pace in (Construction) Contract Law?

The law of contract is changing. "Good faith" and "relational contracts" are used by parties more than ever before in commercial disputes. Yet, their definition and what it really means to act in good faith are still unsettled in the UK and Australia, reducing the (judicial and doctrinal) utility and impact of such conceptual tools. In contrast, the construction industry is trying to move forward in policy terms. Over the last 30 years, industry-led initiatives have been working to improve collaboration. In the UK and Australia, new collaborative frameworks contain express provisions asking parties to act with mutual trust and cooperation among other collaborative schemes. Examination of the judicial approach and industry initiatives demonstrates that there is – underpinning both – a project-centric approach (even if that is yet to be fully recognised or articulated). It is the aim of this paper to further articulate this understanding by examining the judicial and industry positions in the UK and Australia.

Introduction

There has been a growing move towards acceptance of good faith and relational contracts in the common law. In the UK,1 in 2020, the British Institute of International and Comparative Law concept note on Covid-19 and commercial contracts2 endorsed the view expressed extra-judicially by Mr Justice Leggatt (as he then was) in his lecture to the Commercial Bar Association in 2016 that good faith obligations are important in allowing the commercial flexibility required in modern contracts in common law countries, and beyond (Leggatt 2016). This view might appear widely shared by the English courts, exemplified by Yam Seng v ITC Ltd3 and Sheikh Al Nehayan v Ioannis Kent,4 which applied good faith as an implied term in contract law, and Bates v Post Office Ltd (No 3),5 which discussed the notion in the context of the emerging category of relational contracts. Yet these notions, aimed at developing flexibility, remain (stubbornly) divisive6 and their doctrinal impact is consequently limited (Tan 2019; Collins 2016: 37). The UK is far from alone in its struggle with those notions. In Australia, ever since Renard Construction v Minister for Public Works7 parties have been toying with good faith in contract law but without landing on anything particularly significant. The struggle also exists in the USA (MacMahon 2015). There are echoes in the experience of the construction industry, where there have been efforts in both Australia and the UK, over the last few decades, to instill a more collaborative form of working, but efforts have come up somewhat short. The recent developments in case law ought to provide a means to drive forward these goals but, as it stands, the policy has not been fully recognised and articulated. As a result, the law has yet to meet the policy goal. We will discuss how this lack of articulation makes it difficult to advance the discussion – instead creating a vicious circle of rejection. Why do these notions cause so much controversy? Looking at case law from both jurisdictions, a similar theme is emerging: the notions are too vague8 to allow clear enumeration of what they mean for the parties in terms of their rights and obligations. This vagueness, in turn, fails to give parties the necessary certainty of their position to form a protective boundary for their own interests.
This present state of affairs relies on a traditional view of contract, which focusses too narrowly on the individual parties and tests the question of what is required of them by sole reference to their own contractual self-interest. In this paper, we refer to this as a party-centric approach to contracts. In taking this approach, the courts fail to give effect to the bigger picture of using the concepts of the relational contract and good faith as mechanisms to add in the desired flexibility and cooperation. This comes despite a more general move in the courts to give full effect to the intentions of the parties (Robertson 2019: 231). Although the courts regularly deal with 'open-textured' concepts and use language which is open to some ambiguity (Hogg 2017: 1673; Rowan 2021; Chen-Wishart and Dixon 2020), good faith and relational contracts seem to be particular stumbling blocks.

This party-centric approach, in focusing on the parties' contractually-defined legal obligations, is too restrictive. We argue that a better solution is to formally give effect to a project-centric approach, which places the parties' individual interest within the wider lens of their shared interests in achieving the agreed common purpose of the contract. That project-centric approach, which allows one to decouple the parties' individual interests from the wider purpose of the contract (Robertson 2019: 234), is embedded in the case law and is slowly being articulated through the concepts of relational contracts and good faith, but has not yet been given full effect.

We further argue that the policy drive within the construction industry can be characterised as being project-centric in its aims and that it meets both the judicial drive (most notably led by Leggatt J (as was) in Yam Seng and Sheikh Al Nehayan, but also Fraser J in Bates) and the most recent doctrinal drive (Tan 2019; Gounari 2021) towards further development of the concepts of relational contracts and good faith. However, the existing reliance on party-centric concepts in the case law means that there is currently insufficient vocabulary for that development to happen. This lacuna is not surprising: the legal vocabulary in case law and doctrine simply reflects the traditional contract law theories and is therefore embedded in the traditional view of the law. Existing legal concepts can only be described using this traditional language and so the party-centric view is difficult to depart from: simply put, the words/vocabulary to describe a different approach do not exist yet. This is of course problematic since, without that vocabulary, it is difficult to articulate and develop the words to reflect and explain the multifaceted nature of those concepts away from their party-centric focus. Without capturing the essence of what the concepts mean, the legal understanding and language applicable to these concepts remains limited … and so, the vicious circle of rejection goes. We argue that the concepts of relational contracts and good faith help to give effect to a project-centric approach by providing a framework to exit this circle of rejection and provide a vocabulary which can then be used to develop the law.

The article is organised as follows. First, the debate over the notions of good faith and relational contracts is placed in its context, and in the construction industry context in particular.

8 Compass Group (UK and Ireland).
A second part will then show that the project-centric approach as articulated here is clearly embedded in the courts' understanding of good faith and draws from existing ideas in theory, but that the consequences of this need to be more fully understood. Particular reference to the construction industry will be made. A third part will then suggest next steps to develop thinking in that respect and reflect on what we consider shows a project-centric approach to construction contracts.

The existence and limits of "good faith" and its role in the construction industry

Both the UK and Australia have debated the place and role of good faith in contract, its definition, and its application for a long time. Since Carter v Boehm,9 the notion has appeared before the English courts without their taking a definite stance on the matter. Australian courts are also ill-at-ease with it and prefer adjudicating on other matters to avoid dealing with the concept of good faith and its application in Australian contract law.10 Critical voices have described the concept as a 'legal irritant' (Teubner 1998; White 2000), a 'strange and worrying chameleon' (Shalev 1992: 820; Chunlin 2010), something that academics 'can get a little over-excited about' (Coulson LJ speaking extra-judicially, 2019) or an amorphous notion that is not needed and could in fact be dangerous to the law of contract in the UK.11 These discussions on good faith are not limited to the UK and Australia. Other common law jurisdictions have also tried to determine the place, if any, of good faith in contract law.12 In spite of all this, 'good faith', far from vanishing, reappears time and again in the UK and Australia. Indeed, as the idea of the relational contract has moved from 'academic theory'13 to case law and legal practice, it has brought further references and discussion of 'good faith' along with it. The notions are inextricably linked. The wider theory should help inform the emerging understanding in practice. Moreover, "good faith" emerges in different ways in different cases, whether as an implied term, a more general value, or as an express term of the parties' agreement. All of these show some attempts to recognise the ideas noted above: to try and adjust the parties' approach to contracting. Indeed, the 'other-regarding' values which are inherent in relational contracts, and are given effect to by good faith obligations, are important in this respect (Gerhart 2020). They seem to capture the zeitgeist of greater cooperation and commercial flexibility noted above. In short, good faith (understood as a standard of cooperation in achieving the agreed-upon results (Corcoran 2012: 9)) and relational contracts help to articulate the values behind the project-centric approach. Their lack of formal recognition, and of the vocabulary for articulation that such recognition would bring, therefore fuels the circle of rejection described above.

9 (1766) 3 Burr 1905.
12 Inc. v. Zollinger (2020) SCC 45.
13 It is beyond the scope of this paper to critique what relational contract theory is about (and even whether it exists as such). A good summary by Gounari is that 'it has developed as a response to the perceived failing of classical contract which looks to the parties' express bargain as paramount' (Gounari 2021: 179). For details of references to MacNeil's work, see too Gounari, esp. pp 179-181, and Tan 2019.
In parallel to this, formal efforts to make the construction industry less adversarial, more flexible and more cooperative have been clear in the UK since the publication of the Latham report in the early 1990s (Latham 1994). That has led to a number of initiatives, and we suggest that the recent discussion on good faith should help in understanding those further. Writ large is the use of partnering and alliancing contracts, where the economic framework of the contract leads to significant alignment of the parties' economic incentives. Writ smaller is the use of collaborative frameworks in contracts, in particular the NEC suite (Christie 2018) – including its specific and explicit "mutual trust and cooperation" provision as the first substantive clause in the contract.14 This suite of contracts provides for particular communication mechanisms of 'early warnings' of changes, which promote early working out of risks. More recent innovations include the use of enterprise contracts, which focus on parties' conduct emerging through the project (Mosey and Jackson 2020). Other initiatives also include early involvement of contractors in the design and planning of the project (Harvey 2018). Finally, practical efforts to improve collaboration in the industry, such as the payment mechanism, coupled with speedy dispute resolution, are found in the UK,15 and in several Australian states (summarised in Jones Day 2020), as well as other jurisdictions including Ireland, Singapore, and Malaysia. Most recently, the UK Government's 'Construction Playbook', published in December 2020 (UK Government, 2020), represents a vision of how the UK Government will engage and operate its own construction contracts, and is infused with ideas of longer-term approaches to relationship management, a focus on outcomes, and the promotion of flexible and collaborative working. The emerging jurisprudence on relational contracts increasingly resonates with the policy outcomes sought here.

A policy drive towards a less adversarial construction industry is also clear in Australia. In 2018, the New South Wales Government released its 'NSW Government Action Plan – a ten-point commitment to the construction sector' (New South Wales Government, 2018). In 2020, notions of good faith and collaboration were brought to the fore by the release of empirical studies and reports on the construction industry and its current adversarial state (Australian Constructors Association, 2020; Sharkey et al. 2020). The Australian Constructors Association is currently actively promoting the use of more collaborative commercial frameworks, but these need 'to align the interests of all parties to the greatest extent possible and be drafted to achieve best for project outcomes rather than favouring any one particular stakeholder' (Australian Constructors Association, 2020a: 9). The Association is also recommending a 'playbook' for the industry.

14 This has recently been confirmed as synonymous with good faith by the Inner House of the Court of Session in the judgment in Van Oord UK Ltd v Dragados UK Ltd [2021] CSIH 50, 2021 SLT 317. The precise content of that obligation was not discussed in detail but was considered a matter for proof (ibid at [23]). As such there is no particular development of the concept in this judgment, albeit it represents a further step towards general acceptance of good faith. For discussion see (Christie 2022).
15 Housing Grants, Construction and Regeneration Act 1996 (as amended) ss 104 to 113.
This trend is gaining momentum in 2021 with the release in Australia of the NEC4 suite of contracts and the launch by the federal government of the 2021 infrastructure plan, which points to the need to enhance project outcomes by reducing risk and improving value for money by using common and best commercial arrangements, standard form contracts and a delivery approach to infrastructure (The Australian Government, 2021: 269). States and territories are presented as proposed leaders in implementing this recommendation. Beyond the construction industry, industry codes of conduct are appearing almost yearly and regulate particular long-term contracts with an explicit duty for parties to act in good faith.16

This trend for a more collaborative industry is developing at a faster pace than the common law. The legal framework to support the less adversarial and more collaborative environment sought could helpfully be underpinned by 'relational' contract theory. However, after thirty years of discussion (and 9 years since Yam Seng), it appears to be as difficult to embed these concepts in construction law practice as it is in the wider law. This party-centric approach exacerbates the cycle of rejection noted above and creates a doctrinal gap. This gap in turn weakens any chance of building a proper legal framework. The result of all this leads to more detailed and complex written agreements. These are, firstly, to specify and identify all that might be needed to deliver the project, to define the baseline for further work in clear – and indeed exacting – terms, and, secondly, to set out and agree to steps that might govern changing circumstances and conflicts. Although this forces parties to plan ahead and think through matters, there is increasing scope for errors or differences in interpretation to emerge, in turn putting an even higher burden on contract managers and opening up the scope for disputes. Simply put, seeking to define and specify further meaning to contracts to capture the same sort of ideas as "good faith" might provide more words to interpret and argue about. Thus, the failure to develop the concepts of relational contract and good faith in case law – and the circle of rejection it causes – fails to recognise not only the growing idea of contracts in construction policy and practice but also that the contract is not solely an adversarial process (Gounari 2021: 182) but 'a cooperative endeavour' (Finn 1989: 76). This is serious since it 'hinders the reality of the relation' (Gounari 2021: 182), which therefore creates a 'fissure between the law on the book and the law on the ground' (Gerhart 2020: 95). To fill this 'fissure', it is crucial to move the debate beyond whether or not good faith has a place in contract law, to how to recognise and utilise it. To do so, we propose the replacement of the existing party-centric approach by one which is project-centric. Consideration of the literature and the case law demonstrates that this would involve an evolutionary development rather than a revolutionary one and would help move the law closer to the more collaborative policy goal.

Defining the project-centric approach

The project-centric approach is yet to be fully recognised or articulated by judges. Yet, it can be seen in, amongst others, Sheikh Al Nehayan v Kent,17 Amey Birmingham Highways Ltd v Birmingham City Council18 and Bates v Post Office Ltd,19 where the courts recognised the parties' common purpose in seeing the contract performed.
In essence, the project-centric approach revolves around the following founding statements:

1. It is an approach which is relational in its basis and requires the parties to act in good faith. This requirement can emerge from an implied term of good faith – or the use of express obligations of good faith.
2. The approach relies on bringing together the internal "legal" content of good faith with its wider context.
3. Consequently, the existing, understood content of the rights and obligations flowing from good faith are required.
4. The further issue of filling the 'fissure' is helped by ensuring that the conduct of the parties within the wider relational framework is interpreted in a project-centric way.
5. Flowing from this, the contract itself is to be interpreted in a project-centric way. This is linked to – but different from – a purposive interpretation.

In short, we consider that the dictum of Lord Justice Jackson in Amey v Birmingham articulates how relational contracts should be treated and that this dictum should form the foundation of the way forward. In that case, he said:

Any relational contract of this character is likely to be of massive length, containing many infelicities and oddities. Both parties should adopt a reasonable approach in accordance with what is obviously the long-term purpose of the contract. They should not be latching onto the infelicities and oddities, in order to disrupt the project and maximise their own gain.

It echoes Jackson LJ's earlier extra-judicial suggestion that there should be a 'bold' approach where there is an 'express' obligation of good faith in a contract: "to be slightly more willing to give effect to the obvious purpose underlying the contract" (Jackson 2017: [6.10]). Without wishing to give a strained interpretation to the wording in a lecture, the "slightly" is telling here. It suggests that only a relatively minimal change in approach is required. We agree. Taking that step would draw upon existing authority and the current direction of travel in policy terms. Yet, despite this – and despite the relatively limited additional requirement on the parties – the step is not being taken. We therefore argue that although this step might appear small, it would nevertheless be significant in its impact.

The project-centric approach puts the delivery of the parties' agreed outcome, i.e. the project, at the centre of the operation of the contract. The agreed objective is the reason for the parties' contractual relation. It is therefore not independent of the contract but very much part of the contract and highly relevant to understanding the parties' duties to each other and to the project: it is what (contractually) brings the parties together. The project-centric approach helps to articulate the shared values that the parties have in the project, beyond their own respective contractual obligations. It is therefore crucial to understand how relational contracts and good faith, as concepts, might be synthesised from the academic approach to the existing policy and practice. The next steps are then to explain (A) how the project-centric approach, as outlined above, operates within the context of the discussion of broader contract theory, and (B) how it would apply within the developing understanding of relational contracts in construction law. Thirdly (C), the interaction between the project-centric approach and the existing case law is discussed.
Applying the project-centric approach within contract theory

Good faith and relational contract, as concepts, transcend existing contract law theories as we know them. Relational contract is a concept where contract law and wider cultural considerations interact. Good faith, as a concept, is infused with 'other-regarding values' (Gerhart 2020: 95) which are not clearly articulated in the traditionally party-centric values of contract law theory. The courts are sensitive to context and the reasonable expectation of the parties when interpreting contracts, but this is still not enough. The work of Mitchell is relevant here – in particular, her placing of the formal, legal contract within its wider relational context is important. We agree that 'to truly embrace the relational approach, interpretation can not only consider the contractual but the entire relationship' (Mitchell 2013: 239). This means not just interpreting the agreement but also finding a way to account for externally-generated notions (that is, those which arise from outside of the parties' written agreement) such as good faith. By only recognising 'internally generated norms' and not 'externally generated' ones (Mitchell 2013: 239), the courts are closed to the possibility that 'the wider context and the norms generated by it are relevant to how the contractual relation should be understood, obligations derived and interpreted, and disputes resolved' (Mitchell 2013: 238).

The discussion on 'externally generated norms' clashes with the idea that courts should not use their own judgement to rewrite a 'bad bargain', and so is met with resistance in both the UK and Australia (although the manner in which this is dealt with in the different jurisdictions is slightly different). In the UK, the courts tackle this challenge through a binary approach to contractual interpretation: as either textual (within the written agreement) or contextual (looking more widely). Mitchell argues that to say that there is a 'real deal' and a 'paper deal' perpetuates a binary divide, which is false (Mitchell 2013: 240). The problem faced by the judiciary lies in the inherently adversarial norms of party-centric contracting that prevent the extent of these distinctions being understood and then engaged with. Indeed, given the role of the law in respecting the 'parties' expectations', the link must be acknowledged. We ought to think of 'contract and relations as related by distinct institutional frameworks' (Mitchell 2013: 98). Arguably, the law in its party-centric approach only acknowledges the contract and the legal obligations that derive from it as a source of norms. This only sees the parties' individual obligations, based on their own self-interest. In Australia, there are questions still to be answered as to the place of the intention of the parties in the interpretation of written terms. The court will have to ascertain what the parties have intended. In application of the parol evidence rule, once parties have agreed to the contract in writing, extrinsic evidence, the so-called factual matrix, is excluded, unless there is ambiguity.21 Whether this is still applicable in commercial contracts is yet to be resolved.22 Both approaches are too restrictive. As a contract is part of and within the wider framework of the 'relation', that wider framework needs to be acknowledged as a source of norms, too.
This is where the project-centric approach helps to decouple the parties' individual obligations (as contractually defined) from the wider purpose of the relation (Robertson 2019: 234) and, we argue, the project. It is indeed 'artificial to separate the legal obligations from the relational context' (Corcoran 2012: 12). However, that relational set of norms also needs its own tools. The NEC contracts, codes of conduct and other policy- and practice-driven initiatives aligned with the project-centric approach help towards this wider cultural view. They link the law with its wider socio-cultural-economic context. These aim to guide the parties' performance of their obligations and not just define them. Thus, the project-centric approach provides a wider, more flexible context for following the instructions contained within the contract. However, that wider context nevertheless remains focused on the parties' agreement. The project itself is something that they have agreed upon. Thus, the norm is immune from the critique of reference to "commercial common sense" as a tool for interpretation of contracts.23 It relies on the parties' agreed intention rather than something superimposed from outside. That outside context is nevertheless crucial as a tool for determining and interpreting any other obligations flowing from and to it; in particular, the good faith obligations of the parties. The parties' individual obligations are seen within the wider purpose of the relationship. Good faith is owed not to each other but to the agreed purpose (Collins 2016: 51). In short, repeating the mantra: good faith is the standard of cooperation to achieve the agreed-upon result (Corcoran 2012: 9).

In providing a context for following the instructions contained within the contract, the project-centric approach straddles both law and practice. One reason why the notion has resonance within construction law is because that area has traditionally been one where practice has been given particular weight. As Lord Dyson says, it was traditionally considered that construction law was 'all about the facts, not about the law' (Dyson 2016: 160). The project-centric approach, which requires consideration of both the terms of the contract and the actions which are required by those terms, fits well within construction law. However, there is a danger that attempts to explain the approach without articulating the underlying concept fall into the fissure between doctrine and practice described above. Instead, ideas pile up on either side of the fissure, making it deeper rather than filling it in. Relational contract, as a concept, provides the bridge. Relational contract is therefore the foundation to give effect to the framework of the project-centric approach and move away from the either/or distinctions of the current approach. In short, the project-centric approach allows for the recognition of a 'relationally constituted contract law' (Mitchell 2013: 243) which recognises legal and wider norms as binding the parties. Relationalism, linked to good faith, is the foundation which gives effect to the project-centric approach, recognising that, in certain circumstances, the parties' competing interests are nevertheless merged in the adoption of a common set of goals. 'Once a particular aim is recognised as a contractual purpose then it can inform the interpretation of the contract' (Robertson 2019: 235). This is where the link between relationalism and good faith is most relevant.
Good faith as a concept is not autonomous since 'its application depends on two issues relating to the context, the characterisation of the relation or activity where good faith is inserted and the nature of the legal obligations in which good faith obligations must be interpreted' (Corcoran 2012: 100). This role, and the interaction between relational contract and good faith, are exemplified by Professor Collins' summary of the main components of such a contract as follows:

1. A long term business relationship that will provide sufficient pay-offs to both parties to continue with the relationship even through periods of considerable adversity.
2. Obtaining the benefits of the business relationship will require adaptation, cooperation, and evolution of performance obligations, so that indeterminate implicit obligations of this kind must be central to the deal.
3. These implicit indeterminate obligations must be understood as arising not from general moral standards or norms of reciprocity such as honesty, but will be tailored to achieve what is necessary to secure the success of the venture. Business necessity in this context requires acceptance of obligations derived from the general concepts of cooperation and loyalty or commitment to the project (Collins 2016: 43, emphasis added).

This makes the project vital to the relationship – but makes it clear that this also benefits the parties individually and that it helps to measure the obligations of good faith owed to it. It helps to reconcile the individual interest of the parties with the wider purpose of the contract. As such, it recognises the reality of the contracting experience, that a rational party can be 'pursuing their own self-interest in the contract' whilst also 'agreeing to cooperate as a way to achieve that objective' (Gounari 2021: 183). The focus of the enquiry to determine whether the contract is relational is therefore to see what brings the parties together. For example, a (long-term) project that both parties are heavily invested in (for some, all contracts are therefore relational in some way (Eisenberg 2000: 821)). The implicit obligations of cooperation and loyalty of commitment are owed to the project and not to each other. Depending on what the project is, those implicit obligations necessary to achieve the success of the venture will vary. The context is therefore important. This project-centric approach can also be gleaned from Yam Seng, Bristol Ground School v Intelligent Data Capture Ltd24 and D&G Cars Ltd v Essex Police Authority25 (what Collins referred to as 'the trilogy of relational contracts' (Collins 2016: 39)). In all these cases, the breach was established when one party failed to act for the success of the project and instead acted for their own interest. This shows how good faith, as an 'other-regarding value', is relevant to assess the duties of the parties by taking a project-centric approach. This is important, and these dimensions were refined by Leggatt J in Sheikh Al Nehayan, when he said that what 'was intended to be a long-term collaboration' in which the interests of the parties 'were inter-linked'26 was a 'classic example of a relational contract'.27
Our proposal to give effect to a project-centric approach is born from the promise of the parties to the bargain, its performance and the benefits the end project will give to the parties. This is not entirely a novel approach in itself since it appears to be implicitly guiding the reasoning of the courts in construction contracts and also exists, to some extent, in the purposive approach (Robertson 2019: 230). Yet implicit guidance is not enough. We now turn to the construction examples to show how to distil, explicitly, what the courts do into a working tool. Applying a project-centric approach to construction law The emerging nature of collaborative construction contracts places them particularly closely to the crucible in which ideas of relational contracts and good faith are developing. The various policy and practice innovations are sketched out above, reflecting continuing movement towards increased commercial flexibility and managing conflict within contracts. That said, examples of the sort of contractual frameworks identified have been said, themselves, to demonstrate the relational quality of construction contracts (McInnis, 2003;Circo 2014). There is, however, a question surrounding whether construction contracts would fit the criteria for a relational contract established in Bates. As Fraser J noted, these criteria are not exhaustive and not all relational contracts would comply with them, except perhaps the very first criterion that the agreement must not contain specific express terms in the contract that prevent a duty of good faith being implied into the contract. 30 So, the question remains an open one for construction contracts. Shy Jackson, who has written extensively on good faith (Jackson 2017(Jackson , 2018(Jackson , 2019 has provided a useful summary of this from a practitioners' perspective in a client focused briefing on the Bates criteria: Indeed, the increasing use of collaborative models is driven by the recognition that construction projects, by their nature, require close cooperation over a lengthy time period in order to deal with the inevitable risks that arise on such projects. If a contract to distribute Manchester United-branded toiletries in the Far East was considered a relational contract, as in the Yam Seng decision, it is difficult to see why a construction contract -especially one in which the parties chose to use a collaborative form of contract -will not be seen as a relational contract. (Jackson, 2019). It is obvious that some Bates criteria will apply to some construction contracts. It is not certain that they will in every instance. As Jackson notes above, construction projects are long-term contracts and can involve the management of significant levels of change -two of the key hallmarks of a relational contract. However, there are also distinctions. For instance, while the construction contract can be performed over several years, this is aimed at a particular end point: the delivery of the 'thing' contracted for. In this sense it is closer to a classical contract than a relational one. It distinguishes the type of contract agreed for the building of a factory, which will make Manchester United-branded products from the contract for the distribution of those goods. In the former, the contract will end successfully when the project is delivered. In the latter, there is no end point: a successful relationship could continue indefinitely. 
(It is also, of course, distinct from the eventual contract formed for the purchase of those goods by consumers.) The construction contract will usually bear some aspects of the relational contract. Acknowledging that relational contracting is a spectrum (Macneil 1974: 736-7), construction contracting models can be found at all points on the spectrum. At one end, we find so-called modular construction. In this situation, the bulk of the construction work is carried out by the contractor away from the eventual location of it. This generates a kit or even a pre-constructed product, which is then delivered to site and installed. While that transaction perhaps takes longer to complete than a simple sale of goods, it has the elements of a classical discrete agreement, based on the provision of a product by a seller to a buyer. At the other end of the spectrum there are joint venture agreements which embody the principles of a partnership and have the parties' commercial interests aligned. These parties have fiduciary duties to each other and may go beyond the sort of relational contract envisaged in Bates. The third category, which sits across the middle reaches of the spectrum, is the most difficult to categorise, as these contracts can rarely be described with one particular label. Some may meet the Bates criteria and some may not. Increasingly, however (in response to the commercial and policy drivers noted above), these contracts include some form of 'relational' provisions within them. That might amount to express good faith obligations, detailed communications mechanisms, pain/gain share provisions, early warning mechanisms, early contractor involvement, enterprise agreements, and the sort of complex enumeration of rights and obligations which incentivise and support such mechanisms as the parties might agree within the scope of their own agreement - a list offered, of course, without limitation. This then places construction contracts in a position where they have a number of 'relational' features. Many of these features are expressly provided for within the contract. We argue that these features facilitate a project-centric approach by aiding communication and providing mechanisms to address changes within the project. These are helpful on their own but do, of course, add complexity to the contract administration. The project-centric approach would apply where the parties' contract was interpreted as having sufficient relationality - whether expressly agreed or implied from the wider context of the agreement - to merit it. It is therefore important that there is some form of broader obligation which can help facilitate these mechanisms. That broader obligation aligns with values such as good faith, cooperation and other relational values. The crucial importance of values such as good faith is perhaps best exemplified by the fact that, where there is doubt about whether a contract meets the Bates criteria for a relational contract, many construction contracts attempt to resolve such doubt by providing for express good faith obligations. As with the broader debate on relational contracts and good faith, the difficulty of then articulating the content of the good faith obligation poses a problem. One of the key points made against implied terms of good faith was by Sir Rupert Jackson, who said that "[parties] all need to know what the contract requires and what the contract permits. To that end, they do not speculate about ethics or metaphysics. ….
They look at the black letter provisions of the contract. That is what the court should do as well" (Jackson 2017: [6.11]). This is true, and it militates against the implication of 'good faith' in a contract. More generally, this criticism might be levelled against using the concept of good faith itself. However, it does not address what should be done when the contract itself contains black-letter provisions creating obligations of good faith. What is then needed is a practical and understandable approach to interpreting this good faith provision. There are two steps to this. Firstly, the relational character of the construction contracts should be given meaning and effect - especially where there is an express good faith obligation in the contract representing the parties' agreement to incorporate aspects of relationalism. That meets the parties' intention. The existing values which arise from good faith, such as honesty, fair dealing and not acting capriciously, 31 should therefore be recognised and enforceable. Embedding these values will assist in managing the complex provisions for dealing with change in construction contracts. Thus, the implied obligations which arise from relational contracts should be part of - or at least aligned with - the project-centric approach. The second point arises because the current definition still leaves the 'fissure' between law and practice identified above. It poses the broader issue of parties understanding what it is that the contract needs them to do. One of the key ways to ensure that there are fewer disputes is to make sure that the contract is well understood by those who use it. We argue this goal is not necessarily assisted through increased complexity and volume of the documents attempting to create processes to deal with different issues and/or defining terms ever further. Rather, simplification and clarity should be the aims. That is helped by an approach which captures, intuitively, how the contract should be understood. Sir Rupert Jackson - and others - have correctly identified that the resort to the black-letter terms of the contract should not go as far as relying on technicalities. 32 However, we argue that where contracts contain an overarching duty of good faith (whether express or implied), the judicial reticence to give effect to it seems to cut against the need to give effect to the parties' bargain. Courts have the ability to resolve such disputes, even in the context of the 'adversariality' often present in construction contracts, through a project-centric approach where the 'other-regarding' context - anchored on achieving the parties' agreed project - is taken into consideration. These developments have happened within the more sceptical discussion of good faith. Within the construction context specifically, Lord Justice Coulson and Sir Rupert (formerly Lord Justice) Jackson, both senior construction judges, have voiced somewhat sceptical views of good faith and relational contracts as concepts (even while Sir Rupert gave the best articulation of a project-centric approach in Amey Birmingham 33 ). This can be seen in some of Sir Rupert's remarks, noted above, and in the comment by Coulson LJ that good faith is something of only academic interest (Coulson 2019). The scepticism may, however, lie within the choice of words rather than the underlying policy. Indeed, it is striking that both Jackson (Jackson 2020: 7) and Coulson (Coulson 2019: 9) view cooperation as fulfilling part of the need. This still leaves the question of how far cooperation goes.
Indeed, cooperation is, in itself, a relational value and somewhat open-textured in terms of how it might be understood. To the extent that the answer to how good faith is used can be found within the terms of the contract itself, we agree. As the following case analysis shows, parties have to cooperate to ensure the project is delivered. Using a project-centric approach helps in determining the boundaries of good faith and relational contract by using other-regarding values. It is worth repeating: good faith (understood as a standard of cooperation (among other things)) helps in achieving the agreed-upon objectives (Corcoran 2012: 9).

Applying the project-centric approach within the existing case law

The project-centric approach and its attempt to give substance to the idea of the relational contract and good faith may seem on its face to be somewhat esoteric, but its centrality is clear when the judicial and academic discussion of the definition of good faith is considered. The need to focus on the aim of the contract comes across clearly in the leading cases, to which we now turn. An analysis of case law shows that there is already the articulation of an attempt to break the vicious circle of rejection we have laid out above, and that this uses the language of the project-centric approach. The key is to recognise and emphasise this. As noted above, this can be seen in cases such as Amey v Birmingham City Council 34 and Sheikh Al Nehayan v Kent, 35 but also in Bates v Post Office Ltd (No3), 36 by explicitly recognising the parties' common aim in seeing the contract performed. The following paragraphs present judicial decisions that are advancing a project-centric approach, albeit not consciously. An analysis of the Australian and English case law highlights the dialogue between the two jurisdictions. This is particularly true in the few relevant construction cases in the UK. While the following discussion presents cases of each system of law, we have also decided to highlight the judicial conversation between Australian and English judges, which is present in both contract (generally) and construction cases. This project-centric approach started to appear in the New South Wales Court of Appeal decision of Renard Constructions v Minister for Public Works (1992). 37 Renard Constructions was contracted to build pumping stations for a sewerage project in New South Wales. By focusing on the infidelities of the contract, the principal did not adopt a project-centric approach, and this came to the fore in litigation. 38 This was an instance of a discretionary right not exercised reasonably. Priestley JA also reflected, in obiter, on reasonableness and good faith, and highlighted the resemblance between the concepts, describing them as 'standards of fairness, and community expectations'. 39 Priestley JA's obiter would become much commented upon (Peden 2003; Carter et al. 2003; Warren 2010; Dixon 2011).

36 In Bates, the criteria for whether or not a contract is relational include '3. The parties must intend that their respective roles be performed with integrity, and with fidelity to their bargain. 4. The parties will be committed to collaborating with one another in the performance of the contract. 5. The spirits and objectives of their venture may not be capable of being expressed exhaustively in a written contract.' These draw from case law which consistently indicate some form of project centricity in their text.
37 (1992) 26 NSWLR 234.
Some subsequent cases used the obiter to recognise an implied term of good faith, such a term consequently limiting the exercise of a discretionary right. This project-centric approach has also been adopted in Bundanoon v Cenric, 40 where once again the principal had already made up its mind by the time the notice was issued, thereby breaching an implied duty to act in good faith. 41 Later decisions have used Renard to imply a duty to act in good faith in the performance of a discretionary right in different contexts. 42 The Federal Court of Australia has considered good faith numerous times, 43 but has yet to enforce it. 44 Australia is still awaiting a decision of the High Court on the status of good faith in Australian contract law. In 2015, the High Court of Australia heard a dispute on the validity of late payment fees for credit cards. 45 Allsop CJ took an opportunity to summarise good faith as follows:

…this summary is also consistent with the English case law as it has so far developed, with the caveat that the obligation of fair dealing is not a demanding one and does no more than require a party to refrain from conduct which in the relevant context would be regarded as commercially unacceptable by reasonable and honest people. 47

Most recently, Fraser J endorsed Leggatt J's approach in Bates. 48 The judicial discussion between the UK and Australia in mainstream contract law is therefore clear and is also present in the construction cases to which we turn. A project-centric approach recognises the need for a flexible attitude to successful contract performance, reliant on the contextual backdrop of the agreement. Indeed, shared expectations go beyond the contract terms (Gerhart 2020: 98). In Australia, this idea was highlighted in Automasters Australia Pty Ltd v Bruness Pty Ltd, decided by the Western Australian Supreme Court, 49 where the retention of a report and a decision to issue a notice for default without careful analysis demonstrated a lack of good faith, the franchisor having made up their mind to terminate the agreement. 50 In addition, good faith was said to:

import a duty to have due regard to the legitimate interests of both parties in the enjoyment of the fruits of the contract. In some circumstances a cynical resort to the black letter or literal meaning of a contractual provision may be taken into account in determining whether there has been a lack of good faith. 51

The importance of context and the flexible nature of good faith was also highlighted: 'What constitutes good faith will depend on the circumstances of the case and upon the context of the whole of the contract.' 52 The key step is then developing what this means in terms of practice. It should not simply lead to further debates on definitions but provide a tool to be considered by the judiciary. The same year, the New South Wales Supreme Court rendered its judgment in Overlook v Foxtel. 53 Barrett J considered how selfish a party can be before potentially breaching a standard of conduct to act in good faith. 54 Considering Peden's work, Barrett J stated that:

the implied obligation of good faith underwrites the spirit of the contract and supports the integrity of its character. A party is precluded from cynical resort to the black letter. It is, rather, a duty to recognise and to have due regard to the legitimate interests of both parties in the enjoyment of the fruits of the contract as delineated by its terms. 55

This reasoning is not foreign to construction cases - especially when it chimes with a clear policy aim.
Courts already take a robust view of parties' conduct when it comes to the operation of payment provisions in contracts, and their enforcement through construction adjudication. So, for example, an 'over-literal' reading of the payment provisions in the UK security of payment legislation gave way to their clear purposive interpretation. 56 The judicial dialogue between these jurisdictions can be seen from the specific discussion of the Automasters and Overlook Australian judgments in the English case of Costain v Tarmac Holdings Ltd. 57 In this case, then Mr Justice Coulson was asked to consider the scope of the 'mutual trust and cooperation' clause within the NEC 3 standard form of contract, in considering a dispute about the extent to which one party to a contract might be obliged to correct the other party's apparent misinterpretation of the dispute resolution provisions. While he recognised that there is some content to be given to a good faith obligation, he said that it 'did not require the parties' to act against their own self-interest. 58 In saying this, he echoed the understanding of good faith as set out in the textbook Keating on NEC 3 (Thomas, 2012). In summary, these reasons delineated good faith narrowly but set out, as a conclusion in this exercise, that the duty is one 'to have regard to the legitimate interests of both the parties in the enjoyment of the fruits of the contract as delineated by its terms'. 59 In Costain, Coulson J commented on this, saying he was broadly in agreement 'although … a little uneasy about a more general obligation to act "fairly"; that is a difficult obligation to police because it is so subjective.' 60 We argue that a project-centric approach brings the objectivity needed to help determine whether a party has acted in good faith - and therefore met the requirements of the contract. We also argue that whether the parties' enjoyment of the fruits of the contract is overtly affected can be considered objectively in light of the facts of the case and the context of the transaction. From the above, there is a clear strand within the Australian and UK case law which can be seen to be speaking to a project-centric approach: taking a view of giving effect to the parties' bargain, and more generally supportive of a broader view of the contractual approach. However, it has yet to be fully understood how 'the fissure' between the law and the practice can be bridged. The work of Tan is helpful in this matter. It sets out the framework and criteria that might apply to the different ways in which doctrinal development of good faith could occur. Let us therefore turn to it.

55 Ibid. at [124]. 59 Ibid. at para. 120. 60 Ibid. at [123].

Putting the project-centric approach into practice - issues of theory

The doctrinal basis of a project-centric approach

Tan (2019) has discussed the emerging debate on relational contracts and has identified three ways in which it could develop. These are re-interpretive relationalism (Tan 2019: 105-107), where existing rules and doctrines are reinterpreted along relational lines; re-orientative relationalism (Tan 2019: 107-111), described as the 'process of making explicit salience and additive changes to the content, structure and priority of rules and standards within a doctrine' (Tan 2019: 105); and reconstructive relationalism (Tan 2019: 111-116), described as a more complete overhaul of contract law.
The analysis of the project-centric approach for relational contracts, or those incorporating express obligations of good faith, would fit within the second category. Reconstructive relationalism runs in line with the policy push within the construction industry to remake the way in which the culture runs. However, this meets the more conservative approach in the case law, which looks to develop the law without legislative change and without the necessary development of language. Therefore, there is some tension between these two drivers for change, and the result is the complexity of the options discussed here. The re-orientative approach provides the best explanation of the way to develop the project-centric approach as a response to judicial and construction industry developments. The reorientation also facilitates the bringing in of 'standards', which can give rise to implied terms - in particular good faith - and which have a 'higher normative demand' (Tan 2019: 110). Increasing the salience of, and weight placed on, the requirement for the parties to meet the agreed common purpose, and developing that by giving a standard to which the parties should be held, would act as a means of developing the existing understanding. It would also fit the existing, articulated prism of meeting that agreed common purpose. On the analysis put forward, this gives effect to the under-emphasised characteristic of relational contracts (and indeed potentially all contracts) while using existing concepts to meet the recognised normative standards which are clear from the construction industry's contracting developments. From the above, the context of the construction industry's efforts to map out the cooperative aspect of contracting is crucial. The industry is creating a new vocabulary which builds on the understanding and definitions which the courts are trying to articulate but cannot, because they adopt a party-centric lexicon as opposed to a project-centric one. This is not only important in relation to implied obligations but is also linked to contract interpretation. The debate surrounding the 'relational contract' has always been linked to the contextual enquiry (Mitchell 2013: 238). Our framework, when put into practice, shows that it is linked to, but nevertheless different from, the purposive interpretation (Robertson 2019).

Moving towards a project-centric interpretation of contracts and bridging law and practice

The Supreme Court in Wood v Capita Insurance Services Ltd, 61 in stating that interpretation is an 'iterative process', 62 seems to indicate that the courts are ready to move towards a project-centric approach. This iterative process is important because it highlights that 'the fact that contractual purposes are bilateral does not mean they are conflicting … the contract as a whole, and any individual provision, may be understood to represent an accommodation and reconciliation of two competing sets of interests' (Robertson 2019: 234). Interestingly, the purposive approach was recently explicitly recognised in the Scottish case of Ardmair Bay Holdings Ltd v James Douglas Craig. 63 The Inner House treated a purposive interpretation of the contract as self-evident and considered external norms (in this case commercial knowledge) to be relevant in interpreting contracts.
Moreover, there is a consistency of approach - the purposive interpretation clearly echoes the approach suggested by Jackson LJ in Amey v Birmingham: to keep focussed on the 'fundamental purpose' and not be distracted by infelicities in drafting. 64 We agree, but suggest that project-centric good faith, rather than purposive interpretation, is the better norm to consider, as it is anchored in the parties' agreement and links it to external norms. The project-centric approach helps to do so by focusing on what unites the parties (the project) rather than what separates them (individual interests). Although the interpretative process does hint at this through the purposive approach (Robertson 2019), it is not yet firmly established as a working tool, which therefore prevents its wider application. We argue that this approach can be a conduit to give effect to the wider context that needs to be taken into consideration. In Commonwealth Bank of Australia v Barker, 65 Kiefel J's obiter on good faith is only an indication that good faith could be considered as a standard of conduct rather than a fixed rule. 66 Although purposive interpretation allows the separation of the individual interests from the core purpose of the contract (Robertson 2019: 234) and highlights the project-centric approach, it does not, however, go far enough in inserting 'other-regarding values' such as good faith or in recognising the relationality of the contract. By formally recognising the project-centric approach, courts and contractual parties are given the vocabulary to recognise all norms and values which form part of the contractual journey. Moreover, while the external values are inserted into the discussion, they remain rooted in the parties' agreement. The project is defined by them. Beyond that, in many projects, because of the multiparty network of contracts, there is a necessity for the project to be defined outside of the individual contract (there will be a design for the construction of the eventual house, factory, hotel etc. which will be the basis for the project). If there is a large and complex public-private partnership for the construction of a hospital, then the sub-sub-contract for electrical installation will be carried out by reference to the same project (even if only part of it) as the agreement between the financial institutions which form the funding special purpose vehicle. This is not to be overly technical on the nuance and definition of 'project' in the formal sense - each contract will be governed by its own terms. Moreover, it is not to suggest that there is some anthropomorphic entity, 'the project', which develops its own rights. That would be to move the focus of the discussion away from the parties and onto something else. Rather, the project-centric interpretation focusses on an interpretation of the parties' agreement which benefits the project, assessed in broad terms. Raising the salience of the project-centric approach also highlights its crucial role as a tool for improving the articulation between law and practice on three levels. First, it gives effect to the relational element of the contractual journey and breaks the binary distinction in contract law between internal and external elements. As such, it relieves the artificial tension between 'individual interests and the agreed purpose' (Robertson 2019: 234). Both are important, but in different ways; they are therefore not in conflict.
Second, the project-centric approach is capable of intuitive understanding and pithy expression. This helps resolve issues both at the outset and during performance, when problems occur. It shows that good faith and relational contracts, although complex ideas, can be applied and given effect to by the project-centric approach. Finally, it sits at the crux of law and practice and to some extent defies categorisation. However, given the undeniable link between interpretation and implied terms (Robertson 2019: 230; Robertson 2016), it equally applies to both. Thus, the project-centric approach brings together the various threads which have for a while highlighted the limits of a purely adversarial position, as not showing the whole contractual experience, and the artificiality of 'separating the legal obligations from the relational context' (Corcoran 2012: 12). The project-centric approach gives effect to what Mitchell articulated: that it is not only that contract law must follow commercial practice, but that these practices must fit within a legal framework (Mitchell 2013: 441).

Conclusions

The debate in the construction industry in both the UK and Australia demonstrates that a party-centric approach to contracts is not necessarily applicable to all commercial dealings. The development of collaborative frameworks, new industries and policies has highlighted the inadequacy of holding to a purely traditional perspective of contract law, with parties as adversaries, and suggests that instead a project-centric approach, based upon good faith and relational contract, would better reflect the reality of the contracting experience as a more cooperative experience (Gounari 2021: 182). Although present in some cases, the centrality of the project is understood but not articulated. The current hesitation of the courts relates to the place of these doctrines but also to the lack of vocabulary and framework surrounding the two notions of good faith and relational contract. We argue that the project-centric approach is a means by which to provide a fresh approach to the ideas of good faith and relational contract. We are therefore proposing to re-orientate the debate and finally acknowledge the doctrinal impact of both doctrines as a basis upon which we can bridge law and practice. Crucially, it meets a practical, policy need within the construction industry in both the UK and Australia.

Conflict of Interest No conflict of interest for any of the authors. Consent to Publish All authors consent to publish. Consent to Participate All authors have participated equally in the writing of the article.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Interference of the Zika Virus E-Protein With the Membrane Attack Complex of the Complement System

The complement system has developed different strategies to clear infections by several effector mechanisms, such as opsonization, which supports phagocytosis, attracting immune cells by C3 and C5 cleavage products, or direct killing of pathogens by the formation of the membrane attack complex (MAC). As the Zika virus (ZIKV) activates the classical complement pathway and thus has to avoid clearance by the complement system, we analyzed putative viral escape mechanisms that limit virolysis. We identified binding of the recombinant viral envelope E protein to components of the terminal complement pathway (C5b6, C7, C8, and C9) by ELISA. Western blot analyses revealed that ZIKV E protein interfered with the polymerization of C9 induced on cellular surfaces, either by purified terminal complement proteins or by normal human serum (NHS) as a source of complement. Further, the hemolytic activity of NHS was significantly reduced in the presence of the recombinant E protein or entire viral particles. This data indicates that ZIKV reduces MAC formation and complement-mediated lysis by binding terminal complement proteins to the viral E protein.

INTRODUCTION

The complement system is an effective arm of innate immunity. It is a family of membrane-anchored and soluble proteins circulating in the blood in their inactive form (1, 2). Upon activation by harmful exogenous or endogenous ligands, one of the three complement pathways is triggered. The classical pathway is induced by immune complexes or by direct binding of C1q to the surface of pathogens, while the lectin pathway is activated by mannose-binding lectin (MBL) bound to pathogen-associated molecular patterns. The third pathway, referred to as the alternative pathway, is initiated by spontaneous hydrolysis of the C3 protein. All three pathways result in C3 activation and the formation of C3 convertases, and they merge in the induction of the terminal pathway. This final step generates the membrane attack complex (MAC), consisting of C5b-8 and 12-18 molecules of C9, which forms a pore in the cell surface to kill pathogens or infected cells (1, 2). To avoid destruction by the complement system, viruses have acquired strategies that can be condensed to a few successful mechanisms: 1) inactivation by enzymatic degradation; 2) the recruitment or mimicking of complement regulators; and 3) the modulation or inhibition of complement proteins by direct interactions (3). Different viral families take advantage of at least one of the above-mentioned mechanisms. Among them are retroviruses, orthopox or herpes viruses, to name only a few (4-8). In addition, flaviviridae have adapted strategies to escape complement-mediated lysis (9-11). The flavivirus group includes several human pathogens, such as Dengue (DENV), yellow fever (YFV), West Nile (WNV), Japanese encephalitis (JEV) and Zika virus (ZIKV), as well as the closely related hepatitis C virus (HCV), all of which share close similarities in structure (12, 13). The family belongs to enveloped viruses with a single-stranded RNA of positive polarity, which is translated into a single polyprotein. This precursor protein is processed by both host and viral proteases and gives rise to three structural and seven nonstructural (NS) proteins (12, 13).
The structural proteins include the envelope (E) protein, prM (the precursor of the membrane (M) protein, which plays an important role in virus maturation), and the capsid (C) protein. The envelope E protein mediates viral entry and modulates infection, mainly in its glycosylated form (14). It binds to different receptors on the surface of human cells and aids in the fusion and subsequent entrance of the virus via receptor-mediated endocytosis (15). NS proteins are responsible for the regulation of RNA transcription, replication, and evasion or attenuation of the host immune response. By NS1, flaviviruses escape from complement-mediated lysis by binding complement regulator proteins such as factor H, C4bp, or vitronectin (9, 10). In line with this, Zika virus (ZIKV) takes advantage of NS1 by binding vitronectin, a regulator protein that interferes with MAC formation by binding to C5, C6, C7, and C9 (16, 17). Furthermore, NS1 may directly reduce C9 polymerization and thus prevent lysis by the terminal complement pathway (18). Although different by mechanism, HCV inhibits C9 polymerization by the acquisition and incorporation of CD59 into the viral envelope (11, 19). Our data indicates that, similar to NS1, the E protein binds to terminal complement proteins, interferes with the formation of MAC on the surface of cells, and further reduces complement-mediated lysis.

MATERIALS AND METHODS

Cells and Viruses

A549 cells for Western blot and virus production and Aedes albopictus C6/36 mosquito cells for virus propagation were kindly provided by Prof. Dr. Karin Stiasny, Medical University of Vienna. Sheep erythrocytes for the hemolysis assay were obtained from Virion (Würzburg, Germany). Two strains of the virus, MRS_OPY_Martinique_PaRi_2015 (GenBank: KU647676) and ZIKV strain MR766 (GenBank: DQ859059), were kindly provided by the European Virus Archive (Marseille, France). The virus was propagated as described elsewhere (21).

Buffers and Mediums

Cell lysates were analyzed by western blot; RIPA buffer for inducing lysis of the A549 cells was purchased from Cell Signaling Technology (Frankfurt, Germany). Veronal-buffered saline (VBS) was provided by Virion. Dulbecco's modified Eagle's medium (DMEM) and phosphate-buffered saline (PBS) were purchased from Sigma-Aldrich (Vienna, Austria).

Binding Assay

Complement proteins including C5b6, C7, C8, and C9, in 1:2 dilutions starting from 5 µg/ml, were coated on a 96-well microtiter ELISA plate and incubated overnight at 4°C. After washing with PBS-Tween 0.01%, E protein (10 µg/ml) was added to each well and incubated for 1 h at room temperature (RT) with slow continuous shaking. The ELISA plate was washed three times and blocked with 5% BSA for 1 h. Antibody against the envelope protein (4G2) was added at a concentration of 1:500 and incubated for 1 h. Finally, horseradish peroxidase-labeled goat anti-mouse antibody was added (1:10,000). The TMB substrate from Sera Care (Tornesch, Germany) was used. The optical density (OD) was measured at a wavelength of 650 nm. To test if native C9 from NHS was also able to bind to the E protein, serial dilutions of NHS were incubated with a constant amount of E protein (10 µg/ml) coated overnight in the ELISA plate. BSA at the same concentration (10 µg/ml) or dilutions of C9-depleted serum (DC9 NHS) were applied as negative controls. Anti-C9 (WU 13-15) at a concentration of 1 µg per well was added. After washing, samples were incubated with secondary antibody (HRP-goat-anti-mouse Ab; 1:10,000) and finally with TMB substrate solution. Optical density (OD) was measured at a wavelength of 650 nm.
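As a small aside on the two-fold dilution series used in the binding assay, the per-well concentrations follow directly from the starting concentration; the sketch below (a hypothetical helper in Python, not part of the published protocol) computes them:

```python
# Hypothetical helper, not part of the published protocol: per-well
# concentrations of a two-fold serial dilution starting at 5 ug/ml,
# as used for coating C5b6/C7/C8/C9.
def serial_dilution(start_ug_ml, n_wells, factor=2.0):
    """Concentration (ug/ml) in each well of the dilution series."""
    return [start_ug_ml / factor ** i for i in range(n_wells)]

for well, conc in enumerate(serial_dilution(5.0, 8), start=1):
    print(f"well {well}: {conc:.3f} ug/ml")
# Well 4 gives 0.625 ug/ml, matching the ~0.63 ug/ml threshold at which
# C8 binding reached significance in the Results below.
```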
Inhibition of MAC Formation

A C9 polymerization assay was performed to study the formation of the membrane attack complex in the presence or absence of E protein. For this, A549 cells were seeded a day before the experiment at 1 × 10^5 cells per well in 24-well plates purchased from Szabo-Scandic (Vienna, Austria) in complete DMEM (10% fetal calf serum (FCS; Thermofisher, Vienna, Austria), 2 mM L-glutamine, 100 units/mL penicillin G, 100 µg/ml streptomycin). The next day, cells were washed three times with VBS, and C5b6 protein (5 µg in 300 µl) was added to the cells and incubated for 2 h in a humidified incubator supplied with 5% CO2 at 37°C. In parallel, the E protein was incubated with 5% NHS at 37°C for 30 min (in a total volume of 300 µl for each reaction). After three washing steps of the cells with VBS, the mixture of the E protein and NHS was added to the cells and incubated for 60 min at 37°C. Cells were washed again, lysed on ice with RIPA buffer for 30 min (100 µl of lysis buffer), and the lysate was loaded on an 8% acrylamide gel under non-reducing conditions. Lysates were blotted and the membrane was blocked with 5% nonfat dried milk in Tris-buffered saline with 0.1% Tween20 (TBST) for 60 min. The first antibody against C9 protein (WU 13-15) (1:2,000) was added to the blocking solution and incubated overnight at 4°C. The following day, the blot was washed three times with TBST and a horseradish peroxidase-labeled goat anti-mouse antibody was added (1:10,000). After incubation for 2 h at room temperature, the membrane was washed three times and developed using the ImageQuant LAS-4000 (GE Healthcare, Vienna, Austria). In further assays, anti-human HLA-ABC was used in a sublytic amount (1:1,000) as an activator of the classical pathway (instead of purified C5b6 protein). The Ab was added to the A549 cells and incubated for 60 min at 37°C. In parallel, different amounts of E protein were incubated with 5% NHS at 37°C for 30 min. After washing the cells with VBS, the mixture of the E protein and NHS was added to the cells and incubated for 50 min at 37°C. Deposition of C9 on the cell surface was analyzed by western blotting as described above.

Hemolytic Assay

To analyze the activity of the complement system, sheep erythrocytes (1 × 10^8 cells/ml) resuspended in VBS were sensitized with C5b6 (1 µg) for 60 min at RT using a U-bottom microtiter plate from Greiner Bio-One (Kremsmünster, Austria). In a separate preparation, E protein (10 µg), Vn (20 µg), and mixtures of Vn and NS1 (containing 20 µg Vn and 10 µg NS1) and of Vn and E protein (containing 10 µg E protein and 20 µg Vn) were each incubated with C7-C9 (C7, 1 µg; C8, 0.5 µg; C9, 1 µg) for 15 min at 37°C. Next, the prepared mixtures were added to the sheep erythrocytes coated with C5b6 and incubated for 30 min at 37°C in a total volume of 100 µl per reaction (9). After centrifugation, the hemolytic activity of the complement system was measured by quantitating the released hemoglobin in the supernatant at 415 nm. To test whether virus particles interfere with complement activation, a two-fold serial dilution of NHS was pre-incubated with ZIKV for 30 min on ice. Sensitized sheep erythrocytes (20 µl, 2 × 10^8 cells/ml) were added to the samples and the mixture was incubated for 30 min at 37°C. The amount of hemoglobin released from the lysed cells was measured by determining the absorbance of the supernatant at an optical density (OD) of 415 nm.
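The hemolysis readout reduces to a simple normalization of the OD415 values. A minimal sketch follows; the use of buffer-only (spontaneous) and full-lysis controls is a common convention and an assumption here - the paper itself normalizes to lysis in the absence of viral proteins, set to 100% - and all numbers are made up:

```python
# Minimal sketch, assuming the usual control-based normalization of OD415
# readings; values are illustrative only, not from the paper.
def percent_hemolysis(od_sample, od_spontaneous, od_total):
    """Released hemoglobin relative to complete lysis, in percent."""
    return 100.0 * (od_sample - od_spontaneous) / (od_total - od_spontaneous)

# Example: a well with ZIKV E present versus an NHS-only well set as 100%.
nhs_only = percent_hemolysis(od_sample=1.02, od_spontaneous=0.05, od_total=1.02)  # 100.0
with_e   = percent_hemolysis(od_sample=0.62, od_spontaneous=0.05, od_total=1.02)  # ~58.8
print(nhs_only, with_e)
```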
To distinguish between the effects of NS1 and the E protein on the reduction of hemolysis, the viral proteins were incubated separately or as a mixture with NHS (1:160 in VBS) before sensitized sheep erythrocytes were added and the lysis assay was performed as described above.

Statistical Analyses

Statistical analyses were performed using the GraphPad Prism 7.0 software. All experiments were repeated at least three times and were always performed in duplicate. The difference between two groups was assessed by t-test. When comparing more than two groups, ANOVA followed by Bonferroni post-hoc tests was performed. A 95% significance level (p < 0.05) was considered statistically significant (*<0.05, **<0.01, ***<0.001, and ****<0.0001).

RESULTS

ZIKV E Protein Binds to Components of the Terminal Pathway of Complement

As ZIKV activates the classical pathway of complement, we were interested in whether the virus had adapted means to reduce virolysis. In a first attempt, we assessed whether purified C7, C8, or C9 bind ZIKV E in ELISAs. In contrast to C7, both C8 and C9 interacted with the viral recombinant E protein (Figure 1A). C8 and C9, but also C5b6, bound dose-dependently to ZIKV E protein (Figure 1B). Significance was reached for C8 down to 0.63 µg/ml of ZIKV E (Figure 1B), and for C9 and C5b6 down to 1.25 µg/ml. Finally, the binding of ZIKV E protein to the already generated terminal complement cascade (TCC) was assessed in NHS. For this, a constant amount of ZIKV E or BSA was coated onto the ELISA plates and incubated with different dilutions of NHS. As a further control, DC9 NHS was included. Significant interaction of TCC with the viral protein was observed up to an NHS dilution of 1:8 compared to the DC9 NHS (Figure 1C). With regard to BSA, only background binding to TCC was detected, even at the highest concentration of NHS (Figure 1C).

ZIKV E Protein Reduces C9 Polymerization on Cellular Surfaces

To test whether the binding of ZIKV E to components of the TCC interferes with the polymerization of C9, A549 cells were incubated with purified C5b6. As a source of C7 to C9, 5% NHS was used in the absence (Figure 2; 0 = no E) or the presence of different amounts of ZIKV E (Figure 2, 12.5 to 50 µg). The polymerization of high-molecular-weight C9 at the cellular surface was confirmed by western blot of the cell lysates employing the C9 neo-epitope-specific anti-C9 antibody (WU 13-15). Polymeric C9 was markedly reduced in a ZIKV E-concentration-dependent manner (Figure 2A), while BSA had no effect (Figure 2B). To further analyze the effect of ZIKV E on C9 polymerization, A549 cells were first incubated with sublytic amounts of an anti-MHC-I antibody as a trigger for the classical complement pathway. After removing the antibody by washing, NHS preincubated with different amounts of ZIKV E was added to the cells. Cell lysates were analyzed by western blotting. C9 polymerization was reduced in a dose-dependent manner (Figure 3). However, in contrast to the induction of TCC by incubation of the cells with purified C5b6, which gave rise to high-molecular-weight C9 polymers, the activation of the classical pathway of complement induced C9 oligomers of about 210 kDa in size (Figure 3). Again, the band observed for high-molecular-weight C9 polymers was reduced when compared to BSA.
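Before moving on, a brief aside on the Statistical Analyses subsection above: the paper used GraphPad Prism, but the same workflow (t-test for two groups, one-way ANOVA with Bonferroni post-hoc tests otherwise, and the asterisk convention) can be sketched with scipy as a stand-in; the data below are invented for illustration:

```python
# Minimal sketch of the statistical workflow; GraphPad Prism was used in the
# paper, scipy is a stand-in here, and the group values are made up.
from itertools import combinations
from scipy import stats

groups = {
    "buffer (VBS)": [98.0, 101.2, 99.5],
    "ZIKV E":       [62.1, 58.4, 65.0],
    "NS1":          [55.3, 60.2, 57.8],
}

def asterisks(p):
    """Map a p value to the asterisk convention used in the figures."""
    for cutoff, mark in [(1e-4, "****"), (1e-3, "***"), (1e-2, "**"), (5e-2, "*")]:
        if p < cutoff:
            return mark
    return "ns"

# Two groups: unpaired t-test.
_, p = stats.ttest_ind(groups["buffer (VBS)"], groups["ZIKV E"])
print(f"t-test: p={p:.4g} {asterisks(p)}")

# More than two groups: one-way ANOVA followed by Bonferroni-corrected
# pairwise t-tests as the post-hoc step.
_, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: p={p_anova:.4g}")
pairs = list(combinations(groups, 2))
for a, b in pairs:
    _, p_raw = stats.ttest_ind(groups[a], groups[b])
    p_adj = min(1.0, p_raw * len(pairs))  # Bonferroni: scale by number of comparisons
    print(f"{a} vs {b}: adjusted p={p_adj:.4g} {asterisks(p_adj)}")
```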
ZIKV E Protein Inhibits the Formation of a Membrane Attack Complex

The interference of ZIKV E with the proteins of the terminal complement pathway might affect complement-mediated lysis, similar to that described for other pathogens (18, 22-24). Therefore, as a sensitive functional readout, hemolytic assays with sheep erythrocytes were performed using purified TCC components. For this, ZIKV E was pre-incubated with C7, C8, and C9 and added to C5b6 pre-coated erythrocytes. When compared to lysis of the cells in the absence of viral proteins, which was set at 100% lysis, ZIKV E significantly reduced hemolysis (Figure 4), similar to that observed for vitronectin, a known inhibitor of the TCC (16). We confirmed that NS1 also interferes with complement-mediated hemolysis (not shown) and reproduced the data of Conde and coworkers, who showed a synergistic effect of NS1 with vitronectin (18). This synergy was not observed by combining ZIKV E with vitronectin (Figure 4). Finally, we were interested in whether not only recombinant viral proteins, but also ZIKV itself interferes with hemolysis of sensitized erythrocytes. Erythrocytes were lysed by NHS in a dose-dependent manner and were not affected by the mock control, which employs the DMEM buffer used to cultivate the cells for virus propagation (Figure 5). In contrast, hemolysis was significantly diminished when ZIKV (2.5 × 10^5 PFU) was present in the system, starting from a 1:40 dilution of NHS (Figure 5), indicating that not only recombinant viral proteins but also viral particles interfere with MAC formation. To check whether NS1 and the E protein show additive effects, the viral proteins were incubated separately and as a mixture with NHS before the erythrocytes were added. As expected, both viral proteins reduced hemolysis when compared to the buffer control (VBS). The mixture of NS1 and E further decreased cell lysis, indicating that both viral proteins contribute to the inhibition of complement-mediated lysis (Figure 6).

FIGURE 1 (A-C) | Binding of terminal complement proteins to ZIKV E. Constant amounts (5 µg/ml; A) or serial dilutions (B; for significance, not always depicted due to limits in space, see text) of purified complement proteins were coated onto ELISA plates and incubated with 10 µg/ml ZIKV E. To visualize binding, an E-specific Ab (4G2) followed by an HRP-goat-anti-mouse Ab and TMB as a substrate were added. To test whether already generated TCC is interacting with ZIKV E too (C), the recombinant viral protein was coated onto ELISA plates and incubated with serial dilutions of NHS. BSA and DC9 NHS served as controls. Binding to TCC was determined by incubation with neoepitope-specific anti-C9 (WU 13-15) followed by HRP-goat-anti-mouse Ab. Again, TMB was used as a substrate. Optical density (OD) was measured at a wavelength of 650 nm. Experiments were repeated three times and were performed in duplicates. For statistical analysis, GraphPad Prism software was used (A, 1-way ANOVA; B, C, 2-way ANOVA, respectively). *< 0.05, **< 0.01 and ***< 0.001.

FIGURE 3 | A549 cells were incubated with a sublytic amount of anti-MHC-I antibody (1:1,000). After washing, cells were incubated with 5% NHS, which was preincubated with different amounts of ZIKV E. After washing, cells were blotted and oligomerization of C9 was analyzed as described in Figure 2. A representative Western blot out of three independent experiments is shown.

DISCUSSION

Interference with complement-mediated lysis is a common complement evasion mechanism for many viruses (9, 10).
Thus, it is no surprise that members of the flavivirus family have developed strategies to escape from virolysis by interacting, through NS1, with proteins of the complement cascades and their regulators. WNV, for example, binds factor H (25), a regulator of complement in the fluid phase, which interferes with the convertases and acts as a co-factor for C3b inactivation (1, 2). C4b-binding protein (C4bp), a regulator of the classical and lectin pathways, is recruited not only by DENV (26), but also by WNV or YFV (27). Furthermore, the viruses complex C4 together with C1s/proC1s in the fluid phase to decrease C4b deposition on the viral surface. Consequently, the classical pathway convertase is reduced and less MAC is induced (22). Clusterin (28) and vitronectin (18), two inhibitors of the TCC, can interfere with MAC formation by binding to NS1. In addition, flaviviral NS1, including that of ZIKV, can directly decrease C9 polymerization on cell surfaces and thus evade complement-induced damage (18). As mentioned above, these different strategies are attributed to the NS1 protein. Here, we report that besides NS1 also the E protein of ZIKV can directly interact with proteins of the TCC. In contrast to NS1, which also binds to C7, besides C5, C6, and C9, but not C8 (18), the viral E protein was capable of interacting with C8, and only a poor interaction with C7 was observed. Consequently, the polymerization of C9 was reduced by the E protein in a dose-dependent manner when purified proteins were used. Of note, on the basis of the molecular weight, more C9 was necessary to bind the same amount of ZIKV E. Thus, it would be interesting to check the effect of ZIKV E on the association of C8 or C5b6 during MAC formation. However, this is beyond the scope of this paper. Activation of the classical pathway by antibodies bound to the cell surface resulted in a decrease of C9 oligomers in the presence of ZIKV E. Besides the bands for high-molecular-weight polymers, additional bands were identified compared to the experiments in which the purified components were used. These corresponded to the size of trimerized C9 oligomers, which could be formed due to the sublytic amounts of antibody used for complement activation. According to our data, ZIKV E interfered with complement-mediated hemolysis comparably to vitronectin. Although interacting with vitronectin (data not shown), lysis induced by the E protein was not further enhanced when both proteins were co-applied. In contrast, hemolysis was enhanced by vitronectin when the protein was added together with NS1, which confirms the recently published data of Conde et al. (18). As we were interested in whether lysis was impaired not only by purified complement proteins, but also by NHS as a source of complement, hemolysis assays were performed in the presence of ZIKV particles. Indeed, about four times more NHS was needed for complement-induced lysis of the cells when ZIKV was present. However, this experiment does not allow us to distinguish whether this effect is attributed to the E protein, NS1, or both. Therefore, purified recombinant viral proteins were used. Hemolysis assays showed that NS1 and E proteins have additive effects, and thus both proteins may contribute to the reduction in complement activity. Of note, more ZIKV E protein than NS1 was necessary to show comparable effects in the hemolysis assay, which might be due to a higher affinity of NS1 for proteins of the terminal pathway. In summary, not only NS1, but also ZIKV E protein can reduce the formation of the MAC.
As ZIKV activates the classical pathway by direct binding of C1q to the E protein, and infection by this virus upregulates the expression of complement proteins (29), the virus has adopted several strategies to interfere with the assembly of the complement attack. This corroborates the view that multiple evasion strategies are used by microorganisms, and in particular viruses, to limit damage by the complement system.

DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/supplementary material. Further inquiries can be directed to the corresponding authors.
Use of microaxial flow pumps in adolescents

Objectives: The Impella 5.5 has been successfully used in the adult population; however, safety and efficacy data in patients aged less than 18 years are limited.

Methods: Six pediatric patients, aged 13 to 16 years and weighing 45 to 113 kg, underwent axillary artery graft placement and attempted placement of the Impella 5.5 device at our institution between August 2020 and March 2023.

Results: Indications for implantation were heart failure secondary to myocarditis (2), rejection of a prior orthotopic heart transplant, idiopathic dilated cardiomyopathy (2), and heart failure after transposition of the great arteries repair. Placement was unsuccessful in a 13.8-year-old female patient due to prohibitively acute angulation of the right subclavian artery, and venoarterial extracorporeal membrane oxygenation cannulation was performed via the axillary graft. In the 5 patients with successful Impella 5.5 placement, median duration of support was 13.5 days (range, 7-42 days). One experienced cardiac arrest secondary to coagulation-associated device failure, requiring temporary HeartMate 3 implantation. Four patients were bridged to transplant; 3 patients received a transplant directly from the Impella 5.5, and 1 patient received a transplant after HeartMate 3. The final patient received the HeartMate 3 on Impella day 42 and is awaiting transplant.

Conclusions: Although exact size cutoffs and anatomy are still being determined, our experience provides a framework for use of the Impella 5.5 in adolescents.

The Impella 5.5 (Abiomed) is a minimally invasive option for mechanical circulatory support (MCS) in patients with advanced heart failure, with the ability to provide up to 5.5 L/min of flow. This offers a minimally invasive option for left ventricular support, enabling recovery, or serving as a bridge to transplantation or durable left ventricular assist device (LVAD) placement. The advantages of a high-flow, minimally invasive MCS are numerous. These devices may avoid multiple sternotomies in patients likely to receive transplant, while still offering the functional benefits of more durable devices. In addition, axillary insertion of the Impella 5.5 may enable patients to participate in physical therapy earlier and more rigorously in the postoperative period and avoid deconditioning before heart transplantation or durable LVAD placement. These advances in minimally invasive MCS have transformed adult heart failure therapy; however, widespread use in young patients is inherently limited by the discrepancy between device and patient size. Although options for MCS in children have expanded with Food and Drug Administration approval of the RotaFlow (Maquet), CentriMag (Abbott), and PediMag (Abbott) centrifugal flow pumps, there are fewer options for minimally invasive circulatory support in the adolescent population [3-12]. However, use of the Impella 5.5 in the adolescent population is limited to a single case report describing a 14-year-old patient who received the Impella 5.5 as a bridge to heart transplant [8].
The Impella 5.5 not only provides greater circulatory support than past iterations of the device but also has different mechanical properties, including a stiffer body and the lack of a pigtail, to better facilitate axillary insertion. Further, the Impella 5.5 is the only iteration of Impella devices approved for total circulatory support in children. In this case series, we describe our institutional experience with the Impella 5.5 in 6 adolescent patients.

MATERIALS AND METHODS

Ethical Statement

This study was approved by the institutional review board of Duke University Medical Center (PRO00101472), approved January 2, 2019. Individual patient consent was waived.

Patient Population and Data Collection

This retrospective observational study identified all patients aged less than 18 years who underwent Impella 5.5 placement from August 2020 to March 2023. Data were obtained from a prospectively maintained institutional database and manual chart review. Demographic data, including age, sex, height, weight, body mass index, body surface area (BSA), and family history of cardiomyopathy or congenital heart disease, were recorded. Bony chest wall width was measured at the level of the diaphragm on chest x-ray. Information regarding clinical course and outcomes was recorded. All data were maintained on protected servers.

Statistical Analysis and Visualization

Data were analyzed and visualized in GraphPad Prism (Dotmatics). Data are expressed as median (interquartile range) or mean ± standard deviation as indicated.

Operative Technique

Impella 5.5 insertion in the adolescent population presents unique operative considerations. Before incision, we inspect with echocardiogram for 5 key characteristics: (1) left ventricular thrombus, (2) right heart function, (3) significant septal defects, (4) aortic valve competency, and (5) device fit. For device fit, the cage must be inside the ventricle, the outflow must be above the aortic valve, and the device must fit through the aortic valve without occluding coronary or other arterial flow. Manufacturer contraindications include aortic valve diameter less than 1.5 cm and ventricular long-axis length less than 7 cm. A 4- to 6-cm right subclavian incision facilitates exposure of the right axillary artery. After administration of 5000 units of heparin, the right axillary artery is then clamped. A longitudinal arteriotomy is made, and we anastomose an 8-mm or 10-mm beveled (approximately 45° to 60°) Dacron chimney graft to the axillary artery, regardless of the patient's axillary artery diameter. The graft is tunneled superficial to the chest wall muscles and out through a separate incision inferior on the axilla. After clamp removal and assurance of hemostasis at the anastomosis, additional heparin is administered to achieve a goal activated clotting time of greater than 250 seconds. With the assistance of transesophageal echocardiography and fluoroscopy, a 0.035-inch diagnostic J-wire is manually placed through the graft and advanced to the ascending aorta. An AL-1 catheter advances the J-wire through the aortic valve and into the left ventricle (LV). After exchanging the J-wire for a 0.018-inch placement guidewire, the Impella 5.5 is advanced over the wire into the LV pointing toward the apex, with the bend below the aortic valve. This often requires graft palpation under fluoroscopy to guide the rigid motor housing through the vascular graft anastomosis, along the curvature of the subclavian and innominate arteries, and into the ascending aorta and then the ventricular apex. Additional maneuvers that can be helpful
include (1) papaverine solution applied topically on the axillary artery, (2) exchange for a stiffer wire, such as a 0.027-inch size, (3) dilator passage along the subclavian artery with fluoroscopic guidance, or (4) moving the arm up above the head. After removal of the wire, positioning of the device in the LV is assessed with fluoroscopy and transesophageal echocardiography. Generally, the distance from the center of the inlet cage to the aortic valve annulus is approximately 5 cm. After ensuring appropriate positioning, the device is started, and further adjustments to device position are made as needed. The Dacron graft is then cut to the appropriate size, the insertion sheath is advanced, and the device is secured in place. The right subclavian incision is then closed, and sterile dressings are applied. In cases where the device cannot be placed due to small artery size or acute angulation of the great vessels, options include aborting to left-sided axillary placement, direct aortic placement via partial or full sternotomy, or conversion to venoarterial extracorporeal membrane oxygenation (VA-ECMO) with a cannula placed within the Dacron graft, as described in case 3 of this report.

Perioperative Considerations

Given that use of the Impella 5.5 is not widespread in the adolescent population, there are several perioperative considerations when caring for these patients. All adolescent patients undergoing nonemergency evaluation for Impella 5.5 therapy received preoperative computed tomography (CT) or ultrasound imaging to exclude an aberrant right subclavian artery, which is not uncommon in patients with congenital heart disease (Table 1). Further, caring for these patients requires nursing staff to be familiar with Impella 5.5 management. Initially, all patients aged less than 18 years with the Impella 5.5 were cared for in the adult cardiothoracic surgical intensive care unit (ICU); however, with implementation of programmatic training of congenital nursing staff, these patients now remain in the pediatric cardiac ICU. After device placement, our patients are carefully monitored for hemolysis and initially receive twice-daily hemolysis laboratory tests, which include plasma-free hemoglobin, lactate dehydrogenase, and haptoglobin (Table 2). Anticoagulation is also carefully monitored. Ensuring proper placement of the device is critical to optimize function and avoid arrhythmia, and daily echocardiograms are performed to confirm device placement and facilitate any necessary repositioning. Generally, pulmonary artery catheters are used to monitor device-assisted output.

Case 1

A 13-year-old male patient (59.6 kg, BSA 1.58 m²) with a history of Coxsackie myocarditis complicated by dilated cardiomyopathy, severe pulmonary hypertension, and severe mitral regurgitation presented with acute on chronic systolic dysfunction. An intra-aortic balloon pump (IABP) was placed; however, ongoing cardiogenic shock led to placement of femoral VA-ECMO on post-IABP day 4. The patient was transferred to our institution, and the IABP was removed. Given worsening pulmonary hypertension and concern for left atrial hypertension, right heart catheterization was performed, which revealed elevated pulmonary pressures (pulmonary capillary wedge pressure [PCWP] 38 mm Hg, pulmonary vascular resistance [PVR] 108 dynes/sec/cm^-5, central venous pressure [CVP] 10 mm Hg). Balloon atrial septostomy was performed to alleviate left atrial hypertension; left atrial pressure at the time of septostomy was 43 mm Hg, with a left-to-right
Case 1
A 13-year-old male patient (59.6 kg, BSA 1.58 m²) with a history of Coxsackie myocarditis complicated by dilated cardiomyopathy, severe pulmonary hypertension, and severe mitral regurgitation presented with acute on chronic systolic dysfunction. An intra-aortic balloon pump (IABP) was placed; however, ongoing cardiogenic shock led to placement of femoral VA-ECMO on post-IABP day 4. The patient was transferred to our institution, and the IABP was removed. Given worsening pulmonary hypertension and concern for left atrial hypertension, right heart catheterization was performed, which revealed elevated pulmonary pressures (pulmonary capillary wedge pressure [PCWP] 38 mm Hg, pulmonary vascular resistance [PVR] 108 dynes·sec·cm⁻⁵, central venous pressure [CVP] 10 mm Hg). Balloon atrial septostomy was performed to alleviate left atrial hypertension; left atrial pressure at the time of septostomy was 43 mm Hg, with a left-to-right gradient of 14 mm Hg after septostomy creation. On ECMO day 11, an acute upper gastrointestinal bleed was identified, and the patient was taken to the operating room for ECMO decannulation and placement of a right axillary Impella 5.5.

Preoperative imaging identified an axillary artery diameter of 4.1 mm, aortic annulus diameter of 1.5 cm, aortic annulus to LV apex distance of 10.0 cm, and bony chest width of 32 cm. There were no operative complications, and postoperative care was completed in the adult cardiothoracic surgical ICU. The patient was extubated on postoperative day 1, and the upper gastrointestinal bleed resolved after decreasing anticoagulation while on the Impella 5.5. On postoperative day 17, a suitable donor was identified, and the patient underwent orthotopic heart transplantation and removal of the Impella 5.5. The patient's pulmonary hypertension resolved with continued inotropic support and diuresis; catheterization on post-transplant day 7 demonstrated favorable hemodynamics (PCWP 10, CVP 5), and all inotropic support was weaned. The post-transplant course was complicated by a positive crossmatch requiring 5 plasmapheresis sessions and intravenous immunoglobulin, given signs of antibody-mediated rejection on the initial cardiac biopsy. He was discharged on post-Impella 5.5 day 30. Eight days after discharge, he was briefly readmitted after routine cardiac catheterization demonstrated decreased cardiac function, raising concern for continued antibody-mediated rejection. However, endomyocardial biopsy did not demonstrate signs of cell- or antibody-mediated rejection, and cardiac function improved with diuresis. He continues to do well with routine outpatient management.

Case 2
A 13-year-old male patient (52.5 kg, BSA 1.59 m²) with a history of dilated cardiomyopathy diagnosed at birth presented with emesis, poor oral intake, and radiating chest pain after 1 week of viral symptoms. On arrival to the emergency department, the electrocardiogram demonstrated no ST changes, and infectious workup was positive for parainfluenza virus. His echocardiogram demonstrated acutely decompensated heart failure, with an ejection fraction of 18% from a baseline of 28%. He was transferred to our institution for further evaluation. The patient demonstrated some improvement in hypotension and cardiac output with milrinone and epinephrine but experienced refractory atrial fibrillation/flutter requiring rate control with diltiazem. Given persistently inadequate cardiac output and refractory arrhythmia, the team decided to pursue Impella 5.5 therapy to support electrical cardioversion and assist in possible functional recovery or as a bridge to a durable ventricular assist device or transplant. Preoperative imaging revealed an aortic annulus of 2.0 cm, right axillary artery diameter of 7.5 mm, aortic annulus to LV apex distance of 9.9 cm, and bony chest wall width of 28 cm. The Impella 5.5 was placed successfully and permitted electrical cardioversion. He was extubated on post-Impella day 1 and subsequently listed status 1A for heart transplantation. On post-Impella day 7, the patient experienced ventricular tachycardia that resolved after Impella repositioning. The patient underwent transplantation on Impella day 21 and continues to do well.
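For context on the units used in these hemodynamic reports, pulmonary vascular resistance in dynes·sec·cm⁻⁵ is conventionally derived as below; mean pulmonary artery pressure and cardiac output are not reported in the text, so the example inputs are hypothetical.

```python
# Illustrative only: standard derivation of PVR in dynes.sec.cm^-5. Mean
# pulmonary artery pressure (mPAP) and cardiac output (CO) are not reported
# in the text, so the inputs below are hypothetical.
def pvr_dynes(mpap_mmhg: float, pcwp_mmhg: float, cardiac_output_l_min: float) -> float:
    """PVR = 80 * (mPAP - PCWP) / CO."""
    return 80.0 * (mpap_mmhg - pcwp_mmhg) / cardiac_output_l_min


print(round(pvr_dynes(45.0, 38.0, 5.2)))   # hypothetical inputs -> 108
```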
Case 3
A 13-year-old male patient (72.5 kg, BSA 1.82 m²) presented with a history of D-transposition of the great arteries, status post aortic translocation and right ventricle to pulmonary artery conduit in infancy, and pulmonary valve replacement with a 23-mm Sapien valve (Edwards Lifesciences) at 10 years of age. He developed progressive systolic and diastolic dysfunction with ventricular tachycardia. After the patient experienced pulseless cardiac arrest with return of spontaneous circulation achieved after defibrillation, the decision was made to proceed with advanced mechanical circulatory support (MCS) while the patient was evaluated for cardiac transplantation. Preoperative imaging revealed an aortic annulus of 1.5 cm, right axillary artery diameter of 5.3 mm, aortic annulus to LV apex distance of 12.3 cm, and bony chest wall width of 30 cm. The patient was listed status 1A for cardiac transplantation but was unable to receive offers because of prohibitively high panel reactive antibodies requiring desensitization. The patient was extubated on Impella day 15 and underwent Impella 5.5 removal and durable left ventricular assist device (LVAD) placement on Impella 5.5 day 42. He is undergoing desensitization for panel reactive antibodies before transplant.

Case 4
A 13-year-old female patient (45.0 kg, BSA 1.37 m²) presented with 5 days of upper respiratory symptoms and 1 day of altered mental status. Laboratory results indicated multiorgan failure, and echocardiogram revealed severely decreased biventricular function, with thrombus in the LV. The patient was intubated, placed on high-intensity inotropes, and transferred to our institution for transplant evaluation. Preoperative CT imaging could not be completed because of the urgent need for surgical intervention. Intraoperatively, the right axillary artery diameter was approximately 4 mm. The aortic annulus measured 1.59 cm, and bony chest wall width was 23 cm. She was taken to the operating room immediately upon arrival at our institution for Impella 5.5 placement. In the operating room, an axillary cutdown was performed, and a 10-mm Dacron graft was anastomosed to the axillary artery. The Impella 5.5 was inserted into the Dacron graft and through the axillary artery; however, subclavian artery angulation was too acute to facilitate Impella 5.5 passage. We elected to place the patient on VA-ECMO via the axillary artery graft, followed by atrial septostomy for left atrial decompression. Post-septostomy right heart catheterization demonstrated persistently elevated left atrial pressure, and a percutaneous left atrial vent was placed. Continuous renal replacement therapy was initiated on VA-ECMO day 8 for anuric renal failure. VA-ECMO and the left atrial vent were discontinued after 11 days, and echocardiogram demonstrated mild to moderate LV dysfunction. Unfortunately, 53 days after ECMO decannulation, the patient had acutely decompensated cardiac function, resulting in hypotensive arrest. Given persistent renal failure and worsening cardiac function, the patient was transitioned to comfort care and died 54 days after ECMO decannulation.
Case 5
A 16-year-old male patient (113.2 kg, BSA 2.41 m²) with a history of mild COVID-19 infection 1 month before symptom onset presented with vomiting, syncope, and ventricular ectopy. He was diagnosed with dilated cardiomyopathy and transferred to our institution for transplant evaluation. His heart failure was refractory to medical therapy, including carvedilol, lisinopril, milrinone, and sotalol, and progressed to the development of persistent nonsustained ventricular tachycardia. Cardiac magnetic resonance imaging demonstrated diffuse epicardial scarring consistent with a chronic, progressive cardiomyopathy rather than an acute COVID-19-associated myocarditis. Cardiac catheterization showed significantly elevated pulmonary pressures (PCWP 34 mm Hg, PVR 560 dynes·sec·cm⁻⁵, CVP 6 mm Hg). Preoperative imaging revealed an aortic annulus of 1.36 cm, right axillary artery diameter of 13.6 mm, aortic annulus to LV apex distance of 11.4 cm, and bony chest wall width of 37 cm. After multidisciplinary discussion, the team elected to proceed with MCS and Impella 5.5 implantation as a bridge to transplantation. Unfortunately, on postoperative day 5, the patient had a ventricular fibrillation cardiac arrest requiring defibrillation, caused by device failure secondary to coagulopathy. The ICU and cardiac surgery teams proceeded with intracorporeal LVAD implantation as a bridge to transplantation. After Impella 5.5 explant, the team observed complete occlusion of the outflow tract by clot (Figure 1). Repeat catheterization 26 days after durable LVAD insertion showed significant improvement in right-sided heart pressures, permitting transplantation. The patient received cardiac transplantation on post-Impella 5.5 day 36. The post-transplant course was initially complicated by low cardiac output, which improved with diuresis and inotropic support. A filling defect in the transverse aorta was observed on left heart catheterization, which prompted CT angiography and identification of a small, contained transverse aortic arch dissection that is being managed with labetalol to a blood pressure goal of less than 135/85 mm Hg and regular CT angiograms to evaluate dissection progression. He was discharged on post-Impella day 50 and continues to do well with outpatient management.
Case 6
A 16-year-old female patient (81.4 kg, BSA 1.95 m²) with a history of TNNT2-positive dilated cardiomyopathy status post orthotopic heart transplantation at age 14 years presented with allograft rejection. Transthoracic echocardiography showed severely decreased LV ejection fraction (20%) and severe right ventricular dysfunction. She was transferred to our institution for retransplant evaluation. After right heart catheterization showed reduced cardiac index and elevated filling pressures, the cardiothoracic surgery team elected to proceed with femoral VA-ECMO cannulation. Preoperative imaging revealed an aortic annulus of 2.9 cm, right axillary artery diameter of 7.9 mm, aortic annulus to LV apex distance of 9.3 cm, and bony chest wall width of 28 cm. The ECMO course was complicated by cannulation-site bleeding and compartment syndrome on ECMO day 3, leading to Impella 5.5 implantation at ECMO decannulation. A suitable donor was identified, and she underwent repeat heart transplantation 10 days after insertion of the Impella 5.5. Cardiac catheterization on post-transplant day 7 showed improved hemodynamics and no evidence of rejection. Her hospital course was complicated by a generalized seizure suspected to represent posterior reversible encephalopathy syndrome related to post-transplant hypertension. She was discharged on post-Impella day 24 and continues to do well with routine outpatient management.

DISCUSSION
In this article, we describe our institutional experience with the Impella 5.5 in 6 adolescent patients. Four patients were successfully bridged to transplant, with 1 requiring an initial bridge to LVAD because of Impella device failure. One patient could not undergo Impella 5.5 placement because of acute subclavian artery angulation and was instead supported with VA-ECMO through the axillary graft; this case illustrates the alternative approach of using the already-placed axillary artery Dacron graft as a conduit for VA-ECMO.
The most notable adverse event in this patient cohort occurred in case 5. This patient experienced Impella 5.5 outflow thrombosis, resulting in complete outflow occlusion, ventricular fibrillation cardiac arrest, and subsequent conversion to an intracorporeal LVAD. This patient had a history of presumed COVID-19 infection approximately 1 month before the onset of heart failure symptoms; several members of his household were symptomatic, although only 1 sibling was tested for COVID-19 at that time, with a positive result. The patient's COVID-19 symptoms were reportedly mild, with only 1 day of fever and no respiratory symptoms. The patient did test positive for COVID-19 at the time he presented with symptoms of heart failure. Thus, it is possible that this patient's coagulopathy was related to recent COVID-19 infection. We also considered insufficient anticoagulation or heparin-induced thrombocytopenia; however, heparin-induced thrombocytopenia panels were negative, and activated partial thromboplastin time ranged from 40.7 to 44.6 seconds in the 24 hours before thrombosis. The most likely etiology is device malpositioning; this patient experienced significant ectopy secondary to presumed arrhythmogenic cardiomyopathy, and the Impella 5.5 was repositioned multiple times to avoid triggering ventricular arrhythmia. It is possible that repositioning to avoid ectopy inadvertently caused the Impella 5.5 outflow cannula to push against the aortic wall, leading to stasis and serving as a nidus for clot formation or aspiration. Notably, we have not observed this complication in the more than 100 Impella 5.5 devices we have placed in adults at our institution. Although the etiology remains uncertain, this event underscores the importance of regular verification of device position with echocardiogram and routine laboratory monitoring in the immediate postoperative period.

Currently, only 1 adolescent patient supported with the Impella 5.5 is described in the literature: a 14-year-old with acute systolic dysfunction who was bridged to transplant after 21 days of Impella 5.5 support (Table 1).8 Our case series adds to a growing body of literature describing minimally invasive MCS in adolescent patients. One limitation is that many of our patients aged less than 18 years could be considered adult sized, and our smallest successful insertion was in a patient weighing 51 kg. However, age is positively correlated with vessel diameter independent of weight, and body size may not always predict successful insertion.13,14 Further studies are needed to determine exact measurement thresholds and guidelines for use of the Impella 5.5 in adolescent patients.

CONCLUSIONS
The Impella 5.5 can be used to bridge adolescent patients to cardiac transplantation (Figure 2). Although exact size cutoffs and anatomic guidelines are still being determined, this article gives a framework for the use of the device in adolescents.

[Graphical abstract: Can the Impella 5.5 be safely used in adolescents? August 2020 to March 2023; 6 patients aged less than 18 years underwent attempted Impella 5.5 placement; patient characteristics and outcomes were recorded. The Impella 5.5 can be placed in adolescent patients, but vessel size and angulation can be prohibitive; postoperative monitoring of hemolysis risk and device positioning is critical.]
A genetic map of several mutations affecting the mucopeptide layer of Escherichia coli

Several temperature-sensitive mutants of Escherichia coli were isolated which lyse at the restrictive temperature. Some of these possess a biochemically defined lesion in cell-wall mucopeptide synthesis. Three genes, termed murC, E and F, have been localized between the azi and leu markers. From transductional data a fine structure map was constructed of the mur mutations, establishing the order of the genes. The genetic relationship between these cell wall genes and neighbouring genes involved in cell division is discussed.

1. INTRODUCTION
Among temperature-sensitive mutants of Escherichia coli K-12, several showed lysis when grown at the restrictive temperature; they were denoted by TKL (A. Rorsch, unpublished). The TKL strains were analysed biochemically by Lugtenberg, who found that many of the mutations involve genes for the 'adding enzymes' (Ito & Strominger, 1962a, b; Comb, 1962) which synthesize the precursor of the mucopeptide layer of the cell wall. In this process, UDP-N-acetylglucosamine is converted in two steps to UDP-N-acetylmuramic acid, to which are added, sequentially, L-alanine, D-glutamic acid, meso-diaminopimelic acid and D-alanyl-D-alanine.

Preliminary evidence indicated that many lts mutations in TKL strains which lyse at the restrictive temperature are closely linked and are also linked to the fts marker, which gives rise to filament formation (van de Putte, van Dillewijn & Rorsch, 1964). It seemed of interest to investigate the relationship between the mutations in this complex of genes concerned with cell division and cell-wall synthesis, in view of the frequent occurrence in bacteria of clustering of genes with related functions for regulational purposes. In this paper some of the genes specifying the biochemical steps mentioned are characterized, their order is reported and their genetic relation with the fts mutations is discussed.

2. MATERIALS AND METHODS
(i) Bacterial and phage strains
The genotypes of the strains can be found in Table 1. The TKL and TKF strains were obtained after mutagenesis with N-methyl-N′-nitro-N-nitrosoguanidine of strains derived from CR 34 (Okada, Yanagisawa & Ryan, 1960). Transducing phage was phage 363 from Dr A. Rorsch.

(ii) Conjugation and transduction methods
For conjugation, log-phase cells growing in nutrient broth were mixed at 28 °C, so as to give approximately 2 × 10⁷ Hfr cells and 2 × 10⁸ F⁻ cells/ml. After 3 h the mixture was plated on appropriate media to select recombinants, using the auxotrophy of the Hfr parent for counter-selection. Heat-tolerant recombinants were scored on minimal medium of low ionic strength, transferred to 42 °C after a period of 4 h at 28 °C to allow complete expression. The relative frequencies of different recombinants in such an uninterrupted mating at 28 °C do not deviate essentially from those obtained at 37 °C.

Transduction with phage 363 was performed mainly according to the method of Lennox. The lysates used for transduction were normally obtained at 28 °C, following the method of Signer (1966). Phage titres ranged from 1 to 5 × 10⁹. The number of transductants at 28 °C is not significantly lower than at 37 °C. Recombinant colonies were tested for the presence of unselected markers by suspending them in saline and streaking on the appropriate media, in which, to prevent contamination with parental types, the original selective procedure was repeated.
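As an aside on how linkage figures of the kind reported below are obtained, the sketch illustrates the raw cotransduction frequency, together with Wu's (1966) relation between cotransduction frequency and map distance; whether that relation applies quantitatively to phage 363 is our assumption for illustration, not a claim of the paper.

```python
# Illustrative only: raw cotransduction frequency, plus Wu's (1966) relation
# freq = (1 - d/L)^3 between frequency and marker separation d, with L the
# length of the transduced fragment (about 2 min for P1-like phages).
# Applying this relation to phage 363 is our assumption for illustration.
def cotransduction_frequency(cotransductants: int, total: int) -> float:
    return cotransductants / total


def wu_map_distance(freq: float, fragment_min: float = 2.0) -> float:
    """Solve freq = (1 - d/L)^3 for d, in minutes."""
    return fragment_min * (1.0 - freq ** (1.0 / 3.0))


f = cotransduction_frequency(90, 100)     # e.g. 90% cotransduction with leu
print(f"freq = {f:.2f}, d = {wu_map_distance(f):.2f} min")
```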
3. RESULTS
(i) Characterization of the strains
When TKL strains are grown at the restrictive temperature (42 °C), lysis occurs (Wijsman, 1972). Since a concentration of 10% sucrose in the medium is able to stabilize the spheroplasts formed, it is concluded that the lesion affects the cell wall rather than the cytoplasmic membrane. The growth of several of the strains is restored by the addition of 20% sucrose to the plate; in others this is not the case, but a correlation of this phenomenon with the enzymic function affected (as described below) was not found.

(ii) Location of the mutations
The TKL strains were mated to strain KMBL 171, and from the number of lts⁺ recombinants in relation to the gradient of transmission of the other markers it was concluded that the lts mutations here studied are located near leu, to which they are at least 80% linked. Close linkage of lts mutations to leu and azi was confirmed by transduction with phage 363 (Table 2). Since some fts mutations, in strains forming filaments at the restrictive temperature, had likewise been located near leu by van de Putte et al. (1964), they were compared with the lts mutations. Three-point crosses presented in Table 3 provide evidence that both lts and fts mutations are located between leu and azi. In this connexion it is relevant that fts-12 is the marker closest to azi, as will appear below. On account of its high linkage to azi (92%) in conjugation with KMBL 171, it was anticipated that the lts mutation in strain H 1119, too, might be located among the other lts and fts mutations.

(iii) The sequence of the mutational sites
It seemed of interest to find the exact order of the temperature-sensitive mutations by intercrossing strains carrying different alleles of leu. Originally all the strains considered, except H 1119, were leu⁻. The leu⁺ allele was introduced into all of them and the resulting strains were used as hosts for phage 363. With the lysates obtained, three-factor transductions could be performed with the leu⁻ strains as recipients. When no more than two crossovers are necessary to produce a leu⁺lts⁺ recombinant (Fig. 1, left), the percentage of the leu⁺ allele among heat-tolerant recombinants is high, indicating, in the example, that lts-1 is located to the left of lts-2. On the other hand, when at least four crossovers are necessary to produce leu⁺lts⁺ recombinants, the ratio leu⁺lts⁺/lts⁺ will be low (Fig. 1, right).

Fig. 1. A comparison between two hypothetical crosses with a high (left) and a low (right) fraction of leu⁺ among lts⁺ recombinants, respectively. Full line: crossover essential for lts⁺ recombinant formation. Dashed: crossover pattern resulting in lts⁺leu⁺ recombinants.

In this way the order of different mutations can be deduced (Gross & Englesberg, 1959). A sketch of this classification is given below.
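The classification can be stated compactly as below; the helper and the numerical cutoff separating 'high' from 'low' ratios are our illustrative choices, not published thresholds.

```python
# Illustrative helper (ours): classify the fraction of leu+ among selected
# lts+ transductants. Per Fig. 1, a high fraction means two crossovers
# suffice, a low fraction means at least four are required; the 0.4 cutoff
# is an arbitrary illustration, not a published threshold.
def leu_fraction_class(leu_plus_lts_plus: int, lts_plus_total: int,
                       cutoff: float = 0.4) -> str:
    frac = leu_plus_lts_plus / lts_plus_total
    label = ("high: two crossovers suffice" if frac >= cutoff
             else "low: at least four crossovers")
    return f"{100 * frac:.0f}% leu+ ({label})"


# Example: 4 of 30 heat-tolerant recombinants carrying leu+ (a 'low' value).
print(leu_fraction_class(4, 30))
```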
In one case azi was introduced into the donor strain, to test the crossing-over pattern independently. Since azi and leu are situated on different sides of the thermosensitive mutations, their frequencies among heat-resistant recombinants are expected to be inversely related. From the cross TKF 12 × phage 363 (TKL 15 leu⁺ azi⁻), 30 heat-tolerant recombinants were analysed; only four of these (13%) carried the leu⁺ allele, a 'low' value, indicating that at least four crossovers are involved. Of these 30 recombinants, 21 (69%) carried the azi⁻ allele, pointing to the requirement for only two crossovers for the formation of lts⁺azi⁻ recombinants. The outcome of this cross confirms the order leu…lts-15…fts-12…azi.

A number of thermosensitive mutants are very leaky, producing 'lawns' when whole cultures are plated at the restrictive temperature, so that heat-tolerant recombinants cannot be selected. In fact, virtually only TKL 46 behaved as a good recipient, the viability of recombinants in strains such as TKL 39 being low at 42 °C. The ratio leu⁺lts⁺/leu⁺, in which selection is made for leu⁺ instead of for lts⁺ transductants, was therefore introduced for comparison with the leu⁺lts⁺/lts⁺ ratio. As Table 4 shows, the two were found to give the same information regarding the ordering of sites, even though in crosses between temperature-sensitive mutants in general the ratio of leu⁺ transductants at 42°/28° is much lower than in the case of wild-type strains (115/89 for KMBL 146), so that discrimination between 'high' and 'low' ratios becomes less easy. Accordingly, the leu⁺lts⁺/leu⁺ ratio alone can be used to order those mutations from which no lts⁺ recombinants could be directly selected. The results are given in Table 5 and show that the order of the mutations is the following: leu…(lts-19, lts-15…lts-39)…lts-46…lts-119…lts-7…lts-22…fts-10…fts-15…fts-12…azi.

[Table 6. Genetic symbols for some of the enzymes concerned with the synthesis of the mucopeptide layer.]

4. DISCUSSION
(i) Correlation of the biochemical and the genetic data
The genetic map gains its interest from the findings of Lugtenberg, de Haas-Menger & Ruyters (1972), who have tested in the mutants, in comparison with the wild type, the activity of the enzymes listed in Table 6.

Of the five mutants in which the diaminopimelic acid-adding enzyme activity is affected, lts-24 has not been mapped precisely because of its leaky character, but it can be cotransduced with leu (90% cotransduction). When diaminopimelic acid (20 µg/ml) is added, growth at 42° is restored. This observation may be compared with reports on some mutants in aminoacyl-tRNA synthetases for which, too, the addition of even a small surplus of their amino acid substrate restores sufficient in vivo activity (Neidhardt, 1966; several references in Folk & Berg, 1970).

In H 1119 the activity of the L-alanine-adding enzyme is strongly affected; the same is found for ST 622. The temperature-sensitive mutation of ST 622 was reported to be located near leu (Matsuzawa et al. 1969). The same is true for ST 640 (Matsuzawa et al. 1969), but here the activity of the D-alanine:D-alanine ligase is affected. All the enzymes mentioned are fully active in strains with an fts mutation. In view of the biochemical data it seems warranted to give the genetic symbol mur to genes specifically concerned with the synthesis of the pentapeptide precursor of the 'murein' (Weidel & Pelzer, 1964). Two enzymes concerned with the synthesis of UDP-N-acetylmuramic acid, a pyruvate transferase and an enolpyruvate reductase, have been described for Enterobacter (Gunetileke & Anwar, 1966, 1968), but may be assumed to be present in E. coli as well. The symbols murA and murB are reserved for these two enzymes; a mutant in murB was described and mapped by Matsuzawa et al. (1969). It is located at about 78 min on the current map, far away from the present cluster.
Symbols for the adding enzymes and for the D-alanine:D-alanine ligase are found in Table 6. They have been used in the genetic map (Fig. 2), which shows that the genes murE, murF and murC are very close to each other, apparently forming a genetic unit, to which the as yet unidentified gene murD, as well as ddl, located in this region by Matsuzawa et al. (1969), may also belong.

The mutant TKL 7 does not appear to be affected in one particular enzyme of those tested, although its behaviour was abnormal (E. J. J. Lugtenberg, personal communication). TKL 22 is interesting in that its growth at 42 °C can be restored by the addition of 5 mg/ml D-alanine in synthetic medium. It remains to be seen whether lts-7 and lts-22 affect regulatory functions or other cell wall enzymes.

(ii) On the nature of the fts mutations
Whether the fts mutations affect one gene or more than one cannot yet be said. Their concentration between the mur region and azi points to a specific role of the fts gene(s) in the process of cell division. Filament formation as such could also be a result of an aspecific weakening of the cell wall (Bazill, 1967). In this respect it is interesting to find that both TKL 7 and TKL 22, whose mutations are located between the mur genes and the fts complex, form short filaments shortly before or during lysis. Their phenotype, intermediate between lysis and filament formation, is in remarkable agreement with their position between the lts and fts mutations. Taylor (1970) has claimed that azi is an older synonym of fts. It must be emphasized, however, that the fts mutations do not confer any increased resistance to sodium azide at 28 °C, unlike the mutants of type 7, described by Yura & Wada (1968), which also form filaments at 42 °C. Furthermore, the fts mutations mapped all reside to one side of the classical azi mutation in strain HfrH. However, in this respect these fts mutations may represent a special case, because they were selected by filtration at 42 °C, followed by recovery at 28 °C (van de Putte et al. 1964). For some of the TKF mutants isolated at random the process of filtration at the restrictive temperature would already be lethal; this phenomenon might have a genetic basis, even though van de Putte (1967) has found that these random mutations, too, are located near leu. It is concluded that the fts mutations mapped are not located in the azi gene, while for other fts mutations fine structure data are needed.

(iii) The relations with neighbouring gene complexes
Whatever their relationship may mean causally, a close correlation is found between the phenotypes and the loci of the mutations. Taylor (1960) gives 0.5 min as the distance between leu and azi. Of the 10-15 genes that can be accommodated on such a segment of the genophore, possibly the greater part is known at present. Of these genes several are concerned with mucopeptide synthesis, and these possibly form an operon. The mutations in fts, envA, giving rise to chain formation (Normark, Boman & Matsson, 1969), pea (Yura & Wada, 1968) and azi affect the process of cell division; of these, pea and azi have a special relation with the function of the membrane (Yura & Wada, 1968). The filament-forming azi mutant of type 7 is reported by Yura & Wada to degrade its DNA when shifted to the restrictive temperature (another difference with fts in view of the data of van de Putte et al. 1964).
A relation with DNA replication of this complex of cell-division genes seems to be provided by the adjacent mutT1 allele, which is supposed by Cox & Yanofsky (1969) to induce changes in the normal base sequence of the DNA by coding for a protein that is a component of an error-detecting system associated with DNA replication. The mutation has been located between azi and leu, very close to azi; the authors mentioned the possibility that mutT and azi are synonymous.

When their sequence becomes known, a complementation analysis involving all the mutations mentioned would be worthwhile in revealing the number of genes. It seems to be unlikely that the close topographic relationship of these cell-envelope and cell-division genes is fortuitous, although the selective advantage of this clustering, perhaps concerned with regulation, remains to be studied.

I thank Dr E. J. J. Lugtenberg for allowing me to make use of unpublished data and for many a discussion of mucopeptide synthesis problems, Professor A. Rorsch for providing the thermosensitive strains and Tiny van der Ven-Matser for devoted technical assistance. I am grateful to Dr H. J. Rogers for his comments on an earlier draft of this paper.

Fig. 2. Genetic sequence of cell envelope loci between leu and azi. Rectangles indicate genes, but distances are shown only approximately, the mur genes, for example, being possibly contiguous. The numbers of the mutants refer to the TKL number, except for murC (H number) and for fts (TKF number). The aberrant symbol used for the site where mutations 7 and 22 are found is meant to indicate that they are likely not to specify a separate gene, but possibly a DNA fragment with a regulatory role. References: a = Matsuzawa et al. (1969); b = Normark et al. (1969); c = Cox & Yanofsky (1969); d = Yura & Wada (1968).

Table 1. Strains of Escherichia coli K-12. All strains were provided by Dr A. Rorsch, except H 1119 (from Dr P. G. de Haan) and ST 622 and ST 640 (from Dr M. Matsuhashi).

Table 4. Ratio of the numbers of transductants per 0.1 ml plated when thermosensitive lts and fts strains carrying different alleles of leu are crossed by transduction. All numbers were directly scored on the selection plates, except those marked with *, resulting from the analysis of isolated colonies.

Table 5. Ratio of the numbers of transductants per 0.1 ml plated when 'leaky' thermosensitive lts and fts strains carrying different alleles of leu are crossed. All numbers were directly scored on the selection plates, except those marked with *, resulting from the analysis of isolated colonies.
Insulating phases of the infinite-dimensional Hubbard model

A theory is developed for the T = 0 Mott-Hubbard insulating phases of the infinite-dimensional Hubbard model at half-filling, including both the antiferromagnetic (AF) and paramagnetic (P) insulators. Local moments are introduced explicitly from the outset, enabling ready identification of the dominant low energy scales for insulating spin-flip excitations. Dynamical coupling of single-particle processes to the spin-flip excitations leads to a renormalized self-consistent description of the single-particle propagators that is shown to be asymptotically exact in strong coupling, for both the AF and P phases. For the AF case, the resultant theory is applicable over the entire U-range, and is discussed in some detail. For the P phase, we consider in particular the destruction of the Mott insulator, the resultant critical behaviour of which is found to stem inherently from proper inclusion of the spin-flip excitations.

1. INTRODUCTION
Since its inception more than thirty years ago [1], the Hubbard model has become the canonical model of interacting fermions on a lattice. Although possibly the simplest model to describe competition between electron itinerancy and localization, with attendant implications for a host of physical phenomena from magnetism to metal-insulator transitions, its simplicity is superficial and an exact solution exists only for d = 1 dimension [2]. Recently, Metzner and Vollhardt [3] have pointed to the importance of the opposite extreme, d = ∞. In suppressing spatial fluctuations, the many-body problem here simplifies considerably, reducing to a dynamical single-site mean-field problem. Motivated in part by the expectation that an understanding of the d = ∞ limit will serve as a starting point for systematic investigation of finite dimensions, and by the knowledge that some important vestiges of finite-d behaviour remain inherent in the d = ∞ limit, intense study of the ½-filled d = ∞ Hubbard model on bipartite lattices has since ensued; for recent detailed reviews, see Refs [4,5,6].

The true ground state of the model is an antiferromagnet (AF) for all interaction strengths U > 0. One aim of the present work [7] is to develop a theory for the d = ∞ AF which, in contrast to previous theories for the AF phase [8,9,10], is reliable over the entire U-range, and in particular becomes exact in the U → ∞ strong coupling limit both at ½-filling, where the Hubbard model maps onto the AF Heisenberg model, and in the one-hole sector, where it reduces to the t-J model [11].

The majority of previous work [4,5,6] on the d = ∞ Hubbard model has focused on the paramagnetic (P) phase that results, even at T = 0, simply by neglecting the magnetic ordering (or suppressing it via frustration [5]). One highlight of this work has been the emergence of a detailed description of the Mott metal-insulator transition, although here too the picture is not complete: for example, a firm understanding of the mechanism by which the T = 0 Mott insulating solution is destroyed, and even whether it is continuous or first-order, remains elusive [5,12]. A second aim of this paper is to focus on the insulating state of the P phase, and to develop a theory for it on a footing essentially identical to that for the AF, which likewise becomes exact in strong coupling and which permits an analysis of the destruction of the Mott insulator.
In seeking to develop a 'unified' description of the AF and P insulating phases, we adopt a rather different approach to that taken in previous work [4,5,6], by introducing explicitly, and from the outset, the notion of site local moments. To this end we consider first a conventional T = 0 mean-field approach to the problem in the form of unrestricted Hartree-Fock (UHF), together with a random phase approximation (RPA) for transverse spin excitations of the mean-field state. Despite the limitations of such an approach per se, its importance resides in enabling identification of key low energy scales for insulating spin-flip excitations. Since spatial fluctuations are suppressed for d = ∞, the low energy spin-flip excitations are found to be Ising-like and (for each phase) characterized by a single scale, ω_s. This has a simple physical interpretation. For the AF, ω_s = ω_p(U) is essentially just the energy cost of flipping a spin in the Néel ordered background; the ubiquity of antiferromagnetism for all U > 0 leads naturally to ω_p > 0 for all U, with ω_p ∼ 1/U as U → ∞, as one expects in the Heisenberg limit. For the P phase by contrast, where magnetic ordering is absent, the fact that a given spin is equally likely to be surrounded by ↑- or ↓-spins, and thus (for d = ∞) has as many ↑- as ↓-spin neighbours, ensures that the corresponding spin-flip energy cost ω_s = 0 for all U in the insulating state.

Identification of the low energy spin-flip scales, while crucial to the present work, is preliminary: to transcend the limitations of the conventional 'static' mean-field approach, single-particle processes must subsequently be coupled dynamically to the transverse spin-flip excitations. It is this which, in leading as we shall describe to a self-consistent description of the single-particle Green functions, enables the aims outlined in the preceding paragraphs to be achieved.

The Hubbard Hamiltonian, in standard notation, is

  H = −t Σ_(ij),σ c†_iσ c_jσ + U Σ_i n_i↑ n_i↓     (1.1)

with the (ij) sum over nearest neighbour sites on a bipartite lattice of coordination number Z: a Bethe lattice (on which in practice we shall largely focus), or a d-dimensional hypercube. To ensure a non-trivial limit as d → ∞ [3], the hopping is scaled as t = t*/√(2Z).
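The scaling can be motivated by a standard second-moment argument; the short derivation below is our illustrative addition rather than a step taken from the paper itself.

```latex
% Our illustrative second-moment argument for t = t_*/\sqrt{2Z}: the
% variance of the one-particle band must remain finite as Z -> infinity.
\int \mathrm{d}\omega \, \omega^{2} D^{0}(\omega)
  = \sum_{j\ \mathrm{n.n.}\ i} t_{ij}^{2}
  = Z t^{2}
  = \frac{t_{*}^{2}}{2}\,.
```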
The paper is organised as follows. UHF+RPA, and the spin-flip scales referred to above, are discussed in §2. Emphasis is also given here to simple physical arguments which, in highlighting both the deficiencies and virtues of UHF+RPA, indicate what is required to go beyond it; particular attention is given in this regard to UHF for the P phase, in view of its close relation to the early work of Hubbard [13] and the Falicov-Kimball model [14]. Dynamical coupling of single-particle processes to the transverse spin excitations is considered in §3, leading (§3.1) to a renormalized self-consistent approximation for the (T = 0) single-particle Green functions upon which we subsequently concentrate. In §3.2 the strong coupling behaviour is examined analytically, and shown to be asymptotically exact for both the P and AF phases. Results are given in §3.3, focusing in particular on single-particle spectra for the AF from strong to weak coupling, and on a discussion of the localization characteristics of the single-particle excitations; the latter are quite subtle, and point to the delicacy of the limit U → ∞ for the AF phase. For the P insulator, single-particle spectra are discussed briefly in §3.3, before considering the destruction of the Mott insulating solution in §4. The single-particle gap is found to close continuously, with an exponent ν = 1, at a critical U_c = 3.41t*. The origins of this behaviour are found to stem from inclusion of the ω_s = 0 spin-flip scale in the interaction self-energies, pointing to the importance of such throughout the entire insulating regime, and not solely in obtaining the exact strong coupling limit. The results of §4 are in good agreement with recent numerical work [12], as discussed in §5.

2. CONVENTIONAL MEAN-FIELD APPROACH
We focus on the zero temperature single-particle Green functions, defined for the site-diagonal element by

  G_ii;σ(t) = −i⟨T̂{c_iσ(t) c†_iσ(0)}⟩     (2.1)

and separated for later purposes into retarded (+, t > 0) and advanced (−, t ≤ 0) components. The essential feature of d = ∞ is that the corresponding interaction self-energy is site-diagonal [3,15], Σ_ij;σ(ω) = δ_ij Σ̃_iσ(ω); here, and throughout, ω̃ denotes frequency relative to the Fermi level, viz ω̃ = ω − U/2. G_ii;σ(ω) may be written as

  G_ii;σ(ω) = [ω̃ − Σ̃_iσ(ω) − S_iσ(ω)]⁻¹     (2.2a)

where S_iσ is the 'medium' self-energy (which alone survives in the non-interacting limit), expressing hopping of σ-spin electrons to neighbouring sites. Simple application of Feenberg's renormalized perturbation theory [16,17] shows that, for d = ∞ but regardless of lattice type, S_iσ is a functional solely of the site-diagonal Green functions,

  S_iσ(ω) = S_iσ[{G_jj;σ(ω)}]     (2.2b)

The functional dependence is particularly simple for the Bethe lattice (BL) on which we concentrate, namely

  S_iσ(ω) = Σ_j t_ij² G_jj;σ(ω)     (2.3)

with t_ij = t*/√(2Z) the nearest neighbour hopping element and the sum running over the nearest neighbours j of site i. Note that this is quite general; no assumption has been made about magnetic ordering or otherwise.

We consider now a conventional mean-field approach to the single-particle Green functions.

2.1 UHF
For both the AF and P phases, a Hartree-Fock approximation (by which, we emphasize, is here meant spin-unrestricted Hartree-Fock (UHF)) is the simplest non-trivial mean-field approximation, in which the notion of site local moments (μ_i), regarded as the first effect of electron interactions, enters from the outset. In the AF case, the local moments are naturally ordered in an A/B 2-sublattice Néel state, with μ_i = ±|μ| for site i in the A/B sublattice respectively [18]. For the P phase by contrast, the local moments are randomly oriented: a site is equally likely to be A-type as B-type [19]. In either case the essential, and limiting, feature of UHF is that it is a static approximation, with solely elastic scattering of electrons and ω-independent interaction self-energies approximated by

  Σ̃⁰_Aσ = −½σU|μ|,   Σ̃⁰_Bσ = +½σU|μ|   (σ = ± for ↑/↓-spins)

For the AF phase, the UHF Green functions (G⁰_ii;σ ≡ G⁰_ασ with α = A or B) follow from Eqs (2.2, 2.3) for the BL as

  G⁰_Aσ(ω) = [ω̃ + ½σU|μ| − ½t*²G⁰_Bσ(ω)]⁻¹     (2.4)

with G⁰_Bσ obtained by |μ| → −|μ| (equivalently G⁰_Bσ = G⁰_A−σ); here the 'medium' self-energy part S⁰_Aσ = ½t*²G⁰_Bσ(ω) reflects the 2-sublattice structure of the Néel state. Eqs (2.4) are a closed set, with the UHF local moment |μ| = |μ₀| found self-consistently via the usual gap equation (see e.g. [20]), which may be written formally as

  |μ₀| = ∫₋∞⁰ dω̃ [D⁰_A↑(ω̃) − D⁰_A↓(ω̃)]     (2.5)

in terms of the corresponding spectral densities D⁰_ασ(ω) = −π⁻¹ sgn(ω̃) Im G⁰_ασ(ω). And the total Green function is given by

  G⁰(ω) = ½[G⁰_A↑(ω) + G⁰_A↓(ω)]     (2.6)

such that D⁰(ω) = −π⁻¹ sgn(ω̃) Im G⁰(ω) gives the total single-particle spectrum. For the P phase by contrast,

  G⁰_Aσ(ω) = [ω̃ + ½σU|μ| − ½t*²G⁰(ω)]⁻¹     (2.7)

with G⁰_Bσ again obtained by |μ| → −|μ|. The sole difference to Eq. (2.4) occurs in the medium self-energy (see Eq. (2.2b)), since the nearest neighbours to any site are equally likely to be A- or B-type sites. Eqs (2.6, 2.7) are a closed set for G⁰(ω) and the G⁰_ασ(ω) in the P phase; the UHF local moment is again found from Eq. (2.5).

For either phase there are two basic symmetries, viz

  D⁰_ασ(ω̃) = D⁰_ᾱ−σ(ω̃)  and  D⁰_ασ(ω̃) = D⁰_α−σ(−ω̃)

(↑/↓-sublattice symmetry and particle-hole symmetry at ½-filling) respectively; and note therefore from Eq. (2.6) that G⁰(ω) is naturally independent of the spin, σ.
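As a concrete illustration of how Eqs (2.4, 2.5) close, the following minimal numerical sketch (our own, with an assumed frequency grid and broadening η, not the authors' code) iterates the UHF equations for the Néel state on the Z → ∞ Bethe lattice.

```python
# Minimal numerical sketch (ours, not the authors' code) of the UHF
# self-consistency for the Neel state on the Z -> infinity Bethe lattice:
#   G0_As(w) = [w + s*U*|mu|/2 - (t*^2/2) G0_Bs(w)]^(-1),  G0_Bs = G0_A,-s
# (Eq. (2.4)), with |mu| fixed by Eq. (2.5). Grid and broadening are
# arbitrary illustrative choices.
import numpy as np

tstar, U, eta = 1.0, 4.0, 1e-2
w = np.linspace(-6.0, 6.0, 6001) + 1j * eta  # frequency relative to Fermi level
dw = 12.0 / 6000

mu = 0.9                                      # initial guess for |mu|
for _ in range(200):
    gA = np.zeros_like(w)                     # G0_{A,up}
    gB = np.zeros_like(w)                     # G0_{B,up} = G0_{A,down}
    for _ in range(500):                      # continued-fraction iteration
        gA, gB = (1.0 / (w + 0.5 * U * mu - 0.5 * tstar**2 * gB),
                  1.0 / (w - 0.5 * U * mu - 0.5 * tstar**2 * gA))
    dA_up = -gA.imag / np.pi
    dA_dn = -gB.imag / np.pi
    occupied = w.real < 0.0
    mu_new = np.sum((dA_up - dA_dn)[occupied]) * dw   # Eq. (2.5) on the grid
    if abs(mu_new - mu) < 1e-8:
        break
    mu = 0.5 * (mu + mu_new)                  # damped update

print(f"UHF local moment |mu| = {mu:.3f}; gap Delta = U|mu| = {U * mu:.3f}")
```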
A. Antiferromagnet
UHF for the AF has been widely studied since the early work of Penn [18]. Here we mention only that for any d > 1 the exact ground state of the ½-filled Hubbard model on a bipartite lattice is an AF insulator for all U > 0, and this is qualitatively well captured at UHF level: for all U > 0, the mean-field ground state is a 2-sublattice Néel AF, with a gap in the single-particle spectrum D⁰(ω) given by Δ(U) = U|μ|; Fig. 1 shows D⁰(ω) at U/t* = 4 for the d = ∞ BL.

The deficiencies of UHF are however most clearly seen in strong coupling, U → ∞, where for the AF the single-particle spectrum reduces to D⁰(ω) = ½[δ(ω̃ + U/2) + δ(ω̃ − U/2)], as for the atomic limit, t* = 0. The physical origin of this is simple: consider for example the upper Hubbard band in strong coupling, and imagine adding an ↑-spin electron to a site (B-type) already occupied by a ↓-spin. Since UHF is an independent (albeit interacting) electron approximation, only the added ↑-spin can potentially hop to nearest neighbour (NN) sites. But it cannot do so in the strong coupling limit, since for the AF all NNs to the ↓-spin B-site are ↑-spins (A-type). The added ↑-spin thus effectively 'sees' the ↓-spin site as an isolated site, hence the emergence of atomic limit behaviour as the strong coupling limit at UHF level.

But while physically transparent, this behaviour is wrong. In strong coupling, and for the 1-hole sector appropriate to the lower Hubbard band (or 1-doublon sector for the upper Hubbard band), the Hubbard model maps onto the t-J model [11]

  H_tJ = −t Σ_(ij),σ (ĉ†_iσ ĉ_jσ + h.c.) + J_∞ Σ_(ij) (S_i · S_j − ¼n_i n_j)     (2.9)

where the hole moves in a restricted subspace of no doubly occupied sites (ĉ†_iσ = c†_iσ(1 − n_i−σ)), and in the fluctuating spin background provided by the Heisenberg part of H_tJ, with NN exchange coupling J_∞ = 4t²/U. Although it is exact in the atomic limit, UHF by itself can evidently say essentially nothing about the strong coupling limit.

B. Paramagnet
UHF for the T = 0 paramagnetic phase warrants separate discussion, in part because of its very close relation to two other well known approaches. The first is that due to Hubbard [13], with 'spin-disorder scattering' only. This is often called the Hubbard III (HIII) approximation, and we refer to it thus here (noting that 'resonant broadening' contributions are additionally included in Ref. [13]). HIII is equivalent to UHF, but with a saturated local moment. Thus, with |μ| = 1, the resultant cubic equation for G⁰(ω) on the d = ∞ BL, obtained from Eqs (2.6, 2.7) above, coincides precisely with the HIII approximation for any U; see e.g. Eq. (34) of Ref. [21]. Although Hubbard's original formalism is very different, its physical content is that of a static approximation to an alloy analogy description [22]; a close relationship to UHF is thus to be expected.

The second connection is to the Falicov-Kimball (FK) model [14], a simplified version of the Hubbard model in which electrons of only one spin type are mobile, and which for d = ∞ is exactly soluble [23,24,25]. For the paramagnetic phase of the FK model the single-particle Green function reduces precisely to that of HIII for any U [22], i.e. to the above-mentioned cubic for G⁰(ω) on the d = ∞ BL; see e.g. Eqs (7.3) and (4.4) of Ref. [25].
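For completeness, the cubic referred to above follows in a few lines; the rearrangement below is our sketch of that standard step.

```latex
% Our sketch of the HIII cubic: set |\mu| = 1 in Eqs (2.6, 2.7) and write
% a = \tilde{\omega} - \tfrac{1}{2} t_{*}^{2} G^{0}(\omega). Then
G^{0} = \frac{1}{2}\left[ \frac{1}{a + U/2} + \frac{1}{a - U/2} \right]
      = \frac{a}{a^{2} - U^{2}/4}\,,
% and clearing denominators gives a cubic equation in G^{0}(\omega).
```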
For the P phase, Fig. 1 shows the UHF D⁰(ω) for U/t* = 4 on the d = ∞ BL, contrasted to its AF counterpart. Ordering or otherwise of the preformed local moments clearly has a significant effect on the spectra. In the AF ordered case, for example, the interior edges of the Hubbard bands have characteristic square-root divergences, while for the P phase all band edges vanish with square-root behaviour. More significantly, while the 2-sublattice structure of the Néel ordered state ensures a band gap Δ = U|μ| for all U > 0, the single-particle UHF gap vanishes in the P phase at a critical U_c ≃ 1.9t*, given by U_c|μ(U_c)| = √2 t* (or correspondingly U_c = √2 t* for HIII/FK), signalling an insulator-metal transition. UHF/HIII fails of course in the metallic phase, there being no well-defined Fermi surface or quasiparticles [22,26]. This is inevitable for any inherently static approximation with a frequency-independent self-energy Σ̃_ασ, since the essence of Fermi liquid behaviour is the inelasticity of electron scattering near the Fermi level, ω̃ = 0 [27].

However, even in the P insulating phase of interest here, UHF/HIII is deficient. As for the AF this is seen most clearly in strong coupling, U → ∞, where although the centres of the Hubbard bands are separated by U, each has a non-vanishing width. In the strong coupling P phase, and for the one-hole (doublon) sector corresponding to the lower (upper) Hubbard band, the Hubbard model maps onto the t-J model (Eq. (2.9)) in a random spin background; and the exact full bandwidth of either band is given for the d = ∞ BL by [28,29]

  W = 2√2 t*     (2.10a)

Note that this is also the single-particle bandwidth in the other extreme of the non-interacting limit, reflecting physically that in strong coupling the hole/doublon behaves essentially as a free particle [30]. In contrast, the strong coupling bandwidth at UHF/HIII level is

  W⁰ = W/√2 = 2t*     (2.10b)

UHF or HIII does not therefore give the exact strong coupling limit for the Hubbard model, contrary to what has been suggested recently [31]; it gives instead the strong coupling limit of the FK model [25]. The physical origin of Eq. (2.10b) is however both simple and revealing. Consider again the upper Hubbard band in strong coupling, and imagine adding an ↑-spin electron to a B-type ↓-spin site. Within a static approximation such as UHF/HIII, only the added ↑-spin can hop; and it can do so in the first instance to any of its ↓-spin NNs (B-type sites) only, for which the effective coordination number is Z_eff = ½Z. Since Z_eff for the propagating ↑-spin electron is reduced by a factor of 2 below the full coordination number, and since the bandwidth of the d = ∞ BL is proportional to √Z_eff, the strong coupling UHF/HIII width is thus diminished by √2 from the corresponding non-interacting value which, as in Eq. (2.10a) above, is also the exact strong coupling limit; Eq. (2.10b) thus results.

The distinction between Eqs (2.10a) and (2.10b) is however qualitative, and not solely a matter of degree, reflecting the need to take seriously, even in strong coupling, the correlated dynamics of the electrons. Whenever, say, an ↑-spin electron is added to a site occupied by a ↓-spin electron, the added ↑-spin can indeed propagate in the P phase, scattering elastically off successive neighbouring ↓-spins; and as sketched above this is well captured at UHF/HIII level. But, having added the ↑-spin to a ↓-spin site, the latter can itself clearly hop off the site, to a neighbouring ↑-spin site, leaving behind it a spin-flip on the original site.
The energy cost for the spin-flip is zero, since we are considering the P phase where, for d = ∞, a given spin is equally likely to be surrounded by ↑- or ↓-spins and has as many ↑- as ↓-spin neighbours (whence there is no 'exchange penalty' for a spin-flip). Thus, whether the added ↑-spin or the ↓-spin already present hops off the site, the initially created doublon propagates as a free particle [30]; Eq. (2.10a) thus results. To describe correctly the electron dynamics, both types of process above, and therefore the interference between them, must be included. A static approximation such as UHF/HIII cannot handle this, since such dynamics reside in the frequency dependence of the full interaction self-energy Σ̃_iσ(ω), as considered in §3.

2.2 RPA
In contrast to single-particle spectra, which probe states one hole or particle away from ½-filling, RPA probes fluctuations about the mean-field state, and thus excitations of the ½-filled state itself. For the insulating phases, with a gap to charge excitations, transverse spin excitations are of lowest energy. These are reflected in the transverse spin polarization propagators, given within RPA by

  Π⁺⁻(ω) = ⁰Π⁺⁻(ω) [1 − U ⁰Π⁺⁻(ω)]⁻¹     (2.11a)

(understood as a matrix in the site indices), where ⁰Π⁺⁻_ij is the pure UHF transverse spin polarization bubble. Eq. (2.11a) leads directly to a familiar diagrammatic 'bubble sum'. Alternatively, since the interaction is solely on-site, this may be recast as a 'ladder sum' of repeated particle-hole interactions in the transverse spin channel, as shown in Fig. 2; bare UHF propagators are denoted by solid lines, and the on-site interactions (conserving spin at each vertex end) by wiggly lines. From the basic symmetries it follows that the site-diagonal element for d = ∞ may be written as

  Π⁺⁻_ii(ω) = ⁰Π⁺⁻_ii(ω) / [1 − U ⁰Π⁺⁻_ii(ω)]     (2.11b)

The spectral density of transverse spin excitations is naturally reflected in Im Π⁺⁻_ii(ω), as now considered for the AF and P phases.

A. AF phase
For the AF, Fig. 3a shows Im Π⁺⁻_AA(ω) at U/t* = 4 for the d = ∞ Bethe lattice. Two distinct features are apparent: a low frequency spin-flip pole (discussed below), and a high energy Stoner-like band. The latter consists simply of weakly renormalized Hartree-Fock excitations across the gap in the mean-field single-particle spectrum. Spectral density for the Stoner bands does not therefore begin until precisely |ω| = U|μ| (see Fig. 1), and their maximum density occurs for |ω| ≃ U. This is as found also for finite d; see e.g. Ref. [32].

The central feature in Fig. 3a is a low-ω pole at ω_p, located via Eq. (2.11b) from U ⁰Π⁺⁻_ii(ω_p) = 1, and occurring for all U > 0 (Fig. 3a, inset). This is the sole remnant, for d = ∞, of the spin-wave-like component of the transverse spin spectrum studied recently at RPA level [32], which, for finite d and any U > 0, is naturally gapless. Physically, the single spin-flip pole at ω_p reflects the general suppression of spatial fluctuations for d = ∞: it corresponds simply to the energy cost of flipping a spin in the AF background. This is particularly clear in strong coupling, where the Stoner bands are eliminated entirely. Here, as is well known [20,33], the RPA transverse spin spectrum reduces (for any d) to the linear spin wave spectrum of the nearest neighbour AF Heisenberg model, with exchange coupling J_∞ = 4t²/U = 2t*²/(ZU), onto which the ½-filled Hubbard model maps rigorously. And for d = ∞ it is straightforward to show that the resultant linear spin wave spectrum collapses to an Ising-like spin-flip pole at ω_p^∞ = ZJ_∞/2 = t*²/U; a sketch of the underlying energy count is given below.
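The collapse to a single pole can be checked by an elementary energy count; the short calculation below is our illustration of the result just quoted.

```latex
% Our illustration of the quoted result: in the Neel state of
% H = J_\infty \sum_{(ij)} \mathbf{S}_i \cdot \mathbf{S}_j, flipping one
% spin turns each of its Z Ising bonds from -J_\infty/4 into +J_\infty/4, so
\omega_{p}^{\infty}
  = Z\left[ \frac{J_{\infty}}{4} - \left(-\frac{J_{\infty}}{4}\right) \right]
  = \frac{Z J_{\infty}}{2}
  = \frac{t_{*}^{2}}{U}\,,
\qquad J_{\infty} = \frac{2 t_{*}^{2}}{Z U}\,.
```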
Further, since linear spin wave theory for the Heisenberg model is exact for d = ∞ [34], it follows that in strong coupling UHF+RPA gives the exact spin excitation spectrum of the ½-filled Hubbard model. The occurrence of the single ω_p-pole is robust to further renormalization of particle-hole lines in Π⁺⁻_ii(ω), as discussed in §3.3. We stress further that to capture it requires the full ladder sum of repeated p-h interactions shown in Fig. 2: retention solely of the 'bare' polarization bubble diagram will clearly not suffice.

The necessity of including the AF spin-flip scale will be evident when discussing the T = 0 single-particle spectra, §§3.2, 3.3. Here, we illustrate briefly its importance at finite temperature, as reflected in the Néel temperature T_N(U). Molecular field theory is exact for the Heisenberg model in d = ∞ [35]; thus, in strong coupling, T_N = ¼ZJ_∞ = ½ω_p^∞, and one expects T_N(U) ≃ ½ω_p(U) more generally to yield a good estimate of the Néel temperature in a U-regime where thermal properties are dominated by the low-lying spin-flip excitations. Jarrell and Pruschke [36,37] have obtained the finite-T phase diagram for the d = ∞ hypercubic lattice via quantum Monte Carlo (QMC). The thermal paramagnetic phase above T_N(U) is found to be metallic for U/t* ≲ 3 and insulating for U/t* ≳ 3 (with a small 'crossover' regime); it is thus in the latter region that we expect T_N ≃ ½ω_p. This is borne out. Fig. 4 shows the QMC T_N(U) for the d = ∞ hypercubic lattice, together with the corresponding ½ω_p(U) and the exact strong coupling asymptote T_N = ½ω_p^∞. The QMC Néel temperature is indeed well described by ½ω_p(U) down to U/t* ∼ 3.

B. P phase
For the T = 0 P phase, Fig. 3b shows Im Π⁺⁻_AA(ω) at U/t* = 4 for the d = ∞ Bethe lattice. Compared to its AF counterpart (Fig. 3a) the key difference is that the spin-flip pole occurs at ω = 0, reflecting the fact that the energy cost for a spin-flip is zero in the paramagnetic insulator, as argued physically in §2.1B. The formal origin of this at RPA level is seen readily by noting that the bare transverse spin polarization bubble (Fig. 2, diagram (a)) is given by

  ⁰Π⁺⁻_AA(ω) = i ∫ (dω₁/2π) G⁰_A↑(ω₁) G⁰_A↓(ω₁ − ω)     (2.13)

Hence, using the spectral representation of G⁰_Aσ(ω), ⁰Π⁺⁻_AA(ω = 0) may be expressed in terms of the occupied and unoccupied spectral densities D⁰_Aσ (Eq. (2.14)). Since the UHF local moment |μ| = |μ₀| is given by Eq. (2.5), ⁰Π⁺⁻_AA(ω = 0) = 1/U; and thus from Eq. (2.11b) the RPA Π⁺⁻_AA(ω) has a spin-flip pole at ω = 0. Note again, as for the AF case, that the full ladder sum of particle-hole interactions in the transverse spin channel is required to capture the ω = 0 spin-flip pole. Further, although we have shown its existence explicitly within RPA, the occurrence of the zero-frequency spin-flip scale is naturally a general feature of the d = ∞ paramagnetic insulating phase where, locally, the ground state is a doubly degenerate local moment (as for the single-impurity Anderson model embedded in an insulating host) [5].

For both phases, the evident virtues of the RPA for excitations of the ½-filled state contrast sharply with the deficiencies of the single-particle spectra at UHF level, §2.1. This itself hints at what is necessary to describe the single-particle spectra successfully: single-particle processes must be coupled dynamically to the transverse spin excitations, reflected in the frequency dependence of the self-energy. This is now considered.

3. GREEN FUNCTIONS
It is helpful to separate the full interaction self-energies Σ̃_ασ(ω) as

  Σ̃_ασ(ω) = Σ̃⁰_ασ + Σ_ασ(ω)     (3.1)

where Σ_ασ(ω) (α = A or B) excludes the first-order UHF-type contribution Σ̃⁰_ασ, and contains the dynamics on which we want to focus.
From Eqs (2.2, 2.3) for the Bethe lattice, the exact site-diagonal Green functions are thus given formally by

  G_ασ(ω) = [ω̃ − Σ̃_ασ(ω) − S_ασ(ω)]⁻¹     (3.2a)

Here, the medium self-energy is given for the AF and P phases by

  S_ασ(ω) = ½t*² G_ᾱσ(ω)  (AF);   S_ασ(ω) = ¼t*² [G_ασ(ω) + G_ᾱσ(ω)]  (P)     (3.2b)

where the site index ᾱ = B or A for α = A or B respectively; and

  G(ω) = ½[G_ασ(ω) + G_ᾱσ(ω)]     (3.3)

is the total Green function.

As at UHF level, ↑/↓-spin symmetry and particle-hole symmetry for the corresponding spectral densities imply

  D_ασ(ω̃) = D_ᾱ−σ(ω̃)  and  D_ασ(ω̃) = D_α−σ(−ω̃)     (3.4)

respectively. For the associated Green functions G_ασ(ω) = G⁺_ασ(ω) + G⁻_ασ(ω), a Hilbert transform of Eqs (3.4) gives directly

  G_ασ(ω) = G_ᾱ−σ(ω)     (3.5a)
  G^±_ασ(ω̃) = −[G^∓_α−σ(−ω̃)]*     (3.5b)

Thus, from Eq. (3.3), G(ω̃) = −[G(−ω̃)]*, and likewise for the Σ_ασ's,

  Σ_ασ(ω) = Σ_ᾱ−σ(ω),   Σ^±_ασ(ω̃) = −[Σ^∓_α−σ(−ω̃)]*     (3.6)

Eqs (3.5a) with (3.3) show also that G(ω) is correctly independent of spin. The symmetries reflected in Eqs (3.5, 3.6) play an important role in the following analysis. For the P phase, note also the physical interpretation of Eq. (3.3) for G(ω): viewing the paramagnet in terms of randomly oriented local moments, where a site is equally likely to be A-type as B-type, we can consider Eq. (3.3) as a configurationally averaged Green function. This is a natural alloy analogy interpretation but, unlike the static approximation to such inherent in UHF or HIII, it is formally exact, since no approximation to the interaction self-energies has thus far been made.

3.1 Self-consistent renormalization
Our aim now is to develop a specific approximation to the self-energy which in particular (a) becomes exact in strong coupling, ensuring thereby a controlled limit; and (b) is constructed in renormalized form, enabling a self-consistent solution for the single-particle Green functions.

A relevant diagram contributing to Σ_iσ is shown in Fig. 5, employing the same diagrammatic notation as Fig. 2. Using deliberately a strong coupling terminology, its physical interpretation is as follows (with t > 0 for convenience): at t = 0 a (σ =) ↑-spin electron, say, is added to site i, thus creating a 'doublon'; at t₁ > 0 the (−σ =) ↓-spin electron already present on site i hops from i to j, and at t₂ > t₁ an ↑-spin hops from j to k; the entire path is then retraced. The diagram thus describes motion of the doublon (or hole for t < 0) from i → j → k via a correlated sequence of alternating spin hops, creating behind it a string of flipped spins. All ladder interactions of the resultant on-site particle-hole pair, which reflect the on-site spin-flip created by motion of the doublon/hole, are shown explicitly for site i in Fig. 5; from which it is seen that their sum is exactly Π⁺⁻_ii(ω), the RPA transverse spin propagator discussed in §2.2 (cf. Fig. 2).

It is precisely correlated dynamics of the sort exemplified by Fig. 5 that we seek to include and generalize in the frequency-dependent Σ_ασ(ω). To this end we first define an undressed (or self-consistent host) Green function by removing the local dynamical self-energy from the full propagator,

  𝒢_ii;σ(ω) = [G_ii;σ(ω)⁻¹ + Σ_iσ(ω)]⁻¹     (3.7)

This is shown diagrammatically in Fig. 6(a), as obtained simply from Eq. (3.7) using the Dyson equation for the full Green function G_ii;σ expressed in terms of the UHF propagators and self-energy insertions. As seen from the figure, the implicit sum over intermediate sites j, k etc. is thus restricted to exclude site i itself (unlike the full G_ii;σ, where the site sums are free). While including all interactions on sites j ≠ i, 𝒢_ii;σ thus excludes all interactions on site i beyond the simple first-order UHF contribution to Σ̃_iσ. The latter is of course subsumed into the UHF Green functions (as in §2.1), which here constitute the 'bare' propagators; and in this important sense the above definition, Eq. (3.7), of the host propagator differs from that of e.g. Refs [21,36,37,38,39,40] (which would be recovered if we were to set the site local moment |μ| = 0).
To generalize the processes contained in Fig. 5, we renormalize the self-energy as shown in Fig. 6(b), replacing the −σ-spin particle lines connecting the starred vertices i in Fig. 5 by the self-consistent host Green function 𝒢_ii;−σ; the infinite set of diagrams thus retained in Σ_iσ follows simply by direct iteration of Fig. 6(b) using Fig. 6(a) for 𝒢_ii;−σ. This renormalization is adopted for the following reasons. (i) It ensures that an on-site spin-flip occurs only when the doublon/hole hops off a site, and that its outward path is self-avoiding. With reference to Fig. 5 for example, site j ≠ i is guaranteed, likewise k ≠ j; while terms with k = i vanish for d = ∞, being at least O(1/d) since G⁰_ij;σ ∼ O(d^(−m/2)) for sites i and j m-th nearest neighbours. (ii) In addition, the resultant site restrictions obviate the need to include a class of partially cancelling exchange diagrams, as illustrated simply in Fig. 7 (where a sum over j ≠ i is implicit). Since j ≠ i is guaranteed, the exchange diagram (Fig. 7(b)) is at least O(1/d) and thus vanishes for d = ∞, while the 'direct' diagram (Fig. 7(a)) is O(1). If, however, j = i were included in the direct diagram, its exchange counterpart would also be O(1) and would thus need to be retained. Our basic approximation to Σ_iσ(ω) is thus Fig. 6(b), namely Eq. (3.8a). From Eq. (3.8a), Σ_A↑(ω) is thus a functional of the Green functions, and the basic equations for them (Eqs (3.2)) must therefore be solved self-consistently, as now described. Since, as we now show, the t-J limit emerges correctly in strong coupling, the present theory is asymptotically exact. Consider for example the P phase, noting that for the U = 0 non-interacting limit the Green function G⁰(ω) = ReG⁰(ω) − iπ sgn(ω)D⁰(ω) takes the standard semi-elliptic form: the non-interacting spectrum is a semi-ellipse with full width 2√2 t_*. From Eq. (3.15b) this is also precisely the spectral density for G_A↑(ω) in the lower Hubbard band. And since G(ω ≃ 0) = ½G_A↑(ω) as above, the total lower Hubbard band spectrum in strong coupling is D_L(ω) = ½D⁰(ω), see also §2.1b (the normalization factor of ½ naturally reflects the fact that the remaining half of the single-particle spectrum occurs in the upper Hubbard band centred on ω = U, viz D_U(ω) = ½D⁰(U − ω)). Note further that the Feenberg ('medium') and interaction self-energies contribute equally to the ½t_*²G_A↑(ω) denominator in Eq. (3.15b) for the P phase. Physically, this reflects the fact discussed in §2.1b that, upon adding a σ-spin electron to a site, it is equally probable for either the added σ-spin or the −σ-spin electron already present to hop off the site. At UHF/HIII level, in contrast, only the former can by construction occur (Σ_ασ = 0): the analogue of Eq. (3.15b) then carries only the ¼t_*² Feenberg contribution in its denominator, producing an incorrect strong coupling bandwidth of 2t_*, as argued physically in §2.1b. The AF case itself is discussed further in the following section since, in contrast to the P phase, the approach to strong coupling is subtle and physically revealing. Here we simply add that (i) in contrast to the P phase, the ½t_*²G_A↑(ω + ω_p) denominator in Eq. (3.15a) for the AF stems solely from the interaction self-energy Σ_A↑(ω); thus at UHF/HIII level atomic limit behaviour arises (incorrectly), viz G⁰_A↑(ω) = 1/ω, as argued physically in §2.1a. (ii) Although obtained explicitly for the Bethe lattice, Eq. (3.15a) holds equally for the hypercubic lattice in strong coupling.
This is because retraceable paths, which by construction are the only self-energy paths for a Bethe lattice, are for the d = ∞ hypercube also the only paths which restore the Néel spin configuration; see also [29].

Results

At finite U the basic self-consistency equations, (3.2) and (3.8a), are solved numerically. We consider first the AF phase; Fig. 8 shows the resulting lower Hubbard band spectrum for U/t_* = 10. For the same ω_p-value (Fig. 3a, inset), Fig. 8 shows also the corresponding t-J_z limit spectrum from Eq. (3.15a). As is well known [11] the t-J_z spectrum is discrete (and, to illustrate relative intensities, is thus shown with height proportional to integrated weight). Physically, this reflects the fact that the hole is pinned by the string of spin-flips its motion creates, leading therefore to spatially localized single-particle excitations and hence a discrete spectrum; mathematically, it is reflected in convergence of the continued fraction implicit in iteration of Eq. (3.15a). Although the U/t_* = 10 spectrum evidently bears a close resemblance to its t-J_z counterpart, it is by contrast continuous. This persists for any finite U: with increasing interaction strength the individual sub-bands in D_L(ω) centre ever more closely on their t-J_z counterparts, and their integrated spectral weights tend to those of the t-J_z limit; but they retain a finite width, reflecting delocalization of the hole. The peculiarities of U → ∞ are further evident in the t-J_z model itself, Eq. (3.15a). For any ω_p > 0 the t-J_z spectrum is discrete, while for ω_p = 0 (as in Eq. (3.15b) for the P phase) the spectrum is continuous: the point ω_p = 0 thus corresponds to a transition from localized to extended single-particle excitations, and since ω_p → t_*²/U as U → ∞ it is clear that U = ∞ is a singular point. While the physical mechanism leading to delocalization of the hole at any finite U is not of course inherent in the t-J_z model Eq. (3.15a) itself, it is readily inferred. Consider the Néel spin configuration and imagine removing, say, an ↑-spin electron from an A-type site, i. The nearest neighbours (NN) to any ↑-spin site are all ↓-spins. Hence to leading order in U (the t-J_z limit) the hole initially moves via a NN ↓-spin electron hopping onto site i, creating thereon a spin-flip (with an associated exchange energy penalty); and the subsequent motion of the hole via such a correlated sequence of alternating NN spin hops, in leaving behind a string of upturned spins, would by itself render the hole spatially confined. At large but finite U there is however a small but nonvanishing probability amplitude, of order t_*²/U, for an ↑-spin electron on a second NN site, also A-type, to hop to site i via an intervening ↓-spin site: the hole thus moves two lattice spacings, to the second NN A-type site. Unlike the "t-J_z processes" above, this does not entail a spin-flip with concomitant exchange penalty: the hole moves freely. This mechanism evidently leads to hole delocalization and, in tandem with the t-J_z processes, produces the strong coupling spectrum. Its formal origins reside in the passage from Eq. (3.10a) to Eq. (3.15a) for the AF lower Hubbard band in strong coupling, where the Feenberg part of the self-energy, S_A↑(ω) = ½t_*²G_B↑(ω), was neglected. As seen readily from the asymptotics of §3.2, the leading corrections to ImS_A↑(ω ≈ 0) are ImS_A↑(ω) = (t_*²/2U)² ImG_A↑(ω).
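The localized/extended dichotomy of Eq. (3.15a) just described is easily made concrete numerically. The following is a minimal sketch, assuming the forms inferred above, G_A↑(ω) = [ω − ½t_*²G_A↑(ω + ω_p)]⁻¹ for the AF (t-J_z) limit, with ω_p = 0 recovering the semi-elliptic P-phase band of Eq. (3.15b); the broadening η, the truncation depth and the parameter values are illustrative choices, not taken from the paper, and overall energy shifts are schematic.

```python
import numpy as np

def lower_band_green(w, t_star=1.0, omega_p=0.1, eta=5e-3, depth=400):
    """Unwind the continued fraction implicit in Eq. (3.15a),
        G(w) = 1 / (w - (t_*^2/2) G(w + omega_p)),
    downward from a truncation depth; each level adds omega_p, the string
    energy per flipped spin.  For omega_p > 0 the weight collapses onto
    discrete t-J_z 'string' poles (broadened here by eta); for omega_p = 0
    it is the semi-ellipse of full width 2*sqrt(2)*t_* (Eq. (3.15b))."""
    z = w + 1j * eta
    g = np.zeros_like(z)                   # terminate the fraction at depth
    for n in range(depth, 0, -1):          # g becomes G(w + n*omega_p)
        g = 1.0 / (z + n * omega_p - 0.5 * t_star**2 * g)
    return 1.0 / (z - 0.5 * t_star**2 * g)

w = np.linspace(-2.5, 2.5, 4001)
D_tJz = -lower_band_green(w, omega_p=0.1).imag / np.pi  # discrete-like lines
D_P   = -lower_band_green(w, omega_p=0.0).imag / np.pi  # semi-ellipse
```

As ω_p → 0 the sharp sub-bands merge smoothly into a continuum, illustrating why U = ∞ (where ω_p → t_*²/U → 0) is a singular point.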
It is precisely these corrections that embody the delocalization described above, and they lead to spectral broadening (contributions to ReS_A↑(ω) are O(1/U) and lead simply to residual energy shifts). Further, note that since the energetic width of the spectral broadening is naturally the smallest energy scale in strong coupling, the principal effect on the 'bare' t-J_z spectrum is a small resonant broadening of the individual t-J_z lines. This is seen in Fig. 8, and becomes clearer still with further increasing U. To our knowledge, the above mechanism is the only one which can lead to hole delocalization for the d = ∞ AF in strong coupling; and for the reasons already given in §3.2 it applies to the hypercubic as well as the Bethe lattice. In finite d it is for example well known that Trugman paths [42] lead to hole delocalization for the hypercubic lattice, but such processes are O(d⁻⁴) and do not therefore contribute in d = ∞ [29].

As U is decreased, the spectra continue to exhibit essentially strong coupling behaviour down to modest interaction strengths of U/t_* ∼ 2–3, and can thus be understood quantitatively starting from the t-J_z limit. This is shown in Ref. [7] (see e.g. Fig. 3(b) therein). With further decreasing U, however, the spectra evolve continuously to a weak coupling form that shows no trace of remnant t-J_z-like behaviour. The spectral gap closes only in the non-interacting limit whence, correctly, the system is an AF insulator for all U > 0. The full spectrum D(ω) = D_L + D_U is shown in Fig. 9 for U/t_* = 1, together with the corresponding UHF spectrum to which, as one expects, it is qualitatively close, although the single-particle gap ∆_g is reduced to 0.42 of the UHF gap ∆ = U|µ₀|.

Two further renormalizations have been performed to check the robustness of the above results. First, note that although the Green functions have been obtained self-consistently via Eqs (3.2,3.8), the single-particle propagators occurring in the RPA Π⁻⁺_AA that enters the self-energy kernel Eq. (3.8a) are themselves bare UHF propagators; see Fig. 2. To ensure the theory is robust, we have thus additionally renormalized the single-particle lines entering Π⁻⁺_AA in terms of both the (self-consistent) full Green functions G_ασ and the host Green functions 𝒢_ασ. The results in either case differ only quantitatively, and at low U, from those just described; see also below. The second renormalization concerns the local moment |µ| which, in the calculations above, has been set to its UHF value |µ₀|. In weak coupling, van Dongen [43] has examined perturbatively the Néel temperature and the moment magnitude |µ| (the order parameter) for the d = ∞ hypercubic lattice, and has shown that even for U → 0+ these are reduced by a factor q of order unity (q ≃ 0.28 [43]) below their corresponding UHF values. The present theory is not of course perturbative (e.g. the emergence of the AF spin-flip scale is intrinsically non-perturbative), but it is certainly closer in spirit to van Dongen's analysis to renormalize the moment beyond UHF level. This is quantitatively important at low U, and is achieved by requiring that |µ| be determined fully self-consistently via Eq. (3.17) (cf Eq. (2.5)), where D_Aσ is the full (as opposed to UHF) spectral density. For illustration Fig. 9 shows the Bethe lattice spectrum at U/t_* = 1, obtained with both |µ| and Π⁻⁺_AA renormalized (the latter in terms of the full Green functions).
The gap ∆_g is further diminished, the ratio g = ∆_g/∆ being ∼ 0.15; while the local moment |µ| is likewise reduced below its UHF counterpart, such that m = |µ|/|µ₀| ∼ 0.39. It is unfortunately not feasible to obtain numerically accurate estimates of g and m as U → 0 (since |µ| and ∆ rapidly become exponentially small). But for U/t_* = 1 the UHF moment itself is accurately represented by its asymptotic U → 0 limit, so the above result for m may be reasonably close to its limiting value.

B. Paramagnet

To obtain correctly the strong coupling limit is, as has been shown, fairly subtle; but in contrast to the AF, the approach to strong coupling for the paramagnetic phase is not. Fig. 10 shows the full spectrum D(ω̃) = −π⁻¹sgn(ω̃)ImG(ω̃) (ω̃ = ω − U/2) for the P phase at U/t_* = 8, 6 and 4, compared to the strong coupling t-J_z limit from Eq. (3.15b). For U/t_* = 8 the strong coupling limit has in practical terms been reached: the Hubbard bands are essentially symmetrically centred on ω̃ = ±U/2 respectively, with widths W ∼ W_∞ = 2√2 t_* and a band gap of ∆_g ∼ ∆_g^∞ = U − 2√2 t_*; even for U/t_* = 6 the departure from the asymptotic spectrum is relatively minor. With further decreasing U, however, the individual bands become increasingly asymmetric; and the gap tends to zero more rapidly than ∆_g^∞, signalling the collapse of the insulating phase. This we now discuss, adding that throughout the insulating regime the local moments are well developed (|µ| ≳ 0.95), as in Mott's conception of a Mott insulator [44].

DESTRUCTION OF THE MOTT INSULATOR

Fig. 11 shows the resultant band gap, ∆_g(U), for the paramagnetic insulator as a function of U/t_*. ∆_g(U) is found to vanish continuously at a critical U_c = 3.41t_*. Detailed numerical analysis shows the corresponding exponent to be unity, ∆_g(U) ∼ (U − U_c) (Eq. (4.1)); and we note that the width of the critical regime is quite narrow: the behaviour Eq. (4.1) is seen clearly for (U − U_c) ≲ 0.05t_*, corresponding to gaps ∆_g(U) ≲ 0.1t_*.

The continuous closure of the gap is intimately connected to the divergence of low-frequency dynamical characteristics of the system. Consider first the self-energy Σ̃_A↑(ω̃). At frequencies ω̃ ∈ [−ω₊, ω₊] inside the spectral gap (∆_g = 2ω₊), Σ̃_A↑(ω̃) ≡ Σ̃^R_A↑(ω̃) is pure real, with a leading low-ω̃ expansion Σ̃^R_A↑(ω̃) = A + Bω̃ + ... (Eq. (4.2)). Here, A ≡ Σ̃_A↑(ω̃ = 0) (= −½U|µ| + Σ_A↑(0), see Eqs (3.1), (3.8)), and is finite for all U > U_c (see also below). We wish to find the behaviour of B = −|B| as U → U_c. This is obtained by a scaling analysis. Defining y = ω̃/ω₊, it is found that as the gap closes (ω₊ → 0), Σ̃^R_A↑(ω̃) − A obeys the scaling form ω₊^α f(y) (Eq. (4.3)) with exponent α = ½; i.e. for different values of U close to U_c, with correspondingly different gaps ∆_g(U) = 2ω₊(U), the ω̃-dependent functions [Σ̃^R_A↑(ω̃) − A]/ω₊^(1/2), plotted in terms of y = ω̃/ω₊, collapse to a 'universal' function f(y). Four points should be noted about the scaling behaviour. (i) Good scaling is found in practice for gaps ∆_g ≲ 0.1t_*, consistent with the critical regime found above for closure of the gap. (ii) The scaling is not confined to frequencies y ≪ 1 well inside the spectral gap, but encompasses the region of non-zero spectral density (|y| > 1), certainly up to |y| ∼ 2. (Similar scaling with α = ½ naturally occurs for ImΣ̃_A↑(ω̃), as follows from Kramers-Kronig; see also below.) (iii) In numerical terms the scaling analysis is sufficiently accurate to distinguish readily between an exponent of α = ½ and, e.g., α = ⅓.
(iv) The scaling function f(y) is a finite, well behaved function of y = ω̃/ω₊, with f(y) ∼ y for y → 0, as is evident from Eq. (4.2). From Eqs (4.3) and (4.2) it follows immediately that |B| ∼ ω₊^(−1/2) (Eq. (4.4)), which diverges as the gap closes.

The divergence of |B| controls additionally the low frequency behaviour of ReG(ω̃) = X(ω̃). From Eq. (3.5c), X(ω̃) = −X(−ω̃), whence its leading low-ω̃ behaviour is X(ω̃) ≃ γ₁ω̃ : ω̃ → 0 (Eq. (4.5)) (with γ₁ = −|γ₁|). From Eqs (3.1–3) and (3.6b), G(ω̃) may be written generally in the form Eq. (4.6). Using Eqs (4.2) and (4.5) on either side of Eq. (4.6) enables |γ₁| to be related to |B|; the result is Eq. (4.8). We find that A² > ½t_*² for all U ≥ U_c, whence the divergence of |B| as U → U_c controls that of |γ₁|, Eq. (4.9). This is further confirmed by a scaling analysis of X(ω̃) itself: in direct analogy to that for Σ̃_A↑(ω̃) above, X(ω̃) is found to satisfy the scaling form Eq. (4.10), which leads again to Eq. (4.9).

It is instructive to compare the above results with those obtained both from the simple HIII approximation discussed in §2.1B, and with the resonance broadening contributions [13] additionally included, which we refer to as HIII′. For HIII, A = −½U and |B| = 0: the approximation is purely static. Eq. (4.6) becomes a cubic for G(ω̃), leading as is well known to ∆_g ∼ (U − U_c)^(3/2) [13]. As is clear from Eq. (4.8) with |B| = 0, the transition occurs when A²(U_c) = ½t_*², i.e. U_c = √2 t_*. The HIII′ approximation can also be shown to be of the form Eq. (4.6), but with an ω̃-dependent Σ̃_A↑(ω̃) given by Σ̃_A↑(ω̃) = −U/2 + t_*²G(ω̃) at low frequencies (which is sufficient to analyze the critical behaviour); so that A = −U/2 and |B| = t_*²|γ₁|. Since Σ̃_A↑(ω̃) is a simple linear function of G(ω̃), Eq. (4.6) again becomes a cubic for G(ω̃); and, as for HIII, the gap exponent is ν = 3/2 [13]. Eq. (4.8) with |B| = t_*²|γ₁| yields |γ₁| = (A² − ½t_*²)/(A² − (3/2)t_*²). The transition thus occurs when A²(U_c) = (3/2)t_*², i.e. U_c = √6 t_* as is well known [13]; and, again, ∆_g ∼ (U − U_c)^(3/2). Both HIII and HIII′ are thus in the same universality class, reflected more generally in the fact that in either case scaling of the form Eq. (4.10) can be shown to hold, but with an exponent of α = ⅓. Gros [45] has recently extended Hubbard's hierarchical equation of motion decoupling scheme to higher order. The critical exponents are unchanged from those of HIII/HIII′; and the value of U_c itself is barely changed from its HIII′ value of U_c/t_* ≃ 2.45. From the above discussion it is apparent that the present theory belongs to a different universality class from that of HIII or its extensions.

In direct analogy to the AF phase discussed in §3.3, we have tested the robustness of our results by further self-consistently renormalizing single-particle lines in the polarization propagator Π⁻⁺_AA. For ω = 0 in particular, Eq. (2.14) likewise holds, but with the bare D⁰_Aσ(ω) replaced by the renormalized spectral densities D_Aσ(ω) = −π⁻¹sgn(ω)ImG_Aσ(ω); i.e. Eq. (4.12). As discussed in §§2,3 the key feature of the paramagnetic insulator is the zero-frequency spin-flip scale. To preserve this, the local moment |µ| in Eq. (4.12) is itself renormalized, to ensure that at each step of the self-consistent iteration scheme U⁰Π⁻⁺_AA(ω = 0) = 1 (and we note that throughout the entire insulating regime the resultant moment |µ| is also self-consistent in the sense of Eq. (3.17) to < 1% accuracy). The results of this further renormalization are found to differ negligibly from those we have reported above.
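Operationally, the scaling analysis invoked above for Eqs (4.3) and (4.10) amounts to rescaling curves obtained at different U and testing for collapse. The following is a minimal sketch on synthetic curves constructed to obey the α = ½ form; f(y) below is an arbitrary stand-in with f(y) ∼ y at small y, and none of the numbers are the paper's data.

```python
import numpy as np

def collapse_spread(curves, alpha):
    """curves: (omega, sigma_minus_A, omega_plus) triples, one per U near U_c.
    Rescale each curve as (Sigma - A)/omega_plus**alpha versus
    y = omega/omega_plus on a common y-grid and return the mean spread
    across curves; the correct exponent gives a spread near zero."""
    y = np.linspace(-2.0, 2.0, 201)
    fs = np.array([np.interp(y, w / wp, s / wp**alpha) for w, s, wp in curves])
    return fs.std(axis=0).mean()

f = lambda y: y / np.sqrt(1.0 + y**2)        # stand-in scaling function
curves = []
for wp in (0.01, 0.02, 0.05):                # gaps Delta_g = 2*omega_plus
    w = np.linspace(-2.0 * wp, 2.0 * wp, 401)
    curves.append((w, np.sqrt(wp) * f(w / wp), wp))

print(collapse_spread(curves, alpha=0.5))    # ~0: curves collapse
print(collapse_spread(curves, alpha=1/3))    # clearly non-zero: no collapse
```

A sufficiently sharp difference between the two spreads is what underlies the statement, point (iii) above, that α = ½ is readily distinguished from α = ⅓.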
Finally, to demonstrate the importance of the ω_s = 0 spin-flip scale, we have eliminated it, both by (a) neglecting its contribution to Σ_A↑(ω) in Eq. (3.8b), retaining only Σ^Stoner_A↑(ω); and (b) replacing Π⁻⁺_AA by ⁰Π⁻⁺_AA in the self-energy kernel Eq. (3.8a). Results obtained from (a) and (b) are very similar, but differ qualitatively from those reported above. In particular, although the self-energy remains ω-dependent, the resultant critical behaviour is found to be that of HIII/HIII′: the gap closes continuously, but with an exponent ν = 3/2. This points clearly to the necessity of including the ω_s = 0 spin-flip scale throughout the entire insulating phase: not only in achieving the correct strong coupling limit (as in §3.2), but also in describing the destruction of the insulating state.

DISCUSSION

We now discuss the present work, particularly in relation to the iterated perturbation theory (IPT) approach [21,38,39,40,46], use and application of which has been extensive [5]. Although our theory of the Mott-Hubbard insulating phases, with its explicit emphasis on local moments, is conceptually and technically distinct from IPT, some general points of marked contrast are evident. For the antiferromagnetic phase we have emphasized the importance of the ω_p spin-flip scale, inclusion of which is necessary to obtain even qualitatively reasonable results throughout essentially the entire range of interaction strengths, and in particular to recover exact strong coupling asymptotics. However, IPT does not appear to capture the AF spin-flip scale, presumably because it omits repeated particle-hole interactions of the sort shown in Fig. 2 (which, as in §2.2A, are required to pick up the spin-flip). This is seen, for example, from the known inability of IPT to describe correctly the U-dependence of the Néel temperature [5], particularly in the 'Heisenberg' regime. For the paramagnetic insulator, the results of §4 also disagree qualitatively with those obtained from IPT; see in particular [40] and the review [5]. Within IPT the paramagnetic insulating solution is found to break down discontinuously (at a critical U_c1 = 3.67t_*, where the IPT gap ∆_g(U_c1) ∼ 0.3t_*), and |γ₁| (Eq. (4.10)) remains finite at the transition. The same authors [12] have recently examined the insulator via exact diagonalization (ED) on clusters of n_s = 3, 5 and 7 sites, extrapolated to n_s → ∞ assuming 1/n_s scaling behaviour. The resultant data suggest a continuous closure of the gap at U_c1/t_* = 3.04 ± 0.35 and are consistent with ∆_g(U) ∼ (U − U_c); see also [5]. Further, and independently of the gap analysis, the behaviour of |γ₁| has also been examined by ED [12], noting (see Eq. (4.10)) that a divergence in |γ₁| implies a continuous closure of the gap: 1/|γ₁| is found to show good scaling behaviour, and to scale to zero when n_s is extrapolated to ∞. The present theory evidently agrees with the inferences drawn from ED. These concur with our predictions (§4) that the gap closes continuously and with an exponent ν = 1, that |γ₁| diverges, and (less importantly) with the value of U_c itself; note moreover that the ED gap [12] is in rather good agreement with the present work over a wide U range. As described in §4, inclusion of the ω_s = 0 spin-flip scale is central in describing the destruction of the Mott insulator.
That IPT appears unreliable close to U_c [5] thus suggests an incomplete inclusion of the effects of this spin scale (which cannot be entirely absent, since IPT does give the correct strong coupling spectrum [47]), although in physical terms the origin of the spin-flip scale within IPT is not transparent.

To conclude, we have developed in this paper a theory for the T = 0 Mott-Hubbard insulating phases of the d = ∞ Hubbard model, encompassing both the antiferromagnetic and paramagnetic insulators. The microscopic perspective it affords hinges on the importance of low-energy scales for insulating spin-flip excitations. Their existence is physically natural within the explicit local moment picture intrinsic to the theory, and their inclusion is required not only to obtain the strong coupling limits of the single-particle spectra (which are captured exactly), but more generally to describe the entire insulating regimes, including for the paramagnetic phase in particular the destruction of the Mott insulator.

Let us also note what we have not considered: the metallic state of the paramagnetic phase. But a glimpse of what is required to describe the metal within the present framework is evident from Fig. 12. For U/t_* = 3.5, close to the critical U_c of §4, this shows the spectral density of transverse spin excitations, ImΠ⁺⁻_AA(ω) (here obtained, as described in §4, with ⁰Π⁺⁻_AA renormalized in terms of the G_Aσ). The ω = 0 spin-flip pole characteristic of the paramagnetic insulator is evident, and persists down to U_c. Clearly, however, the spectral edges of the Stoner-like bands are themselves approaching ω = 0. This they do at U = U_c, and for U < U_c in the metallic phase the insulating spin-flip pole at ω = 0 is replaced by a resonance at a small non-zero frequency ω = ω_K, indicative of the Kondo-like physics known to dominate the correlated metal [5,6]. Extension of the present approach to describe the metal, encompassing the Kondo spin-scale in such a manner that the correlated state is correctly a Fermi liquid, will be described in a subsequent paper.

ACKNOWLEDGMENTS

DEL expresses his warm thanks to Ph. Nozières for the hospitality of the Institut Laue-Langevin, Grenoble, with particular thanks to Ph. Nozières, F. Gebhard and N. Cooper for many stimulating discussions on the subject matter of this work. MPE acknowledges an EPSRC studentship, and we are further grateful to the EPSRC (Condensed Matter Physics) for financial support.

FIG. 4 (caption fragment): ... [24,25]. The simple estimate T_N ≃ ½ω_p(U), argued to be valid for U/t_* ≳ 3, is also shown (solid line for U/t_* > 3). The strong coupling asymptote T_N^∞ = t_*²/2U is indicated by the dotted line.

FIG. 12. ImΠ⁺⁻_AA(ω) vs ω (in units of t_*) at U/t_* = 3.5, close to the boundary of the P insulating state, with renormalization as described in the text.
2019-04-14T02:16:28.138Z
1997-06-06T00:00:00.000
{ "year": 1997, "sha1": "cc67257a973eda08089eb9a08e6b2cb10e1d6950", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/9706053", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "cc67257a973eda08089eb9a08e6b2cb10e1d6950", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
239467539
pes2o/s2orc
v3-fos-license
Methotrexate-Induced Septicemia With Severe Pancytopenia and Diffuse Cutaneous Ulcerative Lesions

Methotrexate, a folate antimetabolite and one of the first few anti-neoplastic drugs, is now a commonly used drug in the treatment of many inflammatory disorders, ranging from rheumatoid arthritis to psoriasis. The life-threatening toxicity of methotrexate in inflammatory diseases is not commonly encountered. Here we report a case of life-threatening multiorgan failure from methotrexate toxicity; the drug had been given for skin lesions suspected to be psoriasis.

Introduction

Methotrexate is a folate antimetabolite and one of the first few anti-neoplastic drugs, administered in high doses for diseases like lymphoma and leukemia [1]. However, with the evolving evidence of its anti-inflammatory and immunomodulatory properties, it has become one of the drugs of choice in many diseases like rheumatoid arthritis, small vessel vasculitis, and psoriasis [2][3][4]. The toxicity of methotrexate has been reported with both high and low doses, and certain risk factors are known to contribute to the toxicity [5][6][7]. Here we report a case of methotrexate toxicity with life-threatening multiorgan failure in a patient given the drug for suspected cutaneous psoriasis.

Case Presentation

A 46-year-old male, a chronic alcoholic, presented to the emergency department with multiple blisters and hemorrhagic crusting involving both upper and lower limbs and the trunk, of two months' duration. The lesions started on the upper limbs as blisters and erosions, following which the patient consulted a dermatologist (Figure 1). With a clinical suspicion of psoriasis, the patient was prescribed oral methotrexate weekly and topical steroids by a private practitioner. However, after two weeks on the medication, the lesions worsened, resulting in multiple necrotic ulcers and hyperpigmented papules and macules distributed bilaterally on the chest and the upper and lower limbs, with erosions and necrotic ulcers on the back, some covered with crusts (Figure 2). This was followed by high-grade fever and difficulty in breathing. There was also a history of bleeding from the gums, mouth, and nasal cavity five days prior to admission. The patient then developed altered sensorium and was rushed to our center for further management. On examination, the patient had mucositis of the buccal mucosa and active bleeding from the gums and nasal cavity. Examination of the skin revealed multiple blisters and erythematous to hyperpigmented, scaly lesions with crusts on the neck, trunk, and limbs. There were multiple ulcers over these lesions, some of them showing hemorrhagic crusts (Figure 3). His baseline saturation was 60% on room air and 91% with high flow oxygen therapy. Systemic examination revealed basal crepitations on respiratory examination and a Glasgow Coma Scale score of 5/15 with no focal neurological deficits. A CT brain was done to rule out intracranial hemorrhage and did not show any obvious abnormality. Laboratory investigations revealed severe pancytopenia with hemoglobin of 6 gm%, total leucocyte count (TLC) of 300/cumm, platelets of 5000/cumm, and a reticulocyte count of 0.08%. Peripheral blood smear showed macrocytic anemia with target cells, leukocytopenia with lymphocytosis, and severe thrombocytopenia. Biochemical parameters showed raised serum lactate dehydrogenase (LDH, 475 IU/L) and elevated serum procalcitonin (25.24 ng/mL). His serum creatinine was 2.5 mg/dL and serum urea was 150 mg/dL.
Bone marrow study revealed hypocellular marrow with marked suppression of the erythroid, myeloid, and megakaryocytic cell lineages, favoring myelosuppression. Residual erythroid and myeloid cell lineages also showed megaloblastic changes (Figure 4). His bone marrow culture and blood culture had significant growth of methicillin-resistant Staphylococcus aureus (MRSA), which was sensitive to vancomycin. His chest X-ray was suggestive of a right lower lobe consolidation (Figure 5). His skin biopsy, taken from an intact lesion on the medial aspect of the left foot, showed features of psoriasis (Figure 6). With the background history of worsening symptoms after the intake of methotrexate, the multiorgan dysfunction was suspected to be due to methotrexate toxicity; however, serum methotrexate estimation could not be carried out. All the investigations are tabulated in Table 1 and Table 2. Since low doses of methotrexate are unlikely to cause severe life-threatening complications, a detailed history of the doses taken by the patient was obtained, and it was found that the patient had consumed methotrexate 10 mg daily (rather than weekly, as prescribed) for two weeks, after which the cutaneous lesions increased and the other complications followed. Based on the above clinical and laboratory findings, with no other contributing factors, a diagnosis of methotrexate toxicity was made. The Naranjo Adverse Drug Reaction Probability scale score was in the 1 to 4 range (possible adverse drug reaction).

The patient was intubated and ventilated on the day of admission. He was managed along the lines of severe sepsis with acute kidney injury, pneumonia, and pancytopenia, with supportive skin care of the ulcerated lesions. The patient was started on intravenous leucovorin calcium, 25 mg every 6 hours on day 1, followed by 10 mg every 6 hours on days 2 and 3. Apart from intravenous antibiotic therapy and intravenous fluids, he received 2 units of packed red blood cells and 16 units of platelet concentrate due to upper gastrointestinal bleeding and bleeding from the gums, nasal cavity, and buccal mucosa. Total parenteral nutrition was continued for seven days. He responded to the medical treatment and on the 10th day of intensive care therapy he was extubated and continued with supportive care. His lesions healed with the continuation of antibiotic therapy and daily dressing with povidone-iodine followed by the application of silver sulfadiazine cream. He was discharged on the 28th day of his hospital stay.

Discussion

There have been case reports of methotrexate toxicities in the form of cutaneous skin lesions, acute renal failure, pancytopenia, pneumonitis, and neurotoxicity [5][6][7][8][9][10][11][12][13]. Here we report methotrexate toxicity following overdosage of methotrexate by the patient, resulting in life-threatening multiorgan failure with severe pancytopenia, septicemia, acute kidney injury, and diffuse cutaneous skin lesions. The severity of methotrexate toxicity can vary from mild to severe; toxicities like myelosuppression, pancytopenia, acute renal failure, interstitial pneumonitis, hepatitis, gastrointestinal mucositis, and leukoencephalopathy are severe forms usually associated with high doses and predisposing risk factors. Methotrexate is a folate antagonist and an inhibitor of cellular proliferation. Cells with the highest turnover, such as those of the oral mucosa, gastrointestinal tract, and bone marrow, are the most susceptible to its effect. Thus, mucositis develops when a patient's oral epithelial cells are affected.
Myelosuppression in patients with methotrexate toxicity is explained by the same mechanism [14]. As a result, pancytopenia develops, which leads to increased bleeding, easy bruising, macrocytic red blood cells, and an increased risk of infections [15]. Ulceration of the psoriatic plaques in the skin due to methotrexate toxicity is uncommon. Two patterns of skin ulceration have been described in psoriasis patients treated with methotrexate. In type I ulcers, the psoriatic plaques begin to erode shortly after commencing methotrexate treatment. Type II ulcers affect uninvolved skin and are observed with higher doses. The pathogenic mechanism in both types is believed to be the direct toxicity of the drug. Psoriatic plaques are initially painful and red, and then they develop superficial erosions [17].

Whether methotrexate is given in high or low doses, all preventive measures should be taken before administering the drug. High doses of methotrexate are administered with maintenance of a good urine output, urinary alkalinization, monitoring of serum creatinine and electrolytes, pharmacokinetically guided leucovorin rescue therapy, and serum methotrexate estimation. Toxicities from low doses of methotrexate are stomatitis, nausea, hepatitis, cutaneous eruptions, fever, macrocytosis, and myelosuppression, which can be prevented by educating and periodically monitoring the patient, along with co-administration of weekly folic acid. Immediate withdrawal of methotrexate when toxicity is encountered, together with supportive therapy, is an important intervention. The proper dosage and the related toxicities should be explained to patients before administering the drug. In the present case, the patient had an overdose of the drug, following which he suffered a severe form of bone marrow suppression, acute renal failure, and diffuse cutaneous ulcerative lesions with septicemia. Pancytopenia could also have been contributed to by folate deficiency, which might have been present before the event given that the patient was a chronic alcoholic. The skin lesions of the patient could be attributed to the direct toxicity of the methotrexate therapy.

Conclusions

Methotrexate is a commonly used drug for many systemic inflammatory diseases and cutaneous lesions in clinical practice. The toxicity that our patient suffered was due to overdosage and resulted in life-threatening complications; if such complications are not managed in a timely manner, the mortality is known to be very high. Hence the toxicities that can result from methotrexate should always be considered before initiating the drug therapy. It is important for all clinicians to carry out a detailed laboratory evaluation prior to initiation of the therapy, and to provide adequate education and close monitoring during the course of therapy, to avoid adverse drug events from methotrexate.

Additional Information

Disclosures

Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work.
Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
2021-10-20T15:20:55.055Z
2021-09-01T00:00:00.000
{ "year": 2021, "sha1": "d7bafff15e9546a09c1c9986c4ae88bc04d81f75", "oa_license": "CCBY", "oa_url": "https://www.cureus.com/articles/70197-methotrexate-induced-septicemia-with-severe-pancytopenia-and-diffuse-cutaneous-ulcerative-lesions.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1481e31d1874c5c2fa1bd00da311509a80adbdac", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
232484180
pes2o/s2orc
v3-fos-license
Plasma biomarker profiling of PIMS-TS, COVID-19 and SARS-CoV2 seropositive children – a cross-sectional observational study from southern India.

Background

SARS-CoV-2 infection in children can present with varied clinical phenotypes, and understanding the pathogenesis is essential to inform the clinical trajectory and management.

Methods

We performed a multiplex immune assay analysis and compared the plasma biomarkers of Paediatric inflammatory multisystem syndrome temporally associated with SARS-CoV-2 infection (PIMS-TS), acute COVID-19 infection (COVID-19), SARS-CoV-2 seropositive and control children admitted to a tertiary care children's hospital in Chennai, India. Pro-inflammatory cytokines, chemokines and growth factors were correlated with SARS-CoV-2 clinical phenotypes.

Findings

PIMS-TS children had significantly elevated levels of the cytokines IFNγ, IL-2, TNFα, IL-1α, IFNα, IFNβ, IL-6, IL-15, IL-17A, GM-CSF, IL-10, IL-33 and IL-1Ra; elevated chemokines CCL2, CCL19, CCL20 and CXCL10; and elevated VEGF, Granzyme B and PD-L1 in comparison to COVID-19, seropositive and control children. COVID-19 children had elevated levels of IFNγ, IL-2, TNFα, IL-1α, IFNα, IFNβ, IL-6, IL-17A, IL-10, CCL2, CCL5, CCL11, CXCL10 and VEGF in comparison to seropositive and/or controls. Similarly, seropositive children had elevated levels of IFNγ, IL-2, IL-1α, IFNβ, IL-17A, IL-10, CCL5 and CXCL10 in comparison to control children. Plasma biomarkers in PIMS-TS and COVID-19 children showed a positive correlation with CRP and a negative correlation with the lymphocyte count and sodium levels.

Interpretation

We describe a comprehensive plasma biomarker profile of children with different clinical spectra of SARS-CoV-2 infection from a low- and middle-income country (LMIC) and observed that PIMS-TS is a distinct and unique immunopathogenic paediatric illness related to SARS-CoV-2, presenting with a cytokine storm different from acute COVID-19 infection and other hyperinflammatory conditions.

Introduction

Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) infection in children has generally been a mild or asymptomatic illness compared with adults [1]. However, Paediatric inflammatory multisystem syndrome temporally associated with SARS-CoV-2 infection (PIMS-TS), also known as Multisystem inflammatory syndrome in children (MIS-C), a potentially fatal disease, is now recognised as a distinct childhood illness related to SARS-CoV-2 [2,3]. Clinically, PIMS-TS occurs as a late manifestation of SARS-CoV-2 infection (3–6 weeks after), with many children presenting with multi-organ dysfunction and elevated SARS-CoV-2 IgG antibodies, whereas children with acute COVID-19 infection (COVID-19) present with a mild or asymptomatic illness. Given this diverse clinical presentation, there is an urgent need to understand the immunology and pathogenesis of SARS-CoV-2 infection in children, to help clinicians understand the differences and similarities between different clinical phenotypes (PIMS-TS, COVID-19, and asymptomatic seropositive), so that optimal and effective treatment can be delivered. Studies on the immunology of SARS-CoV-2 infections in children, including PIMS-TS, have mostly been from developed countries [4–7], and there is currently no data from low- and middle-income countries (LMIC).
We, therefore, performed a systemic level analysis of cytokines, chemokines, and growth factors in children with PIMS-TS and compared them to those in children with COVID-19, seropositive children and healthy controls presenting to a tertiary care children's hospital in Chennai, India, and aimed to define and identify important biomarkers involved in the immunopathogenesis of PIMS-TS and COVID-19.

Study design and participants

We performed an immunological analysis on plasma samples from children admitted to Kanchi Kamakoti CHILDS Trust Hospital (KKCTH), a tertiary children's hospital in Chennai, India, from 1 June 2020 to 30 September 2020. Children of either sex, aged 12 months and under 18 years, needing hospitalisation for any condition, were included in this study after the caretaker's informed written consent. Blood was collected into EDTA tubes (BD Biosciences); plasma was obtained by centrifugation and frozen at −80°C until transferred to the National Institute for Research in Tuberculosis (NIRT), Chennai, for serological and immunological analysis. Peripheral blood sampling in all children was done prior to receiving any immunomodulatory treatment. Multiplex assays on all samples were performed at the same time to limit batch effects. Study staff involved in immunological and serological assays were blinded to clinical data.

Clinical data

The KKCTH study team approached caregivers of all children admitted to hospital during the above-mentioned period for participation in the study. Demographic, epidemiological, medical and laboratory data were collected on a standardised case report form. Anonymised data was shared with NIRT for statistical analysis. Acute COVID-19 disease and the severity of COVID-19 were defined according to the Ministry of Health and Family Welfare (MOHFW) guidelines [8] issued by the Government of India, and children with PIMS-TS were diagnosed according to the Royal College of Paediatrics and Child Health (RCPCH) case definition for PIMS-TS [9].

SARS-CoV-2 RT-PCR test

SARS-CoV-2 real-time reverse-transcriptase polymerase chain reaction (RT-PCR) was performed by Indian Council of Medical Research (ICMR) approved laboratories. Results of RT-PCR were shared along with the clinical data by the KKCTH study team.

SARS-CoV-2 antibody assay

Serological and immunological assays were performed at the National Institutes of Health - National Institute for Research in Tuberculosis - International Center for Excellence in Research laboratory (NIH-NIRT-ICER), NIRT, Chennai. Antibodies were quantified in plasma using the iFlash® SARS-CoV-2 IgG chemiluminescence antibody assay (CLIA) (YHLO Biotechnology Corporation, Shenzhen, China) according to the manufacturer's instructions. An antibody titre of ≥10 AU/ml was considered positive.

Research in context

Evidence before this study

SARS-CoV-2 infection in children can present with varied clinical phenotypes. Children with PIMS-TS/MIS-C most commonly present with multi-organ dysfunction and elevated SARS-CoV-2 IgG antibodies, whereas children with acute COVID-19 infection present with a mild or asymptomatic illness. Understanding the pathogenesis is essential to help understand the differences and similarities between different clinical phenotypes and to inform about the clinical trajectory, so that optimal and effective treatment can be delivered.
Added value of this study

To our knowledge this is the first study from an LMIC setting describing the plasma biomarker profile of children with the different clinical spectra of SARS-CoV-2 infection, namely PIMS-TS, acute COVID-19 infection and asymptomatic seropositive infection. We were able to identify biosignatures that differentiate the clinical phenotypes distinctly, and observed that the cytokine storm and hyperinflammation in PIMS-TS are different from those of acute COVID-19 and other hyperinflammatory conditions like Kawasaki Disease. Our results provide an immunological basis for the use of current treatment modalities in PIMS-TS and emphasise the need for further research in identifying the optimal biological agent for treatment.

Implications of all the available evidence

The detailed analysis in our study provides additional information on the pathogenesis and may guide clinicians in the assessment and management of children with PIMS-TS and acute COVID-19 infection in future.

Funding

This study was funded by the National Institutes of Health - National Institute for Research in Tuberculosis - International Center for Excellence in Research laboratory (NIH-NIRT-ICER), Chennai.

Statistical analysis

For analysis, children were categorised into four groups: COVID-19 (RT-PCR positive), PIMS-TS, Seropositive (IgG positive, non-PIMS-TS) and Control (both serology and RT-PCR negative). Continuous variables are presented as medians and interquartile ranges (IQRs), and categorical variables are reported as numbers and proportions. Comparison between the groups was performed using the Mann-Whitney U test. Geometric means (GM) were used for measurements of central tendency. Statistically significant differences between PIMS-TS, COVID-19, seropositive children and controls were analysed using the Kruskal-Wallis test with Dunn's multiple comparisons. The Mann-Whitney test was used to compare cytokine concentrations between children with PIMS-TS needing Paediatric Intensive Care Unit (PICU) care and those who did not require PICU care, as well as between COVID-19 children with mild disease and those with moderate to severe disease. P < 0.05 was considered statistically significant and all tests were two-sided. Analyses were performed using GraphPad PRISM Version 9.0 (GraphPad Software, CA, USA). Principal component analysis (PCA) and Spearman's correlation analysis were done using the statistical software JMP 14.0 (SAS, Cary, NC, USA). (An illustrative sketch of this comparison pipeline is given below.)

Basic characteristics

We enrolled and performed the immunological assays on stored plasma samples of 145 hospitalised children (33 COVID-19, 44 PIMS-TS, 47 seropositive and 21 control children) from June to September 2020. The median age was 5 years (range 1 month - 17 years), and 58% (84/145) were male. All COVID-19 children were SARS-CoV-2 RT-PCR positive and all PIMS-TS children were seropositive (IgG). Of the 33 COVID-19 children, 22 (67%) presented with mild symptoms, 3 (9%) had moderate symptoms, 2 (6%) had severe symptoms needing PICU care and 6 (18%) children were asymptomatic. Among the 47 seropositive non-PIMS-TS children, 30 (64%) were admitted to the hospital for elective procedures; they therefore had no systemic symptoms. Further demographics and clinical presentation of COVID-19, PIMS-TS and seropositive children are described in Tables 1 and 2. The detailed clinical presentation of these children has been previously reported [10,11].
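The comparison pipeline described under 'Statistical analysis' above can be sketched as follows. This is an illustrative outline on placeholder values, not the study data; the authors used GraphPad PRISM and JMP, whereas the sketch below uses SciPy, with a Bonferroni correction standing in for Dunn's post-hoc procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder cytokine concentrations (pg/ml) -- NOT study data.
groups = {
    "PIMS-TS":      rng.lognormal(3.0, 0.6, 44),
    "COVID-19":     rng.lognormal(2.2, 0.6, 33),
    "Seropositive": rng.lognormal(1.8, 0.6, 47),
    "Control":      rng.lognormal(1.5, 0.6, 21),
}

# Geometric means as the measure of central tendency, as in the paper.
gms = {name: stats.gmean(values) for name, values in groups.items()}

# Kruskal-Wallis test across the four groups (two-sided, alpha = 0.05) ...
h_stat, p_kw = stats.kruskal(*groups.values())

# ... followed by pairwise comparisons (Bonferroni-adjusted Mann-Whitney U
# here, standing in for Dunn's multiple-comparison procedure).
names = list(groups)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
if p_kw < 0.05:
    for a, b in pairs:
        _, p = stats.mannwhitneyu(groups[a], groups[b],
                                  alternative="two-sided")
        print(f"{a} vs {b}: adjusted p = {min(1.0, p * len(pairs)):.3g}")
```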
For comparison, we included 21 children with a similar median age (6 years, IQR: 1–15 years) and sex distribution (male: 52%, 11/21), who were both SARS-CoV-2 RT-PCR negative and seronegative and presented to hospital for elective procedures (controls). They had no other co-morbid conditions and no history of COVID-19 contact.

Plasma levels of CC and CXC chemokines are elevated in PIMS-TS and COVID-19 children

We measured the plasma levels of CC and CXC chemokines (Fig. 2) and observed that the levels of CCL2, CCL5, CCL11, CCL19, CCL20 and CXCL10 were significantly higher in PIMS-TS compared to COVID-19, seropositive and control children; COVID-19 children had elevated levels of CCL2, CCL5, CCL11 and CXCL10 in comparison to seropositive and/or control children; and seropositive children had elevated levels of CCL5 and CXCL10 in comparison to control children. PIMS-TS children with myocardial dysfunction and gastrointestinal manifestations had markedly high levels of CCL2, CCL19 and CCL20. Thus, plasma chemokines could distinguish between PIMS-TS, COVID-19, seropositive and control children.

Plasma levels of growth factors are elevated in PIMS-TS and COVID-19

Analyses of growth factors (Fig. 3) showed significantly higher levels of VEGF, Granzyme B and PD-L1 in PIMS-TS children compared to COVID-19, seropositive and control children. Similarly, COVID-19 children had elevated levels of VEGF in comparison to seropositive and control children. Thus, certain growth factors could distinguish between PIMS-TS, COVID-19, seropositive and control children. Elevated VEGF was found in all PIMS-TS children, signifying endothelial dysfunction.

Plasma inflammatory markers can robustly distinguish PIMS-TS from COVID-19, seropositive and control children

We performed principal component analysis (PCA) of IFNγ, IL-2, TNFα, IL-1α, IFNβ, IL-6, IL-17A, IL-10, CXCL10 and VEGF to determine the discriminatory power of plasma cytokines, chemokines and growth factors in distinguishing children with PIMS-TS from COVID-19, seropositive and control children (Fig. 4). The PCA evidently demonstrates the ability of these markers to differentiate PIMS-TS from COVID-19, seropositive children and controls, with elevated levels seen in PIMS-TS and COVID-19 children.

Plasma cytokines are markers of disease severity in PIMS-TS

To assess whether there is an association between the systemic levels of pro-inflammatory cytokines, chemokines and growth factors and disease severity in PIMS-TS, we compared the circulating levels of inflammatory markers between PIMS-TS children needing PICU care (n = 23) and children who did not require PICU care (n = 21), and observed that the systemic levels of IFNγ, TNFα, IL-6, IFNβ and IL-10 were significantly higher in children with PIMS-TS needing PICU care (Fig. 5), suggesting a correlation between elevated systemic levels of pro-inflammatory cytokines and clinical severity.

Plasma cytokines are markers of disease severity in COVID-19

Next, we evaluated the correlation between the circulating levels of inflammatory markers and disease severity in COVID-19, and found that the systemic levels of TNFα, IL-6, IL-10 and CXCL10 were significantly higher in children with moderate to severe disease in comparison to children with mild disease (Fig. 6).

Discussion

We describe the immune profile of children presenting with different spectra of SARS-CoV-2 infection, ranging from acute COVID-19 disease to PIMS-TS, and identify immune biomarkers that could help differentiate the varied presentations of infection.
Elevated levels of cytokines may have systemic effects and may in turn cause organ dysfunction [12]. This may potentially be one of the reasons for children with PIMS-TS presenting with multi-organ dysfunction. Results from our study provide novel data that might further improve our understanding of the immunology and pathogenesis of PIMS-TS as well as COVID-19. We observed that the immune profile of children with PIMS-TS was distinct from that of COVID-19 and seropositive children, with markedly elevated cytokines and chemokines. This is similar to the data from other groups analysing the immune profile in PIMS-TS [2,13]. The symptomatology of PIMS-TS and KD is indeed overlapping and similar [2,3,13], presenting a diagnostic challenge, and it is extremely difficult to differentiate the cytokine storm of PIMS-TS from that of KD on the basis of clinical features alone [6,12,14]. Immunological assessment, therefore, could help unravel disease severity and establish the clinical course in PIMS-TS. Both TNFα and IL-17A are known to be implicated in the pathogenesis of PIMS-TS and KD [4,15]. However, IFNγ, which is not known to be strongly associated with KD, is markedly increased in PIMS-TS as well as COVID-19 [4–6], signifying that IFNγ may be important in the pathogenesis of PIMS-TS and COVID-19, thus implying a role for T-cells, NK-cells and macrophages. Likewise, marked elevation in IL-10 is associated with PIMS-TS, which is uncommon in KD, where we tend to observe high levels of IL-1, IL-2, and IL-6 [4,15]. In concordance, children with PIMS-TS in our study had markedly increased levels of IFNγ and IL-10. Correlating with the elevated levels of cytokines and chemokines, gastrointestinal symptoms and myocardial dysfunction were more prevalent in PIMS-TS children, both of which are not commonly seen in KD. The results of our study add to the evidence that the cytokine profile of PIMS-TS is different from that previously described in KD [4–7], thus emphasising that PIMS-TS is a distinct paediatric illness. We also observed differences in the immune profile of children with PIMS-TS needing PICU care and those who did not require PICU care (Table 3). PIMS-TS children with shock requiring vasoactive medications were admitted to the PICU and were characterised by elevated levels of IFNγ, TNFα, IL-6 and IL-10, affirming the distinct immunopathogenesis of PIMS-TS. Similarly, we also found differences in the immune profile of children with moderate to severe COVID-19 and mild COVID-19, suggesting a correlation between elevated systemic levels of pro-inflammatory immune markers and disease severity. Children with PIMS-TS in our cohort were treated with intravenous immunoglobulins (IVIG), corticosteroids, and an IL-6 inhibitor (tocilizumab). IVIG can neutralize the immunological effect of autoantibodies and the use of steroids may provide broad immunosuppression; however, we are yet to fully understand the immunopathological role of these treatments in PIMS-TS [16,17]. Currently there is no consensus on the choice of biological agents (IL-1 antagonists vs IL-6 receptor blockers vs anti-TNF agents) in the management of PIMS-TS [17], which is therefore based on the clinician's discretion. Interestingly, our cohort of children did not have high levels of IL-1β, implying that further research is warranted to establish whether IL-1 antagonists can be used as a potential therapy in PIMS-TS among Indian children.
However, the IL-1 antagonist IL-1RA (anakinra) is currently unavailable in India, and the IL-6 receptor blocker tocilizumab has been used in a few children [10,18]. We also observed that children with PIMS-TS in our cohort had few underlying comorbidities as compared to COVID-19 and seropositive children (Tables 1 and 2). The extent to which this observation influences the pathogenesis is not known and requires further investigation, with a focus on genetic studies. It is well known that cytokines play an important role in immunopathology during viral infection [12]. Recently published studies in adult populations have reported that elevated levels of inflammatory cytokines such as TNF-α, IL-1β, IL-6, IL-10, IL-17, IL-18 and IFN-γ are seen in active COVID-19 individuals compared to seropositive individuals and healthy controls [19,20]. TNFα and IFNγ are known to particularly drive COVID-19 disease severity [20], and in addition, IL-6, IL-1β and IL-12 have been consistently implicated in severe disease [19,20]. Similar to this finding from the adult population, children with COVID-19 in our study also showed distinct cytokine/chemokine upregulation in comparison to seropositive and control children. The hyperinflammatory response in PIMS-TS, albeit with a delayed presentation, appears to be similar to that seen in severe COVID-19 in adults [6,19–21]. However, there are various immunological differences [22,23]. The cytokine storm in adult COVID-19 is characterised by increased IL-1, IL-6 and TNF-α [19,20,24] and predominantly involves the respiratory system, whereas the cytokine storm in PIMS-TS, with elevated IFNγ, TNFα, IL-6, IL-10 and IL-17A [4–6], involves multiple systems, mainly gastrointestinal, mucocutaneous and cardiac, with few or no respiratory manifestations [3,22]. Congruent with this, the immune profile observed in PIMS-TS children showed prominent mucosal and endothelial immune signatures. We also observed that the immune profile of non-PIMS-TS seropositive children was notably different from that of uninfected control children, indicating an association between the cytokines and the time from infection. The median interval to blood sampling in asymptomatic seropositive children was 3 weeks since proven or suspected COVID-19 illness or contact. Our study has limitations, as we included children from a single institution with small numbers of children in all groups, and we were unable to assess the immunological profile with longitudinal follow-up. In addition, it would have been ideal to compare the immunological profile with that of children with KD from a similar population during the same time. We also studied a smaller number of children with COVID-19, and are unable to describe and identify subtle differences in the immune profile of children with mild, moderate and severe disease. Future work investigating the role of genetics and other environmental factors in the development of severe COVID-19 and PIMS-TS is pivotal, as this has the potential to guide treatment. In conclusion, our study is the first to describe the immune profile of children with PIMS-TS and COVID-19 from an LMIC. We conclude that PIMS-TS is a distinct and unique immunopathogenic paediatric illness related to SARS-CoV-2, presenting with a cytokine storm different from acute COVID-19 and other hyperinflammatory conditions.
We were also able to provide biosignatures that differentiate asymptomatic seropositive children from uninfected control children. The detailed analysis in our study provides additional information on the pathogenesis and may guide clinicians in the assessment and management of children with PIMS-TS and COVID-19 in future.

Figure legend (Fig. 6): The plasma levels of IFNγ, IL-2, TNFα, IL-1α, IFNβ, IL-6, IL-17A, IL-10, CXCL10 and VEGF were measured in children with moderate to severe disease (n = 5), children with mild disease (n = 22) and asymptomatic children (n = 5). The data are represented as scatter plots, with each circle representing a single individual. P values were calculated using the Kruskal-Wallis test with Dunn's post-hoc test for multiple comparisons.

Declaration of Competing Interest

The authors disclose no conflicts of interest.
2021-04-02T13:14:13.993Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "a6890a4e4bc58348d18a89d84bc057750a486904", "oa_license": "CCBY", "oa_url": "http://www.thelancet.com/article/S2352396421001109/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e9011a826d8bbb2dd143336a44ba016b740cd7f4", "s2fieldsofstudy": [ "Medicine", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }