Bound states of three and four resonantly interacting particles

We present an exact diagrammatic approach for the problem of dimer-dimer scattering in 3D for dimers being a resonant bound state of two fermions in a spin-singlet state, with corresponding scattering length $a_F$. Applying this approach to the calculation of the dimer-dimer scattering length $a_B$, we recover exactly the already known result $a_B=0.60 a_F$. We use the developed approach to obtain new results in 2D for fermions as well as for bosons. Namely, we calculate bound state energies for three $bbb$ and four $bbbb$ resonantly interacting bosons in 2D. For the case of resonant interaction between fermions and bosons we calculate exactly bound state energies of the following complexes: two bosons plus one fermion $bbf$, two bosons plus two fermions $bf_{\uparrow}bf_{\downarrow}$, and three bosons plus one fermion $bbbf$.

I. INTRODUCTION

Following the experimental realization of Bose-Einstein condensation in ultracold bosonic gases, together with its intensive study, the physics of ultracold Fermi gases has taken off recently with a strong development of experimental and theoretical investigations within the last few years 1 . In particular, much advantage has been taken of various Feshbach resonances, which offer the possibility of observing experimentally the so-called BEC-BCS crossover. This has been done in particular in 6 Li and in 40 K. In the weak coupling limit of small negative scattering length, which is realized far away on one side of the resonance, the corresponding weak attractive interaction between fermions leads to a BCS-type condensate of Cooper pairs. On the other side of the resonance, where the scattering length is positive, weakly bound dimers, or molecules, consisting of two different fermions are formed. When one goes far enough from the resonance on this positive side, one obtains a weakly interacting gas of these dimers, which may in particular form a Bose-Einstein condensate, as has recently been observed experimentally 2,3,4,5 . In the present paper, motivated by the problem raised by the physics of this dilute gas of composite bosons, we deal with dimer-dimer elastic scattering and present an exact diagrammatic approach to its solution. This is done by staying within the so-called resonance approximation, which is quite suited to the physical situation found with a Feshbach resonance. In this case the (positive) scattering length greatly exceeds the characteristic radius r 0 of the attractive interaction between fermionic atoms. A problem of this kind was first investigated by Skorniakov and Ter-Martirosian 6 in the case of the 3-body fermionic problem. They showed that the scattering length of a fermion on a weakly bound dimer is determined by a single parameter, namely the two-body scattering length a F between fermions, and is equal to 1.18a F in the zero-range limit for the interatomic potential. A similar situation is found in the case of four fermions, where the dimer-dimer scattering length is fully determined by this same scattering length a F . In a study of the crossover problem Haussmann 7 calculated this scattering length of composite bosons a B at the level of the Born approximation and found it equal to 2a F . This result was later on much improved by Pieri and Strinati 8 , who took into account the repeated scattering of these composite bosons in the ladder approximation.
This diagrammatic approach led them to a scattering length approximately equal to a B ≃ 0.75a F . However, this ladder approximation is not exact, because it misses an infinite number of other diagrams which in principle lead to a contribution of the same order of magnitude as those taken into account. Very recently this problem has been solved exactly by Petrov, Salomon, and Shlyapnikov 9,10 who found for the scattering length of these composite bosons a B = 0.6a F . This has been achieved by solving directly the Schrödinger equation for four fermions, using the well-known method of pseudopotentials. Here we give an exact solution of this scattering problem of two weakly bound dimers, using a diagrammatic approach in the resonance approximation, which can be seen as a bridge between the approach of Pieri and Strinati 8 and the exact result of Petrov, Salomon, and Shlyapnikov 9,10 . In order to show the strength and the versatility of our approach, we make use of it to obtain new results for various systems in the two-dimensional (2D) case, which is of interest not only for cold gases, but also for high T c superconductivity. Specifically we consider first a system of resonantly interacting bosons. We calculate exactly the three-boson bbb and four-boson bbbb bound state energies in this case. We also make use of our approach for the study of 2D bosons interacting resonantly with fermions. In this case we calculate exactly the bound state energies of two bosons plus one fermion bbf , two bosons plus two fermions bf ↑ bf ↓ , and three bosons plus one fermion bbbf . In this respect the present paper is in line with previous results obtained by some of us. Indeed the possibility of pairing of two fermions f f 11,12 and of two bosons bb 13 was predicted, as well as the creation 14 of a composite fermion bf in resonantly interacting (a ≫ r 0 ) 2D Fermi-Bose mixtures.

II. THREE-PARTICLE SCATTERING

As a preliminary exercise we rederive the result of Skorniakov and Ter-Martirosian for the dimer-fermion scattering length a 3 using the diagrammatic method 15 . Following Skorniakov and Ter-Martirosian, in the presence of the weakly bound resonance level −E b (with E b > 0), we can limit ourselves to a zero-range interaction potential between the fermions in the scattering of these two particles. The two-fermion vertex can be approximated by a simple one-pole structure, which reflects the presence of the s-wave resonance level in the spin-singlet state, and is essentially given by the scattering amplitude, Eq. (1), where P = {P, E}, E is the total frequency and P is the total momentum of the incoming particles, m is the fermionic mass, and E b = 1/(ma F ²). Indices α, β and γ, δ denote the spin states of the incoming and outgoing particles. The function χ(α, β) stands for the spin-singlet state. We draw this vertex in the way shown in Fig. 1, where the double line can be regarded as a propagating dimer. The simplest process that contributes to the dimer-fermion interaction is the exchange of a fermion. We denote the corresponding vertex as ∆ 3 ; it is described by the diagram of Fig. 2. Its analytical expression is given by Eq. (3), where G(p) = 1/(ω − p²/2m + i0⁺) is the bare fermion Green's function. The minus sign in the right-hand side of Eq. (3) comes from the permutation of the two fermions. In order to obtain the full dimer-fermion scattering vertex T 3 we need to sum up all possible diagrams with an indefinite number of ∆ 3 blocks. In the present case these diagrams have a ladder structure.
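For reference, a standard form of the one-pole resonance vertex entering these ladder diagrams, consistent with the pole position stated above and with the normalization quoted just below (the extra factor 8π/(m 2 a F ) with respect to a standard boson propagator), is, up to an overall sign convention that may differ from the paper's Eq. (1),

$$T_2^{\alpha\beta,\gamma\delta}(P) \;=\; \frac{4\pi/m}{a_F^{-1} - \sqrt{\mathbf{P}^2/4 - mE - i0}}\;\chi(\alpha,\beta)\,\chi^{*}(\gamma,\delta),$$

which near the dimer pole E ≈ P²/4m − E b indeed reduces to [8π/(m² a F )] [E − P²/4m + E b ]⁻¹ times the spin-singlet factor.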
It is obvious that the spin projection is conserved in every order in ∆ 3 and thus T 3 α,β = δ α,β T 3 . The equation for T 3 has the diagrammatic representation shown in Fig. 3. It is obtained by writing that either the simplest exchange process occurs alone, or it is followed by any other process. In analytical form it reads as Eq. (4), where Σ q ≡ i ∫ d³q dΩ/(2π)⁴. We can integrate out the frequency Ω in Eq. (4) by closing the integration contour in the lower half-plane, since both T 2 (P − q) and T 3 (q, p 2 ; P ) are analytical functions of Ω in this region (this property for T 3 (p 1 , p 2 ; P ) results from Eq. (4) itself). Hence only the "on the shell" value T 3 ({q, q²/2m}, p 2 ; P ) comes into the right-hand side of Eq. (4). Moreover, if we are interested in the low-energy s-wave dimer-fermion scattering length a 3 , we have to put P = {P, E} = {0, −E b } and p 2 = 0. Hence Eq. (4) reduces to an equation for the "on the shell" value of T 3 (p 1 , p 2 ; P ). Taking into account the standard relation between the T-matrix and the scattering amplitude (with the reduced mass) and the fact that, from Eq. (1), T 2 has an additional factor 8π/(m² a F ) compared to a standard boson propagator, we find that the full vertex T 3 is connected with a 3 by the relation of Eq. (5). This leads us to introduce a new function a 3 (k), defined by Eq. (6); substituting it in Eq. (4), we obtain the Skorniakov-Ter-Martirosian equation (7) for the scattering amplitude. Solving this equation one obtains the well-known result 6 for the dimer-fermion scattering length a 3 = a 3 (0) = 1.18a F .

III. DIMER-DIMER SCATTERING

We can now proceed to the problem of dimer-dimer scattering. This problem was previously solved by Petrov et al. 9,10 by studying the Schrödinger equation for the four-fermion wave function. Our diagrammatic approach is conceptually close to theirs. Its basic point is that it requires the introduction of a special vertex which describes the interaction of one dimer, as a single object, with the two fermions constituting the other dimer. Let us investigate all the possible types of diagrams that contribute to the dimer-dimer scattering vertex T 4 . In this process both dimers are temporarily "broken" into their fermionic components, which means that the fermions of one dimer exchange and/or interact with the fermions of the other dimer. The simplest process is an exchange of fermions between the two dimers, shown in Fig. 4a. More complicated diagrams are composed by introducing intermediate interactions between the exchanging fermions (see Fig. 4b,c). As long as one of the fermions does not interact or exchange with the other ones, all these complications can be summed up in the T 3 block (see Fig. 4d) which describes, as we have seen in the preceding section, the scattering of a fermion on a dimer. Furthermore we may exchange the bachelor fermions participating in the T 3 scattering. The resulting series has the diagrammatic structure shown in Fig. 4e. This series describes a "bare" interaction between dimers. The last obvious step is to compose ladder-type diagrams from this "bare" interaction. A typical ladder diagram is shown in Fig. 4f. These general ladder diagrams describe all possible processes which contribute to dimer-dimer scattering. The fact that the T 4 vertex should be expressed in terms of T 3 was first noticed by Weinberg in his work on multiparticle scattering problems 16 . Note that a calculation of the diagrams shown in Figs.
4e and 4f requires information about the off-shell matrix T 3 , that is, a matrix with an arbitrary relation between the frequencies and momenta of the incoming and outgoing particles. On the other hand, for the calculation of the dimer-fermion scattering length a 3 in Eq. (7), only the simpler on-shell structure of T 3 is required, as we have seen in the preceding section. Luckily, as we will now see, we can exclude T 3 from our considerations and express T 4 only in terms of T 2 . By doing this we reduce the number of integral equations required for the calculation of the dimer-dimer scattering length a 4 . Since, as we have just seen, it is impossible to construct a closed equation for the dimer-dimer scattering vertex T 4 , we wish to find an alternative way of taking into account in one equation all the diagrams contributing to dimer-dimer scattering. Inspired by the work of Petrov et al. 9,10 and looking at the diagrams we have considered above, we are naturally led to look for a special vertex that describes the interaction of the two fermions constituting the first dimer with the second dimer taken as a single object. This vertex would be the sum of all diagrams with two fermions and one dimer as incoming lines. It would be natural to suppose that these diagrams should have the same set of outgoing lines (two fermionic and one dimer). However in this case there would be a whole set of disconnected diagrams contributing to our sum, describing the interaction of a dimer with only one fermion. As was pointed out by Weinberg 16 , one can construct a good integral equation of Lippmann-Schwinger type only for a connected class of diagrams. Thus we are led to turn our attention to the vertex Φ αβ (q 1 , q 2 ; p 2 , P ) corresponding to the sum of all diagrams with one incoming dimer, two incoming fermionic lines and two outgoing dimer lines (see Fig. 5). This is also quite natural from our point of view since, in our scattering problem, we are interested in a final state with two outgoing dimers. Indeed once this vertex Φ αβ (q 1 , q 2 ; p 2 , P ) is known, it is straightforward to calculate the dimer-dimer scattering vertex T 4 (p 1 , p 2 ; P ), which is given by Eq. (8). The corresponding diagrammatic representation is given in Fig. 5. One can readily verify that, in any order of interaction, Φ contains only connected diagrams. The spin part of the vertex Φ α,β has the simple form Φ α,β (q 1 , q 2 ; P, p 2 ) = χ(α, β)Φ(q 1 , q 2 ; P, p 2 ). The diagrammatic representation of the equation for Φ is given in Fig. 6. One can assign some "physical meaning" to the processes described by these diagrams. The diagram of Fig. 6a represents the simplest exchange process in the dimer-dimer interaction. The diagram of Fig. 6b accounts for the more complicated nature of the "bare" dimer-dimer interaction. Finally the diagram of Fig. 6c allows for multiple dimer-dimer scattering via the "bare" interaction (it generates ladder-type diagrams analogous to those of Fig. 4f). The last term in Fig. 6 means that we should add another set of three diagrams, analogous to those of Fig. 6a, b, c but with the two incoming fermions (q 1 and q 2 ) exchanged. This diagrammatic representation translates into the analytical equation (9) for the vertex Φ. Finally let us also indicate that it is possible to rederive the same set of equations, purely algebraically, by taking a complementary point of view.
Instead of focusing, as we have done, on the free fermion lines as soon as a dimer is "broken", we can rather keep track of the fermions which make up a dimer. This again leads automatically to the introduction of the vertex Φ(q 1 , q 2 ; p 2 , P ). Then Eq. (9) is recovered when one keeps in mind that, after breaking the dimers, one may have the propagation of a single dimer and two free fermions before another break (this corresponds to the second term in the right-hand side of Eq. (9)). Alternatively one may also have the propagation of two dimers, which leads to the third term in Eq. (9). Coming back now more specifically to our problem, we can put p 2 = 0 and P = {0, −E b } since we are looking for an s-wave scattering length. At this point we have a single closed equation for the vertex Φ in momentum representation, which we believe is analogous to the equation of Petrov et al. in coordinate representation. To make this analogy more prominent we have to exclude the frequencies from the equation by integrating them out. However this exclusion requires some more technical mathematics and we defer it to Appendix A. The dimer-dimer scattering length is directly related to the full symmetrized vertex T 4 (p 1 , p 2 ; P ) by a relation analogous to that of the preceding section, with statistics also taken into account. If one skips the second term in Eq. (9), i.e. omits the diagram of Fig. 6b, one arrives at the ladder approximation of Pieri and Strinati 8 . The exact equation (9) corresponds to the summation of all diagrams. We have calculated the scattering length in the ladder approximation and the scattering length derived from the exact equation, and obtained 0.78a F and 0.60a F respectively. Some details on our actual procedure are given in the next section. Thus our results in the ladder approximation are in agreement with the results 8 of Pieri and Strinati and, in the general case, with the results of Petrov et al. 9,10 . Note also that our approach allows one to find the dimer-dimer scattering length in the 2D case (this problem was previously solved by Petrov et al. 17 ). Finally we would like to mention that our results allow one to find the fermionic Green's function, the chemical potential and the sound velocity as functions of a F in the case of a dilute superfluid Bose gas of dimers at low temperatures. The problem of a dilute superfluid Bose gas of di-fermionic molecules was solved by Popov 18 , and later investigated in depth by Keldysh and Kozlov 19 . Those authors managed to reduce the gas problem to a dimer-dimer scattering problem in vacuum, but were unable to express the dimer-dimer scattering amplitude in terms of a single two-fermion parameter. A direct combination of our results with those of Popov, Keldysh and Kozlov allows one to obtain all the thermodynamic quantities of a dilute superfluid resonance gas of composite bosons. Another interesting subject for the application of our results will be a high-temperature expansion for the thermodynamic potential and the sound velocity in the temperature region T ∼ T * ∼ E b , where the composite bosons begin to appear.

IV. PRACTICAL IMPLEMENTATION

Let us now give some details on the way in which we have solved the above equations in practice. We have actually dealt with two problems, the scattering length calculation discussed above and the bound states problem to be discussed below.
Our two problems are quite closely related since, for the scattering length problem, we look for the scattering amplitude at zero outgoing wavevectors and energy for the two dimers, while for the bound states we look for divergences of this same scattering amplitude at negative energy. As already indicated, in both cases the situation is somewhat simplified with respect to the variables we have to consider, due to the specific problem we handle. First, with respect to P = {P, E}, we have P = 0 since we work naturally in the rest frame of the four particles. Moreover, with respect to the total energy, E = −ǫE b is negative. Specifically ǫ = 1 when we look for the scattering length, while when we consider bound states ǫ gives the energy of the bound states we are looking for. Next, with respect to the parameter p 2 ≡ {p 2 , p̄ 2 } which characterizes the outgoing dimers, we naturally have p 2 = 0 as we have said, since we consider zero outgoing wavevectors. Since we evaluate the frequency p̄ 2 on the shell, we have merely p̄ 2 = 0, and this parameter drops out. Hence in the following we no longer write explicitly the value of the parameter P . Both for the scattering length problem and for the bound states problem, we have followed two main routes. In our first route, we have written a specific integral equation for T 4 (p 1 , p 2 ), which is then solved numerically. The details of our derivation of this integral equation are given in Appendix B. The kernel of this equation is itself obtained from a vertex Γ. The defining integral equation Eq. (B2) for this vertex has been inverted numerically, by calculating the inverse matrix, to obtain the vertex Γ(q 1 , q 2 ; p 2 ). We have used LU factorization and Gauss quadrature 20 . The result has then been substituted in Eq. (B1), which gives the kernel ∆ 4 (p 1 , p 2 ) entering the integral equation Eq. (B3). The solution of this last equation is naturally also handled numerically, for example by finding the eigenvalues of the kernel for the bound states problem. In our second route we have kept both functions T 4 and Φ. In the following we no longer write the parameter p 2 , which always takes the trivial value p 2 = 0, as explained above. Hence we are left with T 4 (p 1 ) which, because of rotational invariance, depends only on the frequency p̄ 1 and the modulus |p 1 | of the momentum. For brevity we denote this quantity t 4 (|p 1 |, p̄ 1 ). On the other hand it is shown in Appendix A that, in order to evaluate the second term in the right-hand side of Eq. (9), we need only the evaluation of Φ(q 1 , q 2 ) on the shell, which we denote as φ(q 1 , q 2 ). It depends only on the three variables |q 1 |, |q 2 | and the angle between these two vectors. Hence it is enough to write Eq. (9) only for q 1 and q 2 taking on-shell values. From Eq. (9) this leads to a more convenient equation for φ(q 1 , q 2 ). In the third term of this equation the angular integration can be performed analytically, and one is left with double integrals for the last two terms, in the 3D as well as in the 2D case. It is actually quite convenient, in the last term, to deform the frequency integration contour from ]−∞, ∞[ to ]−i∞, i∞[ by rotating it by π/2. No singularity is met in this deformation, and one is left to deal only with real quantities. The above equation has to be supplemented by a corresponding equation for t 4 (|q|, q̄) obtained from the definition Eq. (8).
The important point is that the additional integrations can be performed analytically, owing to the various invariances under rotations found in the resulting terms. We just give here, as an intermediate step, the structure of the resulting equation, in which α is the angle between p 1 and p 2 , and where S(k, z), I(k, z, p 1 , p 2 , α) and J(k, z, K, Z) are analytically known functions of their variables (except that obtaining J requires a simple integration to be performed numerically, see below). In this equation, and in particular in its last term, we have already gone over to the purely imaginary frequency variable for t 4 . The resulting t 4 (x, iz) turns out to be real and even with respect to z. To be fully specific let us now give the actual self-contained integral equations which we have solved. We restrict ourselves to the 3D case and to the bfbf case (implying α = 1), corresponding to the dimer scattering problem treated 9,10 by Petrov et al. The only generalization is that we keep E = −ǫ|E b |, instead of setting ǫ = 1 as we should if we considered only the scattering length problem. For clarity we write the resulting equations with dimensionless quantities, where 1/a has been taken as the unit wavevector and |E b | = 1/(ma²) as the energy unit. For simplicity we keep basically the same notations for the various variables. We just indicate by a bar over the function name that they are expressed in reduced units, with reduced variables (actually we write t̄ 4 (k, z) instead of t̄ 4 (k, iz), and there is a change of sign between φ(q 1 , q 2 ) and φ̄(p 1 , p 2 )). Equations for the other cases and dimensions are completely similar, with only a few changes in coefficients, in signs (for the particle statistics), in the expression of t̄ 2 (x) and in the explicit functions coming from the analytical angular integrations. We obtain the equation for φ̄(p 1 , p 2 ) with A ± = 2ǫ + p 1 ² + p 2 ² + k² + p 1 p 2 cos α + kp 1 cos θ + kp 2 cos(α ± θ), where α is the angle between p 1 and p 2 , while θ is the polar angle of k with respect to p 1 . Here we have simply set t̄ 2 (x) to its reduced form, and we have also defined an auxiliary function. The corresponding equation for t̄ 4 (k, z) involves ϕ = arctan(k/2) and γ = arctan[4z/(4 + k²)], together with one further explicitly defined function. It is seen from these integral equations for our two unknown functions t̄ 4 (x, z) and φ̄(p 1 , p 2 ) that they require at most triple integrals to be performed numerically. In this sense they are not numerically more complicated than the work involved in solving directly the corresponding Schrödinger equation, as has been done 9,10 by Petrov et al. Indeed these integrals require only a few appropriate changes of variables to take care of singular behaviours occurring on some boundaries. Otherwise they have been performed with an unsophisticated integration routine. In the case of the scattering length a mere iteration algorithm has been found to lead rapidly to the solution (provided an appropriate exact algebraic manipulation is made to make the iteration convergent). In this way we have been able to handle 45 × 45 × 45 matrices (for the three variables entering φ̄(p 1 , p 2 )). This size is large enough to allow improved precision by extrapolation to infinite size, although we have not done it in the present case, but rather for the ground state of the bbbb complex discussed below. This leads to the result a B = 0.60 a F , in full agreement with Petrov et al. 9,10 , within a quite reasonable computing time on a (nowadays) unsophisticated computer. We have not tried to improve on the accuracy of the result, since there is no basic interest in doing so.
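To make the numerical strategy just described concrete, here is a minimal, self-contained sketch of the generic scheme: a one-dimensional integral equation is discretized on a Gauss-Legendre grid mapped to the half-line and then solved by plain fixed-point iteration. The function names (gauss_grid, solve_by_iteration) and the kernel and source are illustrative placeholders, chosen only so that the example runs and converges; they are not the actual dimer-dimer kernel of this section.

```python
import numpy as np

def gauss_grid(n, scale=1.0):
    """Gauss-Legendre nodes on (0, 1) mapped to (0, infinity) by x -> scale*x/(1-x)."""
    x, w = np.polynomial.legendre.leggauss(n)
    x = 0.5 * (x + 1.0)            # nodes now in (0, 1)
    w = 0.5 * w
    q = scale * x / (1.0 - x)      # map to (0, infinity)
    jac = scale / (1.0 - x) ** 2   # Jacobian of the map
    return q, w * jac

def solve_by_iteration(kernel, source, n=45, tol=1e-10, max_iter=500):
    """Solve f(p) = g(p) + int_0^inf K(p, q) f(q) dq by plain fixed-point iteration."""
    q, w = gauss_grid(n)
    K = kernel(q[:, None], q[None, :]) * w[None, :]   # discretized kernel matrix
    g = source(q)
    f = g.copy()
    for _ in range(max_iter):
        f_new = g + K @ f
        if np.max(np.abs(f_new - f)) < tol:
            return q, f_new
        f = f_new
    raise RuntimeError("iteration did not converge")

# Placeholder kernel and inhomogeneous term with a weak, smooth coupling, so the
# iteration is a contraction and converges quickly.
q, f = solve_by_iteration(lambda p, k: 0.3 / (1.0 + p ** 2 + k ** 2),
                          lambda p: 1.0 / (1.0 + p ** 2))
print(f[0])   # value of the solution at the smallest grid point
```

In the actual calculation the unknown carries three indices (the 45 × 45 × 45 grid mentioned above) and the kernel involves the analytically pre-integrated functions S, I and J, but the structure of the iteration is the same.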
In the case of the bound states, to be described below, we have proceeded to a straight diagonalization of the matrix equivalent to the right-hand sides of Eq. (13) and Eq. (18), with the LAPACK library. In the 2D case, it is worth noticing that, because of the logarithmic dependence of t̄ 2 (x) on x, it is quite an improvement to make the change of variables K = √ǫ K′ and Z = ǫZ′, and so on, since the more appropriate variable turns out to be ln ǫ rather than ǫ itself.

V. NEW RESULTS IN THE 2D CASE

We now apply the diagrammatic approach developed in the previous sections (see also Appendix A) to obtain new results for systems of resonantly interacting particles in 2D. As was first shown by Danilov 21 (see also the paper by Minlos and Faddeev 22 ) in the 3D case, the problem of three resonantly interacting bosons cannot be solved in the resonance approximation. This statement stems from the fact that, in the case of identical bosons, the homogeneous part of the Skorniakov-Ter-Martirosian equation (7) has a non-zero solution at any energy. The physical meaning of this mathematical feature was elucidated by Efimov, who showed that a two-particle interaction leads to the appearance of an attractive 1/r² interaction in a three-body system. Since in an attractive 1/r² potential a particle can fall to the center, the short-range physics is important and one cannot replace the exact pair interaction by its resonance approximation. On the contrary, in the case of the 2D problem the phenomenon of the particle falling to the center is absent and one can utilize the resonance approximation 23,24 . Therefore it is possible to describe three- and four-particle processes in terms of the two-particle binding energy E b = 1/(ma²) only (below, for simplicity, we will assume that all particles under consideration have the same mass m). We leave aside the problem of composite-particle scattering and concentrate on the problem of binding energies of complexes of three and four particles. Just as in the 3D problem, the cornerstone of the diagrammatic technique is the two-particle resonance scattering vertex T 2 (see Fig. 1). For two resonantly interacting particles with total mass 2m it reads in 2D as Eq. (21), where we introduce a factor α = {1, 2} in order to take into account whether the two particles are indistinguishable or not. That is, α = 2 for the case of a resonance interaction between identical bosons, while α = 1 for the case of a resonance interaction between a fermion and a boson, or for the case of two distinguishable fermions.

A. Three particles in 2D

We start with a system of three resonantly interacting identical bosons, bbb, in 2D. The equation for the dimer-boson scattering vertex T 3 which describes the interaction of three bosons has the same diagrammatic form as the one shown in Fig. 3; however, there are small changes in the rules for its analytical evaluation. The resulting equation is Eq. (22), in which T 2 is now the 2D vertex of Eq. (21) with α = 2. Let us now consider a complex, f bb, consisting of one fermion and two bosons. As noted above we take bosons and fermions with equal masses m b = m f = m. We assume that the fermion-boson interaction U f b , characterized by the length r f b , yields a resonant two-body bound state with an energy E = −E b . At the same time the boson-boson interaction U bb , characterized by the interaction length r bb , does not yield a resonance.
Hence, if we are interested in the low-energy physics, the only relevant interaction is U f b and we can ignore the boson-boson interaction U bb ; the latter would give small corrections of order |E b | m r bb ² ≪ 1 at low energies. In order to determine the three-particle bound states one has to find the poles of the dimer-boson scattering vertex T 3 . Since we neglect the boson-boson interaction U bb , the vertex T 3 is described by the same diagrammatic equation of Fig. 3 as for the problem of three bosons. The analytical form of this equation also coincides with Eq. (22), with the minor difference that the resonance scattering vertex T 2 now corresponds to the interaction between a boson and a fermion, and therefore we should put α = 1 in Eq. (21) for T 2 . Solving the equation for T 3 we find that the f bb complex has only one s-wave bound state, with energy E 3 = −2.39 E b . Note that a complex bf f , consisting of a boson and two spinless identical fermions with resonance interaction U f b , does not have any three-particle bound states.

B. Four particles in 2D

After solving the above three-particle problems we may proceed to the complexes consisting of four particles. First we consider four identical resonantly interacting bosons bbbb 25 . Any two bosons would form a stable dimer with binding energy E = −E b . We are going to find the four-particle binding energy as the energy of an s-wave bound state of two dimers. Generally speaking, bound states could also emerge in channels with larger orbital momenta; however, this question will be the subject of further investigation. Just as in the preceding subsection, in order to find a binding energy we should examine the analytical structure of the dimer-dimer scattering vertex T 4 and find its poles. The set of equations for T 4 has the same diagrammatic structure as those shown in Fig. 5 and Fig. 6. The analytical expression for the first equation is Eq. (23), and the equation for the vertex Φ is Eq. (24), where T 2 should be taken from Eq. (21) and one should put α = 2 for the case of identical resonantly interacting bosons. When we look for the poles of T 4 as a function of the variable E, with P = {0, E}, we naturally have to consider only the homogeneous part of this equation. We have found two bound states for the bbbb complex. The values of the total binding energy |E 4 | = 2|E| are given in Table 1 below. Certainly, for the validity of our approximation we should have |E 4 | ≪ 1/(mr 0 ²). For the case of four bosons bbbb this means that 197 E b ≪ 1/(mr 0 ²) and hence a/r 0 ≫ √197. This case can still be considered as quite realistic for the Feshbach resonance situation. The case of a four-particle complex bf ↑ bf ↓ , consisting of resonantly interacting bosons and fermions, is still described by the same equations (23, 24) but with the parameter α = 1. In this case we found two bound states, and they are also listed in Table 1. In order to obtain the bound states of the f bbb complex one has to find the energies P = {0, E} corresponding to nontrivial solutions of the homogeneous equation (25); this equation corresponds to the diagram of Fig. 6b. We have found a single bound state for this f bbb complex. Finally we summarize the results concerning the binding energies of three and four resonantly interacting particles in 2D in Table 1. For the bbbb complex we find the beginning of a continuum of states at |E 4 |/E b = 16.5, as it should be, since this is, within our numerical precision, the binding energy of bbb.
Similarly we find the beginning of a continuum at |E 4 |/E b = 2.4 for the f bbb and the bf ↑ bf ↓ complexes, in agreement with the binding energy of f bb. We display our corresponding results in Fig. 7 and Fig. 8. In all our calculations we find numerically, as a function of |E 4 |, the eigenvalues λ corresponding to the matrix on the right-hand side of our equations, for example Eq. (25). When one of these eigenvalues is equal to 1, this means that the corresponding E 4 is the energy of an eigenstate of our complex. In Fig. 7, we display the highest eigenvalues for |E 4 | = 2.4, both for the bf ↑ bf ↓ case and the bbbf case. One sees clearly that a fair number of eigenvalues are essentially equal to 1. One could tune them exactly to 1 by changing |E 4 | very slightly. Hence this corresponds to the beginning of the continuum. By contrast one also sees clearly two isolated eigenvalues larger than 1 for the bf ↑ bf ↓ case, and one eigenvalue larger than 1 for the bbbf case. One can bring them to λ = 1 by increasing |E 4 |, and therefore they correspond to the bound states that we have found. Similarly we display in Fig. 8 the eigenvalues for the bbbb case, for the value |E 4 | = 16.5 corresponding essentially to the threshold of the continuum. Here again one sees many eigenvalues quite close to 1. On the same figure we also show the results of the same calculations for |E 4 | = 22, in order to display the way in which this whole spectrum evolves with |E 4 |. In particular, since one of the isolated eigenvalues is equal to 1, this means that the binding energy of one of the two bound states found in this case is equal to 22 E b , within our numerical precision. Note finally that all our calculations correspond to the case of particles with equal masses m f = m b = m, although they can quite easily be generalized to the case of different masses.

VI. CONCLUSIONS AND DISCUSSION

For the problem of resonantly interacting fermions in 3D we have developed an exact diagrammatic approach that allows one to find the dimer-dimer scattering length a B = 0.60a F , in exact agreement with known results. This exact diagrammatic solution of the dimer-dimer scattering length problem in 3D opens new horizons for the extension of the self-consistent mean-field schemes of Leggett and Nozières-Schmitt-Rink to the inclusion of the quite essential three- and four-particle physics in the two-particle variational wave functions of the BCS type. This in turn will help us to get diagrammatically exact results for T c , the pseudogap and the sound velocity in the dilute BEC limit, and to develop a more sophisticated interpolation scheme for these quantities toward the unitarity limit. The work on this very exciting project is now in progress. We have applied the developed approach to get new results in the 2D case. Namely, we have calculated exactly the binding energies of the following complexes: three bosons bbb, two bosons plus one fermion bbf , three bosons plus one fermion bbbf , two bosons plus two fermions bf ↑ bf ↓ , and four bosons bbbb. Our investigations enrich the phase diagram of ultracold Fermi-Bose gases with resonant interaction. They serve as an important step for future calculations of the thermodynamic properties and the spectrum of collective excitations in different temperature and density regimes, in particular in the superfluid domain.
Note that in purely bosonic models in 2D, or in Fermi-Bose mixtures in the case of a prevailing density of bosons n B > n F , the creation of larger complexes consisting of 5, 6 and more particles is also possible. In fact here we are dealing with macroscopic phase separation (with the creation of large droplets). The radius R N of such a droplet of N bosons in 2D is estimated in 26 on the basis of a variational approach. Note that already for N = 5 the exact calculation of the bound state requires huge computational capability, but it would be interesting to see precisely how this would appear with our approach.

APPENDIX A: DIMER-DIMER SCATTERING EQUATION. FREQUENCY INTEGRATION

In this Appendix we show how one can integrate explicitly over the frequency dependence in the dimer-dimer scattering equation (9) (we consider only this case; the other ones considered in Section IV would require trivial modifications). To simplify further computations we slightly change the notation and introduce a chemical potential µ = −E b /2 and the single-fermion energy ξ p = p²/2m − µ = p²/2m + E b /2, with the modified fermion Green's function G(p) = 1/(ω − ξ p ). In the expression Eq. (1) for T 2 (Q) we similarly have to replace E by E − E b . The integral equation (9) then reads more explicitly, with k = {k, ω} and Q = {Q, Ω}, as Eq. (A1). From this equation Φ(q 1 , q 2 ) = Φ(q 2 , q 1 ), as is obvious physically. Note also that the third term is already explicitly symmetric in q 1 ↔ q 2 . First we note that, from Eq. (A1) itself, Φ(q 1 , q 2 ) is analytical with respect to the frequency variables ω 1 and ω 2 of the four-vectors q 1 and q 2 in the lower half-planes Im ω 1 < 0 and Im ω 2 < 0. This can be seen by assuming this property self-consistently in the right-hand side and checking that the three terms are then indeed analytical, or equivalently one can proceed by a perturbative expansion. Then, if we make the "on the shell" calculation of Φ(q 1 , q 2 ) from Eq. (A1), that is for ω 1 = ξ q1 and ω 2 = ξ q2 , we see that, for the second term in the right-hand side, the only singularity in the lower complex plane Im ω < 0 is the pole of G(k) at ω = ξ k . Hence the integration contour can be closed in the lower half-plane, leading to an explicit expression for this contribution. Here we denote Φ(q 1 , q 2 ) = Φ({q 1 , ξ q1 }, {q 2 , ξ q2 }). The frequency integration of the third term in Eq. (A1) over the frequencies Ω and ω is more difficult, because the singularities are not essentially located in one half of the complex plane, as was the case for the second term. For example Φ(k, Q − k) has singularities in both half-planes with respect to ω, and similarly for T 2 (−Q)T 2 (Q) with respect to Ω. We solve this problem by splitting the involved functions into the sum of two parts, one analytical in the upper complex plane and the other one in the lower complex plane. First we write F (Ω, Q, q 1 , q 2 ) ≡ G(Q − q 1 )G(−Q − q 2 )T 2 (−Q)T 2 (Q) + (q 1 ↔ q 2 ) (we take into account that we want to calculate Φ(q 1 , q 2 ) "on the shell") as the sum F = U + + U − , where U + and U − are respectively analytical in the upper and lower complex planes of Ω. This is done by making use of the Cauchy formula f(Ω) = (1/2iπ) ∮ C dz f(z)/(z − Ω) for a contour C which encircles the real axis (on which F has no singularity) and is infinitesimally near to it. This gives explicit expressions for U ± , with ǫ = 0 + . Making use of F(−Ω) = F(Ω), we find U − (Ω, Q, q 1 , q 2 ) = U + (−Ω, Q, q 1 , q 2 ). On the other hand the last part of the third term is T̄ 4 (Q), the internal loop integral defined below, which satisfies T̄ 4 (−Q) = T̄ 4 (Q).
This can be seen by substituting Eq. (A1) for Φ(Q′/2 + k′, Q′/2 − k′) in this last expression for T̄ 4 (Q′). For the first term contribution, the result is trivial. For the second term, one has to make the shift k → k − Q′/2, and then k ↔ k′. In the third term one has to make the shift k′ → k′ + Q/2 and then k′ → −k′. Then, when we make the change Q → −Q in the third term of Eq. (A1) and use T̄ 4 (−Q) = T̄ 4 (Q), we see that the U − contribution is exactly identical to the U + contribution, and we are left with a single contribution from U − to evaluate. In order to perform the ω integration in T̄ 4 (Q) = ∫ d⁴k G(Q/2 + k)G(Q/2 − k)Φ(Q/2 + k, Q/2 − k), we split Φ(Q/2 + k, Q/2 − k) into the sum of two functions, with Φ + analytical in the upper complex plane with respect to ω, and Φ − analytical in the lower complex plane. That this can be done is immediately seen from Eq. (A1) itself. For the first term we just have to decompose the product of Green's functions into partial fractions, which has explicitly the required property. In the third term we can handle the product of the first two Green's functions in the same way. Finally, in the second term, after performing the ω integration as indicated above (but without taking the "on the shell" values for the frequencies), one sees that the result for the term written explicitly above in Eq. (A1) is analytical in the lower complex plane with respect to ω. The corresponding term obtained by (q 1 ↔ q 2 ) is analytical in the upper complex plane. In each case one checks that the functions analytical in the upper and lower complex planes are related by k ↔ −k, so that Φ − (Q/2 + k, Q/2 − k) = Φ + (Q/2 − k, Q/2 + k). Hence by the change of variable k ↔ −k, the contributions of Φ + and Φ − are equal. We have then arrived, for the calculation of T̄ 4 (Q), at a situation which is similar to the one we met for three particles. Since Φ + (Q/2 − k, Q/2 + k) and G(Q/2 − k) are analytical in the lower complex plane, we can close the integration contour at infinity in this lower half-plane, and the only contribution comes from the pole of G(Q/2 + k). This leads to Eq. (A6), where F(Ω, k, Q) is Φ + (Q/2 − k, Q/2 + k) evaluated for ω = ξ k+Q/2 − Ω/2. An important property, which can be checked on each term contributing to Φ + (Q/2 − k, Q/2 + k), is that F(Ω, k, Q) is analytical in the lower complex plane with respect to Ω. Hence the integration of U − (Ω, Q, q 1 , q 2 ) T̄ 4 (Q) over Ω can also be performed by closing the contour in the lower half-plane, since the only singularity in this half-plane is the pole due to the denominator in Eq. (A6). The contribution of this pole leads to the evaluation of F(Ω, k, Q) for Ω = ξ k+Q/2 + ξ k−Q/2 . Taken with the above definition of F this means that we have calculated Φ + (Q/2 − k, Q/2 + k) for Ω/2 − ω = ξ k−Q/2 and Ω/2 + ω = ξ k+Q/2 , which is just an evaluation "on the shell". Because of the simple relation between Φ + and Φ − the result can be expressed in terms of Φ(k + Q/2, −k + Q/2) itself. Gathering all the above results, we end up with the complete equation (A7) for Φ(q 1 , q 2 ). In this equation we have modified the integration contour in the definition of U − to have it running along the imaginary axis rather than along the real axis, and we have used the symmetry property of F(z, Q, q 1 , q 2 ) with respect to z, together with the symmetry properties of Φ(q 1 , q 2 ), to rewrite the result in terms of the real function

U(Ω, Q, q 1 , q 2 ) = (Ω/π) ∫_0^∞ dy F(iy, Q, q 1 , q 2 ) / (y² + Ω²),   (A8)

which shows that Φ(q 1 , q 2 ) itself is real.
We have made practical numerical use of Eq. (A7) to find, for example, the ground state energy. Although this turned out to be quite feasible, this equation appears finally less convenient than what we have described in Section IV. This was expected, since the solution implies quadruple integrals, instead of the triple integrals we only had to deal with in Section IV.

APPENDIX B: MODIFIED DIMER-DIMER SCATTERING EQUATION

This appendix is devoted to an alternative description of the dimer-dimer scattering process. The purpose is to obtain a direct integral equation for T 4 (p 1 , p 2 ; P ), in a way convenient for numerical calculations. Below we derive such a set of equations, which was used for the practical computations as indicated in Section IV. The first step is to construct for the two dimers a "bare" interaction potential, or vertex, ∆ 4 , which is the sum of all irreducible diagrams, and then to build ladder diagrams from this vertex, in order to obtain an integral equation (see Fig. 9). These irreducible diagrams are those which cannot be divided by a vertical line into two parts connected by two dimer lines. As was pointed out above, the vertex ∆ 4 is given by the series shown in Fig. 4e, since the diagrams of Fig. 4f are by contrast reducible. Again we can eliminate T 3 from our considerations and express ∆ 4 only in terms of T 2 . For this purpose we have to introduce a special vertex Γ αβ (q 1 , q 2 ; p 2 , P ) with two fermionic and one dimer incoming lines and two dimer outgoing lines (see Fig. 10). This vertex Γ αβ (q 1 , q 2 ; p 2 , P ) corresponds to the vertex ∆ 4 with one incoming dimer line removed, in much the same way as Φ(q 1 , q 2 ; p 2 , P ) and T 4 (p 1 , p 2 ; P ) are related in Eq. (8). The difference is that Γ αβ (q 1 , q 2 ; p 2 , P ) is irreducible with respect to two dimer lines while Φ(q 1 , q 2 ; p 2 , P ) is not, in just the same way as T 4 (p 1 , p 2 ; P ) and ∆ 4 (p 1 , p 2 ; P ) are related. The corresponding equation relating Γ αβ (q 1 , q 2 ; p 2 , P ) and ∆ 4 (p 1 , p 2 ; P ) is

∆ 4 (p 1 , p 2 ; P ) = (1/2) Σ Q;α,β χ(α, β) G(P + p 1 − Q) G(Q) Γ αβ (P + p 1 − Q, Q; p 2 , P ).   (B1)

One can readily verify that the diagrammatic expansion for Γ shown in Fig. 11 yields the same series as the one shown in Fig. 4e for the vertex ∆ 4 . The spin part of Γ α,β has again the simple form Γ α,β (q 1 , q 2 ; P, p 2 ) = χ(α, β)Γ(q 1 , q 2 ; p 2 , P ), and the function Γ(q 1 , q 2 ; p 2 , P ) obeys the following equation:

Γ(q 1 , q 2 ; p 2 , P ) = −G(P − q 1 + p 2 )G(P − q 2 − p 2 ) − G(P − q 2 + p 2 )G(P − q 1 − p 2 ) −   (B2)

The minus sign in (B2) is a consequence of the anticommutativity of the Fermi operators. It is clear that Eqs. (B1) and (B2) can be analytically integrated over the variable Ω. Thus the s-wave component of the vertex Γ(q 1 , q 2 ; p 2 , P ) is a function of the absolute values of the vectors |q 1 | and |q 2 |, the angle between them, the absolute value of the vector |p 2 |, and the frequency ω 2 . The s-wave component of the sum of all irreducible diagrams ∆ 4 (p 1 , p 2 ; P ) is a function of the absolute values of the vectors |p 1 | and |p 2 | and of the frequencies ω 1 and ω 2 .
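As a closing illustration of the numerical strategy of Sections IV and V (discretize the kernel of the homogeneous equation on a quadrature grid, diagonalize it, and scan the energy until an eigenvalue reaches 1), here is a minimal, self-contained sketch. The kernel, as well as the names gauss_grid and largest_eigenvalue, are illustrative placeholders: a toy separable kernel is used, chosen so that the exact crossing is known analytically (at E = π² in the units of the example); it is not the actual kernel of Eqs. (22)-(25).

```python
import numpy as np
from scipy.optimize import brentq

def gauss_grid(n, scale=1.0):
    """Gauss-Legendre nodes on (0, 1) mapped to (0, infinity)."""
    x, w = np.polynomial.legendre.leggauss(n)
    x, w = 0.5 * (x + 1.0), 0.5 * w
    return scale * x / (1.0 - x), w * scale / (1.0 - x) ** 2

def largest_eigenvalue(E, n=80):
    """Largest eigenvalue of the discretized toy kernel K(p, q; E) = 2/sqrt((E+p^2)(E+q^2))."""
    q, w = gauss_grid(n)
    u = 1.0 / np.sqrt(E + q ** 2)
    K = 2.0 * np.outer(u, u) * w[None, :]   # separable kernel times quadrature weights
    return np.max(np.linalg.eigvals(K).real)

# Scan the (dimensionless) energy and refine the point where lambda(E) = 1, i.e. where the
# homogeneous equation phi = K(E) phi acquires a nontrivial solution (a bound state).
# For this toy kernel lambda(E) = 2 * int_0^inf dq/(E + q^2) = pi/sqrt(E), so the root is pi^2.
E_bound = brentq(lambda E: largest_eigenvalue(E) - 1.0, 1.0, 100.0)
print(E_bound, np.pi ** 2)
```

The same kind of scan, applied to the discretized right-hand sides of the actual equations, is what underlies the eigenvalue plots described for Figs. 7 and 8.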
Generational Bargain, Transfer of Disadvantages and Extreme Poverty: A Qualitative Enquiry from Bangladesh

Why do the poor stay poor? And, crucially, why are their children likely to be poor and end up poor later in life? This is a familiar question in the fields of development, social policy and economics alike. Bangladesh has seen notable successes in reducing poverty, and yet addressing the transfer of deprivations and disadvantages within and between generations still poses a major challenge for policy-makers. To date, the literature on inter-generational poverty remains dominated by large quantitative panel data. By contrast, this study draws on a unique qualitative dataset of 72 extreme poor households across Bangladesh, examining how inter- and intra-generational bargains generate extreme poverty. It is argued that, while poverty is transferred inter-generationally, it is not transferred equally. Rather, transferred disadvantages are shaped by persistent forms of deprivation, discrimination and a household-level political economy that is highly gendered. The inter-generational transfer of poverty should be seen as a dynamic and negotiated process that is crucially shaped by intra-generational bargains.

Introduction

Pockets of entrenched poverty persist in societies across the world, even those with strong macro-economic growth. This kind of poverty is commonly referred to as 'chronic', 'ultra' or 'extreme'. A distinct feature of such poverty, shared across these definitions, is that it is often inter-generationally transmitted. Although categorical distinctions should not be overstated, given the dynamic nature of poverty and the churning of the poor's fortunes, experiences of extreme poverty share characteristics of social isolation, exclusion from support networks and a prominence of female-headed or managed households. There is a concerted effort to improve the quality of poverty measurement in most of the Global South, yet the processes through which extreme poverty is transmitted inter- and intra-generationally remain under-analysed. In Bangladesh, for example, despite remarkable achievements in the Millennium Development Goals (MDGs) and sustained economic growth, around 20 million people (12.9%) are still living below the poverty line (BBS 2017). Furthermore, in terms of absolute count, the number of extreme poor will likely increase in the coming decades, even though the percentage-wise share may go down, as the rate of progress has stagnated or slowed (Roser and Ortiz-Ospina 2013). Such statistics raise the question: why do some people stay poor? Understanding why the children of poor people are also likely to be poor, or to end up poor later in life, is then a crucial concern for policy-makers. The present work examines these questions drawing on a qualitative longitudinal dataset compiled by the EEP/shiree 1 programme. The data were collected annually from 72 extreme poor households from 2008 to 2016. The core analysis focusses on how inherited material deprivations (namely assets and inheritance, health, education and skills) often translate into relational disadvantages that play a key role in 'trapping' the extreme poor within exploitative relationships over long periods of time. It is argued that the inter-generational transfer of poverty is shaped by intra-generational bargaining within the household, which has a significant and gendered impact on the reproduction of poverty between the generations.
This argument has strong implications for development programming and poverty reduction strategies, promoting a view of the extreme poor as relationally as well as materially poor. The remainder of this manuscript is organised as follows: the first section reviews the existing literature on extreme poverty and inter-generational poverty, not in an attempt to be comprehensive but to deploy a useful analytical framework that will organise the later empirical discussion of the inter- and intra-generational transfer of extreme poverty in the studied population. The second section introduces the methodology and data on which the analysis is based. The third section analyses how inter- and intra-generational extreme poverty occurs and persists, and attempts to draw lessons for development programming.

Poverty and Extreme Poverty in Development Thinking

Poverty reduction is a global priority for development organisations, embodied in the MDGs and, more recently, in the Sustainable Development Goals (SDGs). Even in high-performing economies, a significant minority of the population remains extremely deprived or poor (Green and Hulme 2005). The presence and persistence of 'traps' keeping people in poverty across the world has led to conceptualisations of chronic/extreme forms of poverty as distinct (Shepherd et al. 2011). Development programmes led by governments and donor agencies have designed an array of approaches to alleviate poverty, often targeting those deemed the most 'extreme'. 2 Such approaches persistently rely on a finance-centric understanding of poverty, focussing on incomes, expenditure and consumption, variables deemed more measurable and trackable in terms of long-term and spatial trends and amenable to national and international comparisons.

1 EEP/shiree was a poverty reduction programme implemented in Bangladesh in partnership with the UK Department for International Development (DFID), the Swiss Agency for Development and Cooperation (SDC) and the Government of Bangladesh (GoB). The programme was widely known as 'Shiree', the Bangla word for 'steps'; the name is an acronym for 'Stimulating Household Improvements Resulting in Economic Empowerment', which reflected the programme's core approach: to provide households with the assets and support needed to take enduring steps out of extreme poverty. See www.shiree.org for further details.

2 The definition of extreme poverty differs and, in the context of a development programme and the SDGs, commonly refers to people with incomes below a certain threshold that is adjusted to the particular national context and is designed to reflect the capacity to achieve a minimum set of basic needs.
Alongside dominant material approaches to conceptualising poverty are the alternative understandings, which incorporate the immaterial and relational dynamics of poverty and its reproduction (Wood 2003;Mosse 2010). Moving away from materialistic understandings of poverty, Sen's ideas of freedom and capabilities for example places the question of absence of entitlements at the centre of poverty analysis (Sen 1981(Sen , 1999. He argues that poverty is composed of social relations enabled by sets of entitlements (including nutrition, health, education and self-respect), which are contingent on specific social and cultural configurations (Chiappero-Martinetti and Moroni 2007). Similarly, analyses of adverse incorporation and social exclusion explicitly link individual situations of poverty to the broader social, political and political-economy context and thereby can generate a thick understanding of how poverty is reproduced at the household level (Hickey and du Toit 2013). A common denominator to such approaches is the measuring of deficits (be it related to assets, knowledge or capacities) and deprivations within broader relational and temporal frameworks. Extreme poverty within this perspective must thus be understood within a political-economy context that inhibits, exploits and differently enables the capacities of the extreme poor. Beyond a definition of extreme poverty as a level of consumption below an arbitrary threshold, more qualitative understandings see it as deeply entrenched within and inherited across generations. 3 Examining the reproduction of poverty necessitates analysing the dynamic interaction between systemic and idiosyncratic risks. The distinction between systemic and idiosyncratic dates back to Kanbur (2001) and provides a helpful analytical lens through which poverty can be explored. Systemic routes to poverty are typically characterised by hazardous and precarious engagements with the labour and petty commodity markets. Although it can be argued that the 'moderate poor' endure more systemic barriers whilst being vulnerable to idiosyncratic shocks, "the extreme poor may experience, idiosyncratic poverty and poverty relating to horizontal inequalities, but are highly vulnerable to systemic, relational poverty whenever their livelihood depend directly upon petty commodity or labour markets" (Wood 2018, p. 8). In other words, as the extreme poor work for the betterment of their condition, they move from a condition of social exclusion to adverse incorporation, facing more prejudice and discrimination than their moderate poor counterparts. Yet, much of these idiosyncratic and systemic forces that result in the exclusion or exploitation of an extreme poor household should be understood in the light of previous experiences and processes of exploitation and exclusion that are specific to this household, two compatible phenomena. Understanding Generational Transfers of Disadvantages For the vast majority of the extreme poor, social exclusion and deprivation are not a sudden occurrence. They work hard relentlessly to prevent and cope with vulnerabilities and risks brought on them by systemic forces that hamper their efforts (Hulme and Lawson 2010). The transfer of deprivations and disadvantages taking place within and between generations poses some challenging societal questions about inequalities (Musick and Mare 2004). 
Although research on inter-generational poverty is often dominated by large quantitative panel data, a resurgence of scholarly interest in exploring how poverty is reproduced inter-generationally took place through the 2000s. 4 This is often framed as a 'transfer' between generations, which typically occurs within the boundaries of the household and relates to the accumulated set of exposures to opportunities, constraints or disadvantages during the parents' life cycle that is inherited by the child. Although part of this transfer process can be considered conscious and intentional, a large part of it is less palpable and deliberate. As stated above, a salient feature of inter-generational poverty is that it describes a time-based process (Bird et al. 2010) which extends beyond the life of an extreme poor household to its offspring. Therefore, inter-generational poverty represents not only an outcome, children's poverty, but also a cause, the poverty of older persons and of previous and future generations. This also indicates that a person's ability to move upwards over his/her life is determined by diverse household-based, community or wider external factors that are out of their control (Cooper and Bird 2012). Both positive and negative resources can be inherited from previous generations, an inheritance package that combines assets, reputation, values and aspirations, labour arrangements, health, religion and traditions (Harper et al. 2003; Musick and Mare 2006). Poverty passes from one generation to another due to a lack of inputs, resources and capitals, along with poor health, forced child labour, inadequate access to education, low self-esteem and inadequate social status (Moore 2001). Critically, the transmission of resources from parents to their children is not a static process but is often bargained between and across generations, within families and relations. McGregor et al. (2000, p. 447) defined inter-generational bargains as a pattern of relationships through which generations transfer resources that come with uncodified rights and obligations. Inter- and intra-generational bargains can be associated with the transmission of disadvantages, liabilities and inequalities between generations (Anderson 2013), while also representing a means by which one generation makes sacrifices and strategic decisions for their children. Collard (2001) argues that a transfer of resources (both material and non-material) from one generation to the next is very common in their life cycles, and that the process is very complex and often reciprocal; for instance, a child is dependent on his/her parents, who hold the bargaining power and authority until a power shift occurs when children become earning members of the family and their ageing parents start relying on them (Kalenkoski 2008). Thus, the inter-generational bargain influences each party in different ways. Education, for example, is often thought of as an enabling factor for long-term security and higher wages for poor households and their offspring (Bird et al. 2010). The motivation for sending children to school depends on the perceived benefits of formal education, which has important implications in transferring prospects or constraints for children to overcome their poverty. Children of uneducated parents find it difficult to continue their education (Behrman et al. 2017).
It has also been argued that it is the education rather than the economic status of parents which is more important in determining whether the children of the household go to school (Horii and Sasaki 2012). In extreme poor households, choosing to send children to school is influenced by a calculus of the perceived losses (loss of labour and schooling-related costs) against the possible future benefits of enhanced earnings. The bargaining process is often highly gendered. A woman's position in her family and in society (in terms of the gendered differences in exercising rights, the nature of obligations and the capacity to access opportunities) plays a key role in determining the next generation's fortune; for example, research conducted in Bangladesh found that many extreme poor female-headed households place their children in orphanages to cope with food insecurity, a group of children one author termed 'food orphans' (Akram 2018). Another example concerns the importance of women's good health during pregnancy. Extreme poor pregnant women who have had and continue to have a poor diet during pregnancy and receive inadequate antenatal care are more likely to give birth to children who experience congenital malformations or sudden infant death syndrome, or who become under- or malnourished (Smith and Ashiabi 2007). For breastfed babies, poor nutrition in early life can have lifelong consequences, including stunting, limited body development or delayed cognition, which in turn can lead to low working and earning capacity (Martorell and Zongrone 2012; Caulfield et al. 2006). In all those cases, it is worth reflecting on what the future of those children could look like. The answer in many cases would be 'poverty' or even 'extreme poverty'. Inter-generational influences are thus so strong that they produce an 'inter-generational cycle of growth failure', also referred to as consistent growth failure by Martorell and Zongrone (2012). The concept of a 'generational bargain' is used herein to refer to a person's inheritance of assets, skills, opportunities and vulnerabilities that typically shape their identity, defined herein as the nature and terms on which it is possible for them to interact with the outside world and their capacity to improve their condition within it during their life course. This reflects the bargains and decisions made by households and between generations but is also conditioned by the political economy of the household. In examining inter-generational poverty through this lens, the intention is thus to contribute to the growing interest among policy-makers, academics and development researchers in understanding how inheritance influences one's life chances, and to explore how this transfer influences people's capacity to escape from or remain in extreme poverty across generations. Understanding this may prove crucial to designing effective extreme-poverty-alleviation interventions, and yet academic scholarship exploring the patterns and processes through which household dynamics, reciprocal exchanges and bargains within and between generations shape experiences of extreme poverty remains scarce.

Data

Inter-generational poverty has mostly been understood through quantitative panel datasets, which arguably often neglect the wider context and processes through which such transmission occurs and the dynamism of this through people's lives (Heissler 2012).
By contrast, the present analysis draws on the baseline of a secondary qualitative longitudinal dataset developed as part of the monitoring and evaluation component of an extreme poverty reduction programme. Analysing this dataset enabled the authors to examine the multiple processes through which inter- and intra-generational bargains occur within extreme poor households. The qualitative dataset used tracks the livelihood trajectories of 72 extreme poor households over a 5-year period, from 2010 to 2015. The sampling of the 72 households was done purposively to cover the wide range of interventions taking place as part of the EEP/shiree programme (implemented by 17 NGOs) and therefore attempts to capture a wide range of experiences of extreme poverty. Geographically, the present study covers multiple regions, with 25 households living in coastal areas, 25 in Northern Bangladesh (including the monga-prone areas characterised by seasonal food insecurity, the char and haor areas, and the Santal community, an ethnic minority of the north, in the Barind region), 12 in the Chittagong Hill Tracts (CHTs) and 10 in urban slum settings. The number of times each household was visited varied (32 households were visited four times, 39 households three times and 1 household twice). This variation can be explained by the two-stage recruitment process of the extreme poverty intervention: the second cohort of 40 respondents was recruited 2 years after the first cohort of 32 respondents (2012 and 2010, respectively). The 72 households were repeatedly interviewed by research officers (ROs), mentored by a team of senior researchers from the University of Bath, UK. Typically, one RO was assigned six beneficiary households. Repeated visits over several years thus enabled ROs to develop a strong personal rapport with each of them and provided ROs with detailed contextual knowledge about the households. In this way, information was collected in a semi-ethnographic manner. As part of the programme, two different tools were used to track the households: baseline life histories and 'reflection on interventions'. For the purpose of this analysis, only data derived from the life histories were considered, as the aim was to keep the analysis clear of the EEP/shiree intervention's effects on the beneficiaries' lives. The life histories contain two elements: (1) a detailed narrative of crucial events that had affected the life of the extreme poor beneficiary, in terms of wellbeing level and poverty status, since their birth and prior to the intervention; and (2) a map illustrating the self-reported variations in wellbeing status and events on a timeline. Six categories were used in the life histories to determine wellbeing statuses: destitute, working extreme poor, moderate poor, lower-earning non-poor, middle elite and wealthy elite (Table 1), a framework adapted from Davis (2005, 2006, 2007). The life histories enabled the examination of detailed information relating to a wide range of topics, from health to diet and from migration to marriage, politics, environmental changes and deeper personal reflections on empowerment and the 'self' (Goto et al. 2011). From fieldwork preparation to transcription, each life history required an average of 15 working days, given that the data collection for each of them required between three and five consecutive visits to the household (and in some cases phone calls to fill in remaining gaps). The process engaged the participants as co-producers of knowledge.
Once each script was finalised, the participants were consulted to validate the information gathered. The life history dataset was then manually coded by the authors and analysed for the purpose of this particular analysis to shed light on inter- and intra-generational bargains.

Characteristics of the Sample

Although the sampled households are all categorised as extreme poor, they are distinguished by a wide range of demographic characteristics. Most of them (82%, most of whom are women) report having experienced poverty in their childhood (belonging either to the level of destitute, working extreme poor or moderate poor). Only 13 respondents categorised themselves as having belonged to the level of lower-earning non-poor. None of the respondents reported having experienced middle elite or wealthy elite wellbeing status during their childhood. Demographic characteristics such as disability, old age, female-headedness and female-managed households distinguish the 72 households. Of the total respondents, 52 were women of female-headed and female-managed households and 20 were men from male-headed households, with an average age of 39 years overall. The large majority (76%) of them had no formal education, 11 of them had had primary-level schooling and only 6 of them had high-school-level schooling experience.

From Deprivation to Discrimination and Disadvantage: Inter- and Intra-generational Dimensions of Extreme Poverty

The life histories reveal that, as diverse as personal experiences of extreme poverty can be, the extreme poor share a common sense of disruption in accumulating skills and assets (and struggle in protecting them), erosion of supportive social connections and networks and the feeling that their aspirations and dignity have been crippled. Respondents typically construct and make sense of their life stories by explaining how they became excluded from rights and unjustly lost their entitlements, and how they became unable to sustain a living as working-age adults and fell into extreme poverty as they were unable to rely on others. Multiple dimensions and issues emerged from the analysis of the longitudinal qualitative data. This section is organised to reflect how respondents themselves made sense of their life experiences and of how and why their bargaining power weakened, leading them to a point of absolute deprivation. To reflect this notion of 'time' and personal experience, fundamental to exploring the bargaining process, the data analysis is organised as follows: the first section looks at material deprivations, often stemming from systemic failures, as a starting point in the process; then how these deprivations became the basis for discrimination is examined; and finally how lingering discrimination or exclusion transformed into idiosyncratic risks and inherited disadvantages is studied. How this transfer takes place is explored through the dimensions that emerged most frequently from the data: assets, health and nutrition, education and skills, relationships and inter- and intra-generational bargains.

Owning and Inheriting Assets

The successful accumulation and transfer of assets through inheritance represents a significant aspiration in the life of extreme poor households, especially because limited assets are, or will be, owned by multiple members; for instance, an extreme poor family owning a small three-decimal plot of arable or homestead land may have three children who are its future possible inheritors.
The data show a strong gender differential treatment takes place during these processes and bargains, notably in the transfers of productive assets. Contrary to the impression that male inheritance is clearly defined, in rural areas, when members of households become incapacitated or die, productive assets (e.g. land, livestock, or pieces of equipment) are often shared on an ad-hoc and unpredictable basis across male siblings. This is a crucial time when female and younger or 'weaker' siblings (weaker in terms of education, potential earning capacity and general bargaining power) become deprived in comparison with their elder or stronger male counterparts who mainly influence or initiate decisions in ways that favour them. Although religion plays a structuring role in conceiving this transfer, in practice culture is found to supersede specific religious principles. Customary practices are generally shared across households living in the same locality regardless of their religious beliefs (although with some exceptions in the CHT). In extreme poor households, transfers from parents to their daughters occur mostly at the time of marriage under the form of gifts or dowry. Women are often excluded from asset ownership and inheritance because dowry is considered to be their only entitlement. In most cases, women reported having been deprived of both their inheritance and their control over marriage dowry. This occurs through diverse means; For example Halima (46) was required to marry a disabled man so her brother could avoid giving her dowry. She neither inherited land from her family nor received a share of the 200 betel nut and coconut trees owned by her father. Furthermore, the data strongly indicates that even the few women who inherit or accumulate assets by their own means struggle to fully control them and often are excluded from being consulted about their use, sale or transfer; For example, when Marium's (32) husband died in a road accident, she and her two children were not allowed to control the BDT 30,000 (approximately USD 350) compensation received from the transport owners association. Her father-in-law took control over the money arguing that her husband had overdue loans, which Marium was not aware of. Rafeza (40) was married when she was only 15. Her brothers decided to marry her off to a much older man who was on low income and who was already married. This was done so that they did not have to pay the dowry to the groom or sell the land that they would have inherited later. In some cases, however, for working men, inheriting assets can in fact limit mobility. Although migrating entails high exposure to hazards and risks for the working extreme poor, especially during the first few months, 11 many respondents reduced their vulnerability and improved their wellbeing by using it as a strategy to access employment or cash-based wages, while leaving their families in the village. In such contexts, the inheritance of a small piece of homestead land restricts one's ability to migrate, as it brings expectations to remain in a natal village to take care of the inherited land, although there could be better economic opportunities elsewhere. A large number of respondents reported that their comparative lack of assets, skills and knowledge restricts their access to cash-generating activities on the labour market. 
The terms of their engagement with it are often limited to in-kind payments and reliance on arrangements such as live-in and bonded labour and the advance sale of labour, often characterised by precarious work, dominating patronage structures and low wages. These types of access to labour certainly benefit the extreme poor in giving them means of subsistence, but they also usually bring complex long-term relationships and liabilities. Because of these arrangements and obligations, when experiencing a shock or hazard, the extreme poor face periods of severe hardship induced by an immediate loss of income, often combined with negative coping strategies which include the distress sale of assets, the use of children for waged labour or caring duties, borrowing money (at high interest rates) and the reduction of food consumption and quality. When prolonged, these have damaging, snowballing long-term effects. The future is discounted against, a process which Wood (2003) termed 'the Faustian bargain'; for example, sons are pulled out of school because employing their time for education is an immediate expense that, compared with their possible earnings, is not affordable to parents. Because young girls have fewer opportunities to engage in paid work, they are often pulled out of school to enable their mothers to work whilst care and house-chore duties are performed by their daughters. The gradual depletion of tangible and intangible assets further narrows the households' opportunities for choice and weakens their social status. Protecting accumulated and earned gains becomes a struggle, and the prospect of improving the household's immediate wellbeing an unattainable aspiration. Parents become concerned with preserving their children from inheriting the disadvantages they have themselves inherited; for example, the data show that young girls (between 12 and 18 years) are pulled out of school to get married so that the dowry remains relatively low, particularly when the young girl is married off to a significantly older man or a man with some form of disability or sickness. The strategic use of children as instruments in households' livelihoods often reflects severe forms of household destitution and gender discrimination, in this instance perhaps placing the value of women's and girls' future wellbeing below the value of their immediate wellbeing. Female-headed and female-managed households included in the dataset linked their overexposure to such arrangements with gender-based discrimination on the labour market (their involvement in precarious, hazardous, low-paid forms of employment) and their related limited access to and control over assets, usually explained by their limited contribution to the household's finances. Thus, in female-dependent households, children were found to face more disadvantages compared with the children in male-dependent households. Issues of asset depletion, gender discrimination and labour disadvantages were a significant transfer observed across all geographical sites. However, extreme poor households living in the CHTs experienced a comparatively lower level of intra-household tension over the inheritance of productive assets. This might be because land is the most crucial productive asset in extreme poor families, and in the CHTs land rights are communal rather than individual (Roy 2000; Chowdhury 2008), regulated by the circle chief, headmen and village headmen.
Poor Health and Nutrition

Health is a major driver of extreme poverty, playing a significant role in determining the fortunes of the households studied. There is strong evidence that health-related disadvantages can be transmitted from one generation to the next. Poor health erodes not only the minimum assets of the households but also their future prospects and ability to work and earn, generating the type of negative coping explained above. Problems such as ulcers, acidity, chest pain and headaches were common and cost respondents workdays, while prolonged periods of hunger or malnutrition coupled with intense physical labour resulted in long-term poor health for the extreme poor. Aside from being a negative outcome of a coping strategy, poor health often presents a basis for extreme poor households to be discriminated against and, the analysis suggests, becomes an inherited disadvantage. The data show that the presence of infectious and communicable diseases (e.g. asthma or tuberculosis) passed from parents to children among extreme poor households resulted in prolonged periods of sickness, loss of workdays and, in many cases, permanent disability or the death of family members. Highly communicable diseases such as tuberculosis are heavily stigmatised, which results in the social isolation or exclusion of households. Extreme poor households spend a proportionately large share of their income on accessing low-standard health services. A significant number of extreme poor households reported having developed long-term illnesses as a consequence of ignoring a particular health concern at its early stages. Postponing treatment or reluctance to create new expenses (often in the case of women members of the family) causes minor and curable illnesses to transform into life-long diseases, disability and prolonged periods of sickness or death. The fact that extreme poor households lack information about these diseases or have limited access to (or trust in) effective means of prevention and treatment can exacerbate the impact of the disease, enabling its spread to other family members or the worsening of their condition; for example, Mong (34), who lived in the CHTs, experienced multiple health shocks that reduced his ability to work. He visited a number of healers, ranging from herbalists and faith healers to medical doctors. Mong explained that he had no choice but to sell his labour in advance as well as borrowing from moneylenders at high interest rates to meet the costs of treatment. Although health-related concerns are common across all the life histories, respondents located in the CHTs tend to suffer most from them compared with households living on the plain land. This can be explained by the remoteness, inaccessibility and discriminatory practices that characterise public health services, often operated by Bengalis. Prolonged care-seeking processes often aggravate patients' conditions. There is evidence from the life histories that the diseases and nutritional deficiencies of parents (particularly mothers) contribute to the deaths of extreme poor children. There are frequent examples of stillbirth and child death, and also of extreme poor children born with weak cognitive and nutritional status and lifelong disadvantages, for example disabilities, because of the lack or absence of access to antenatal care and/or the malnutrition of mothers. Poor housing and unhygienic living conditions combined with poor nutrition negatively impact on the development (cognitive and physical) of children.
A lack of cognitive skills (self-reported) was apparent among household members and restricted their work capacity, feeling of self-worth and ability to maintain and build supportive inter-personal relationships and networks. Disabled and older household members are the most vulnerable sub-groups within the extreme poor population, living with poor access to amenities, care and livelihood options, and a feeling of exclusion.

Interrupted Education

The data collected allow only a limited view of the impact of education on livelihoods (given that the life histories have been tracking livelihoods for 5 years). Nonetheless, analysing parental decision-making processes in relation to education, according to parents' own background and experience, provides a lens through which to see the perceived beneficial outcomes and costs of schooling in the inter-generational bargain. For extreme poor families, formal education often does not provide enough short-term incentives when compared with other income-earning and non-income-earning activities. In fact, it often represents significant opportunity costs, as children's engagement in remunerated work and children's role in unpaid care work are considered productive in the sense that they enable parents to work outside the home. However, the data indicate that the financial capacity of extreme poor households is not the sole reason for not enrolling children in school or for them dropping out of school. Parents' perception of education plays an important role in children's schooling, more so than their financial capacity to provide it. These perceptions are often influenced by the visible wellbeing and wealth outcomes of educated people in their immediate kinship group. Therefore, parents who had received some form of formal education tend to enrol their children in school more than parents who received no formal education; for example, Bidhan (37) reports that he was never encouraged to go to school because his father believed that a barber's son needed no education. Rather, he was trained as an assistant at his father's barbershop from the time he was only 10 years old. This was a norm and was believed to be more beneficial for the family. Similarly, respondents portrayed teachers as discriminating against their children in school due to prejudice, which either demotivates children from attending school or interrupts their learning motivation. Choices concerning formal education are often ridden with gendered considerations. In extreme poor households, girls' access to formal education is determined by a specific set of factors. Grandparents or other older family members often discredit the need for young girls to be educated as, they believe, it will risk delaying their marriage. For many of the girls, being able to read holy books (e.g. the Quran in Muslim families) is perceived to be sufficient education and deemed more valuable than formal schooling. The present analysis partly confirms Horii and Sasaki's (2012) finding that households rationally anticipate that the benefits from education for poor children and their families will be low, because children will not be given access to well-paid jobs and will remain discriminated against because of their socioeconomic background. However, the data also show that children who have had some level of education tend to have access to supportive social networks that generate benefits for the entire household.
These can help the household cope with shocks and hazards and also contribute to its overall security and status; for example, schooling up to grade five gave Abida (27) the strength to confront adverse situations in her life (avoiding multiple abusive relationships on her own), to take on the challenge of working outside the home, to continue her children's schooling and still to look to the future for her children with courage and confidence. Respondents reported that social safety nets and social protection schemes such as conditional cash transfers and stipends for girls' education often incentivise parents to enrol children, particularly girls, in school; for instance, Fahmida (48) enrolled her eldest daughter in a local government primary school in 1994, when she got to know that the family would receive 10 kg of rice or wheat per month as an incentive. Her son completed his primary education in 2004, but Fahmida could not continue her son's schooling because there was no such stipend or conditional support for high-school male students. Fahmida's eldest daughter, Morzina, had to leave school after she completed the eighth grade, as the allowance did not continue thereafter. This lack of formal education or interrupted education can generate a form of stigma in children's circles and form a basis for short-term discrimination as well as longer-term disadvantages on the labour market.

Intra-household Relationships

A crucial dynamic in the transfer of poverty between generations is, as already indicated above, relationships. The data collected strongly point to relationships and networks being a significant determinant of the fortunes of households, and one that is certainly inherited. The narration of the extreme poor respondents' lives appeared dominated by stories of new, broken, mended and weakened relationships formed through labour, kinship, patronage, politics and marriage. It appears that the extreme poor rely on multiple but fragile inter-personal arrangements that allow them to live and maintain their livelihoods. The data particularly reveal how intra-household relationships affect a household's livelihood and wellbeing. This section therefore focusses on exploring their significance for inter-generational transfers. The life histories of the households studied demonstrate that multiple marriages, abandonment and divorce make women and children vulnerable to becoming extreme poor. They often play into pre-existing patrilineal and patriarchal power dynamics and reinforce asymmetries through the allocation of resources, responsibilities and obligations. Meeting the latter is found to generate high levels of stress and generally low self-confidence. After marriage, brides are often found to be the weakest and most vulnerable household members with the least bargaining power, until they become mothers (particularly if their child is male). Dowry is a means to secure a respectable position for the girl in her in-laws' house. The groom's family would normally demand a higher dowry for an older bride than for a younger one. A bride's position (and confidence) in her in-laws' house often depends on the dowry and marriage gifts the family received from her natal family. When negotiating dowry, many discussions usually take place between the two parties, yet the woman has little say in those negotiations. For many of the respondents, giving birth to a boy brings much pride and prestige.
Such pressures can lead to poorly spaced pregnancies and women's reduced agency over their bodies (food intake, pregnancies and household chores). For example, after Jalmai's (30) father passed away, her brothers inherited 93 decimals of land to which Jalmai was entitled. Jalmai's brothers said that they were sympathetic towards her, but she had no rights over the land. She did not receive any share of her father's property and had no education either. She was married off at the age of 13. The marriage met the interests of her brothers and prospective in-laws. Her brothers sought a man who would demand little dowry. Jalmai's husband was sick and died a few years after their marriage. Since then, she has had no support and had to endure abuse from her in-laws, making her children suffer a childhood of extreme poverty and destitution. The advantages of living in joint or extended families have long been recognised. One of the main advantages, as pointed out by Collard (2000), is that the extended family can provide insurance against some kinds of risks such as illness, shocks and bereavement. The data indicate a relationship between the gradual erosion of the joint family structure and extreme poverty; For example Sajida (50) describes falling into extreme poverty when she separated from her in-laws' house. She took a loan from a micro finance institution (MFI) to buy a van for her son and sold her land for her daughter's wedding and dowry. Later, her sons deserted her and she was left destitute. In cases of re-marriage, men either leave their family with no income security at all or they maintain two or more marriages/households, which leads to reducing the scarce resources being allocated to the first marriage and its children. The data show that young men often re-marry for dowry purposes or when the economic and noneconomic value of their earlier marriage depletes. This often occurs when first wives do not give birth to a son, cannot become pregnant at all or have not met dowry expectations. This can create conflict within households and impact on the mental health and wellbeing of the first wife and of the children of the first union. When widowers re-marry, children from the first marriage often reported having experienced abusive relationships with their step-mothers. Community-based relationships the extreme poor have are often limited but central to their search for security. Short-term support during a period of transitional hardship can have a profound impact on their wellbeing in the long run, the data show. Supportive and resourceful social networks can help them access salaried and skill-based work that, in turn, help overcome the inter-generational disadvantages of extreme poverty. Yet, disabled and old-aged individuals in poverty generally lack such social networks. Baulch and Davis (2008) argued that long-term declines in wellbeing were caused by intermittent crises of the households studied. They continued to argue that crises in households turned into a severe decline in wellbeing when two or three shocks occurred within a short span of time, including high health expenditures, dowry or wedding expenses. The present study's data shows how disability, illness and/or subsequent death of the main income earner of a family create a situation where another family member must take charge. In a large extreme poor family with many siblings, both inter-and intra-generational bargains can be intense and a great source of anxiety, stress and conflict. 
In this regard, relationships break or erode, re-form or revive through various trade-offs towards an individual search for security.

Inter- and Intra-generational Bargain

The data indicate that senior members and those perceived as powerful and knowledgeable (in terms of education, networks, skills and physical ability) often abuse younger and more vulnerable members in this bargain and pursue their own interests. This contributes to keeping the poor and the most vulnerable in poverty (or pushes them further into extreme poverty) and keeps the better-off in a position of power (exercising their agency). The relationship amongst siblings often erodes as a consequence; for example, the illness and subsequent death of Mintu's (60) father not only left him, while still very young, without any income security but also allowed his elder brother to sell all the family's land and keep the money. Being only 13 years old at that time, Mintu could not really understand what was happening and was left alone with his disabled mother, surviving through child labour. Another example is Jebunnesa (32), who was married very young and who, after the death of her husband, returned to her father's home to live with her son within the extended household. When her son was ill, her brothers pretended not to have any money, ignored his illness and refused to take care of him in any way. She said that, whenever they go to the market and purchase chocolates for their children, they do not bring any for her son, and their children will not share food with him, although they live in the same house. This makes her and her son feel neglected. Her brothers see no reason to give her anything. They are angry with her because their father sold his (their future) land to get her married (dowry). Although she never benefited from the dowry, she said, she has to bear the responsibility and be blamed for it, and so does her son. In general, the findings point to a trend similar to that identified by Dorward et al. (2009), who distinguish three broad types of strategies among poor households, namely 'hanging in', 'stepping up' and 'stepping out'; most of the extreme poor households in the present work remain 'hanging in' in the face of adverse socio-economic circumstances, experiencing a high level of transferred disadvantages. Erratic forms of crises were found to affect individuals' ability to cope and their offspring's future possibility of relying on supportive relationships to avoid and escape poverty.

Discussion and Conclusions

This article examines the role of the generational bargain and transfer in producing extreme poverty. Denied access to, or the depletion of, material assets and capitals has been found to be a recurrent determining factor shaping discrimination. The latter drives a gradual erosion and narrowing of social networks. The wearing-off of supportive relationships has multiple idiosyncratic and systemic pathways of impact on the poor's wellbeing and on the future liabilities and disadvantages carried by their children (including reputation, ill-health, labels, exclusion and loans). The understanding of extreme poverty that is developed is then one in which this notion of the bargain is central, seen as a dynamic and relationship-laden phenomenon formed by a set of disadvantages that severely limit individual choice and the prospects of securing arrangements or earned gains that could support the social and economic mobility of the poor and their children.
High degrees of inter- and intra-generational bargain not only impact on children's wellbeing but determine their access to future opportunities. The data show a clear trend of poor households' family members adopting coping strategies that put the wellbeing of the children, women, older persons, disabled or ill members of the family at risk by imposing unfavourable arrangements on them (labour, care work, the grabbing of land or assets, and marriage). This form of unequal distribution, abuse and exploitation generates a type of poverty that is extreme in that it limits the employment opportunities of the children, creates a basis for discrimination and forms a long-lasting disadvantage that will affect their life fortunes. It points to negligence within the family, where some members secure their own livelihood at the cost of others'. Thus, a family can appear as a 'double-edged sword' which can both offer support and undermine individuals' capacity and prospects (Hulme 2004), a role that, despite being so important, has never been "adequately recognized in contemporary analysis (i.e. thinking small)" (ibid, p. 173). A primary conclusion drawn from this analysis is that extreme poverty is not only material but also, to a large extent, relational, which greatly constitutes and exacerbates the overall experience of poverty and wellbeing. For Green and Hulme (2005, p. 9 of web copy), the extreme poor "are structurally constrained by the social relations which produce poverty effects", and they thus further argued that the inter-generational transmission of poverty is an outcome of unequal social relations. This process of negligence leads to a special type of poverty that Ci (2013) termed 'status poverty'. For Ci, subsistence poverty is practically the most urgent but, in principle, the least important and comparatively easily resolvable. What makes subsistence poverty distinct is the type of social relations a household has access to, as elaborated above. While status poverty can make subsistence difficult or impossible, it also disallows the person from participating in a range of social affairs, which forms the basis of social capital and respect or dignity (Ci 2013). Scholarship on inter-generational poverty focusses mostly on analysing the process of transfer from an older generation to the next (Harper et al. 2003; Cooper and Bird 2012; Krishna 2012; Bird 2013). Findings from this study shed light on other dimensions of transfers. In the context studied, much of the transfer process is influenced by the siblings themselves; for example, the decision about a girl's inheritance is greatly influenced by the presence of a male sibling or close male relatives. Even among male siblings only, inheritance is defined by how much one has over the others in terms of age, education, responsibility and obligations within the family. Thus, the process of the transfer of extreme poverty is not only inter-generational but also intra-generational. In addition, it is crucial to remember that these transfers not only influence a person's poverty level during childhood, adulthood or late adulthood but also shape how a person will live in old age. To cope with structural insecurities and risks, poor people accept relationships of dependency that reduce their agency and also their prospects of long-term improvement (Wood 2003; Mosse 2007, 2010). The findings suggest that this transfer of disadvantages from one generation to the next is highly gendered.
Even in male-headed households, the transfer of disadvantages across generations depends on the status and position of the mother. This helps explain why most of the respondents tracked in this study experienced living mostly in extreme poverty during their childhood. While the analysis contributes to the existing knowledge base on extreme poverty and its routes, these findings at the same time also have broad implications for development practice and programmes in the field of poverty reduction. Escape from extreme poverty ought to be conceived as a strongly political rather than technical process, in that it requires defying and challenging long-established power relations for households and individuals to accumulate 'gains' and 'protect' them. The data analysis shows how power is used to protect one's wellbeing on the one hand and to generate deprivation, discrimination and longer-term disadvantages on the other. One critical implication is that the multi-faceted experience of extreme poverty and its reproduction suggests that interventions must be designed in a way that considers how project 'beneficiaries' are relationally embedded and how this significantly determines their ability to respond to a project. The emphasis in the present analysis is on intra-household relationships, raising complex questions about how (if at all) this can be addressed through development programmes.
An Akka Mailbox Implementation Facing SDN

Akka is a highly extensible, lightweight software toolkit. It addresses the classic problem of sensing node state in distributed systems by abstracting entities into Actor models. Because a limited-capacity Mailbox is generally used, when no space is available some high-priority messages cannot enter the container, so important messages may go unprocessed. In addition, while a message waits to be processed, if the waiting time exceeds expectations the framework reports this to the sender, which may then repeat the operation; the receiver, unaware of this, still processes the message as normal and so handles it twice. Based on the red-black tree data structure, this paper improves the original Mailbox architecture so that messages can be enqueued or dequeued according to priority when the queue is full. It can also discard messages whose waiting time exceeds a set value when they are dequeued, so as to avoid wasting resources.

Introduction

With the continuing expansion of networks [1], traditional network equipment has a large number of complex protocols built in, which leads to a large increase in the cost of network operation and maintenance. SDN (Software-Defined Networking) [2] emerged to meet this need. SDN separates the data plane from the control plane to flatten management. The core of SDN is the control layer, whose controller needs rich northbound and southbound interfaces to control network resources and provide services to the application layer. The most widely used controller is OpenDaylight (ODL), which supports the OpenFlow and LISP protocols; multiple controllers can be applied in a cluster model. Because of the high-reliability and high-performance requirements of the system and the need for consistency of data and logic, the Infinispan data grid platform, the distributed [3] Akka [4] framework and ZooKeeper are usually used as the technical solutions to build the controller cluster. Akka is a highly extensible, lightweight software toolkit developed in the Scala language, providing API support for both Scala and Java. Its main goal is to improve system performance, reliability and scalability as much as possible in concurrent programming. The Actor model solves these problems in distributed systems by abstraction [5]. The Actor model highly abstracts the communication process between components, which greatly simplifies the development of distributed systems. Based on an analysis of the Mailbox, this paper describes the deficiencies of the Mailbox in the Akka Actor model, improves it according to the requirements and, finally, verifies the implementation through simulation.

Akka Mailbox

Actors using the default Dispatcher process only one message at a time. To ensure that other messages arriving during message processing are not discarded, Akka provides the Mailbox structure as the message queue of an Actor.

Built-in Mailbox Defects

Akka's built-in Mailbox has the following problems in extreme situations. Defect 1: when a priority queue is used, the enqueuing of high-priority messages is not safe and important messages can be lost. From the analysis above, we can see that Akka's built-in bounded Mailboxes inherit from the BoundedQueueBasedBlockingQueue class, regardless of whether the message queue is a priority queue.
Because the only criterion for deciding whether a message can be enqueued is whether the current queue length l(t) is less than the maximum queue length L, admission has nothing to do with the importance or priority of the message. Therefore, when an Actor is processing a message that takes a long time and its message queue overflows, it will be unable to receive more important control messages, such as heartbeats and other control signalling. In some scenarios this problem is fatal. Because computational tasks run for a long time, the number of control messages is much smaller than the number of computing messages. If the target Actor's message queue is then fully occupied for a long period, as shown in Figure 2, control messages cannot enter the target message queue for a long time, resulting in a loss of control under Akka's random competition mechanism. Defect 2: the reliability and timeliness of messages cannot be properly guaranteed, which results in wasted resources. In a real environment, the rate of consumption of messages r_c varies over time; that is, the processing time of a message is not fixed. This makes it difficult to guarantee the reliability and timeliness of messages. Specifically, the reply timeout T_o is usually chosen as a fixed value (as shown in Figure 3). In this case, although the message m_c to be processed has successfully entered the target Actor's queue, the processing time of the messages m_a and m_b located in front of m_c in the queue may exceed the expected reply time, so that the producer receives no reply within the stipulated time and considers the message to be discarded. Under Akka's current framework, the message on the consumer side is processed unconditionally once inside the Mailbox, even if it has already timed out on the originating side. That is, the message m_c is considered discarded, yet the consumer still executes the tasks contained in m_c. Obviously, this wastes resources. The reliability and timeliness of messages are widely relevant in real environments, and an Actor should usually distinguish messages of different degrees of urgency. Therefore, remedying the above two defects has clear research significance.

Functional Expansion

Given the defects of the built-in Akka Mailbox, this section proposes a highly reliable Mailbox implementation scheme based on the Akka framework for application scenarios requiring timeliness, reliability and differentiated service. In order to realise this type of Mailbox and meet the above objectives, this section introduces the specific improvements. The overall improvement is divided into the control of message enqueue and dequeue behaviour and the improvement of the message container. The enqueue/dequeue behaviour and the message container within a Mailbox are shown in Figure 4. The enqueue and dequeue behaviour refers to Akka's access to the Mailbox message queue: this behaviour provides blocking and length-limited queueing, which the container itself does not implement, but it does not operate directly on the container's data structure. Instead, after enforcing the length limit on enqueue and dequeue operations, it operates on an internal, actual message container. The improvement of the message container is the change made to this inner container.
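For concreteness, the built-in bounded, stable priority Mailbox whose limitations are discussed above is typically declared roughly as follows. This is a minimal sketch in Scala; the message types, priority values, capacity and push timeout are illustrative assumptions, not values used in this paper.

```scala
// Minimal sketch: declaring Akka's built-in bounded, stable priority mailbox.
// Message types and the priority/capacity/timeout values are illustrative only.
import akka.actor.ActorSystem
import akka.dispatch.{BoundedStablePriorityMailbox, PriorityGenerator}
import com.typesafe.config.Config
import scala.concurrent.duration._

// Illustrative message types: small control messages vs. long-running compute tasks.
case object Heartbeat
final case class Compute(payload: String)

class ControlAwareMailbox(settings: ActorSystem.Settings, config: Config)
  extends BoundedStablePriorityMailbox(
    // Lower value = higher priority; control messages go first.
    PriorityGenerator {
      case Heartbeat  => 0
      case _: Compute => 2
      case _          => 1
    },
    10,          // capacity: bounded queue length L
    100.millis)  // pushTimeOut: how long a sender waits when the queue is full
```

Such a mailbox class is then bound to an actor or dispatcher through configuration (a mailbox-type entry pointing at the class) in the usual Akka manner; when the queue is full, a send is rejected after the push timeout regardless of the message's priority, which is the behaviour behind Defect 1.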
Both the improvement of the behaviour and the improvement of the message container involve enqueue and dequeue operations. However, the responsibilities of the two parts with respect to these operations differ in the essential way described above.

Behavioural Control of Message Enqueuing

When the message queue is full, enqueue behaviour is controlled by the offer method of the capacity-constrained queue class. This method decides whether to add a new message to the container by comparing the number of messages in the message container with the capacity limit. Therefore, the implementation proposed in this paper needs to modify or extend this function so that high-priority messages can still be enqueued when the message container space is insufficient. To achieve this goal, the scheme adopted in this paper is as follows: when a high-priority message attempts to enter the queue, if the message queue is full and there is at least one message in the queue with a lower priority than the message to be enqueued, the lowest-priority, last-arrived message in the queue is discarded and the high-priority message is enqueued. The specific implementation of the enqueue behaviour control is as follows. This work derives from Akka's built-in BoundedBlockingQueue, which controls the enqueuing and dequeuing of a finite-length queue, and overrides its offer method. The offer method is implemented as shown in Algorithm 1. It first checks whether the backing container is full. If it is not full, the message is encapsulated and then enqueued. If the backing is full, an element e_t is removed from the tail of the backing and the two priorities are compared. If the message to be enqueued has the higher priority, it is encapsulated and enqueued and e_t is discarded; otherwise, e_t re-enters the queue and the message to be enqueued is discarded.

Message Dequeue Behaviour Control

Dequeue behaviour is controlled by the poll method of the BoundedBlockingQueue. This method hands elements from the queue to the Actor. In order to ensure that all messages the Actor receives from the Mailbox are still valid with respect to the timeliness requirement, the solution adopted in this paper is to add a timeliness check to the poll method, so that messages meeting the timeliness requirement are dequeued to the Actor while messages that no longer meet it are discarded upon dequeuing. The specific implementation of the dequeue behaviour control is as follows. This work derives from Akka's built-in BoundedBlockingQueue, overloads its poll and isEmpty methods and defines a new private method called prune. Prune is used to eliminate expired messages from the container. isEmpty reports whether the current container is empty, so that Akka can call the poll method correctly. Because all messages in the container may be out of date, the isEmpty method needs to be overridden so that the prune operation runs first. In order to make the message queue sensitive to timeliness, the message queue needs to be assigned a timeout period T_o. Users can specify this timeout at different granularities according to their actual needs: full-queue granularity, per-priority granularity or even per-message granularity.
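A self-contained sketch of these enqueue and dequeue rules is given below, covering Algorithm 1 above and the prune/poll/isEmpty behaviour detailed next as Algorithms 2-4. It is not the actual Akka BoundedBlockingQueue subclass: the Msg type, its priority field and the deque used as backing are simplified stand-ins, and the real container would keep its elements in priority order.

```scala
// Sketch of the modified full-queue offer (Algorithm 1) and the timed
// prune/poll/isEmpty behaviour (Algorithms 2-4). Simplified stand-in, not the
// actual Akka BoundedBlockingQueue subclass; the backing here is a plain deque.
import scala.collection.mutable

final case class Msg(priority: Int, payload: Any)        // higher value = higher priority
final case class Stamped(msg: Msg, enqueuedAt: Long)     // "encapsulated" element

final class BoundedTimedPriorityQueue(capacity: Int, timeoutMillis: Long) {
  // In the real scheme this is kept in priority order; a deque keeps the sketch short.
  private val backing = mutable.ArrayDeque.empty[Stamped]
  private def now(): Long = System.currentTimeMillis()

  // Algorithm 1: when full, evict the tail (lowest-priority, last-arrived end)
  // if and only if the incoming message has a strictly higher priority.
  def offer(m: Msg): Boolean =
    if (backing.size < capacity) { backing.append(Stamped(m, now())); true }
    else {
      val tail = backing.removeLast()
      if (m.priority > tail.msg.priority) { backing.append(Stamped(m, now())); true }
      else { backing.append(tail); false }
    }

  // Algorithm 2: drop expired head elements (full-queue timeout granularity T_o).
  private def prune(): Unit =
    while (backing.nonEmpty && now() - backing.head.enqueuedAt > timeoutMillis)
      backing.removeHead()

  // Algorithm 3: dequeue the next still-valid message, if any.
  def poll(): Option[Msg] = { prune(); backing.removeHeadOption().map(_.msg) }

  // Algorithm 4: emptiness check that first discards expired messages.
  def isEmpty: Boolean = { prune(); backing.isEmpty }
}
```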
This scheme uses full-queue granularity, that is, the entire queue uses the same T_o. The prune, poll and isEmpty operations are shown as Algorithm 2, Algorithm 3 and Algorithm 4, respectively. Algorithm 2 performs a peek operation on the backing to determine whether the head element has expired, that is, whether the time since it was enqueued exceeds T_o. If it does, the element is dequeued from the backing and the process repeats until an unexpired head element is found or the backing is empty. Algorithm 3 first performs the prune operation, then dequeues the backing and returns the decapsulated message. Algorithm 4 first performs a prune operation and then checks whether the backing is empty.

Message Container Improvements

The message container is the structure that actually holds the messages. The message container of Akka's built-in bounded priority Mailbox is the PriorityQueue structure. This structure is implemented as a binary heap, with (1) no limit on the number of priorities, (2) enqueue and forward dequeue time complexity of O(log N), (3) backward dequeue time complexity of O(N), (4) coexistence of different messages with the same priority, and (5) a dequeue order for messages of the same priority that is not stable with respect to their enqueue order. Because of the O(N) complexity of backward dequeuing from the binary heap, enqueue efficiency degrades when the full-queue path is triggered frequently. In addition, the binary heap cannot guarantee FIFO dequeuing of elements with the same priority, which does not meet the design requirements. Therefore, after investigation, this paper uses the red-black tree [6] structure as the message container. This container has the following properties: (1) no limit on the number of priorities, (2) enqueue and forward dequeue time complexity of O(log N), (3) backward dequeue time complexity of O(log N), and (4) only one element may exist per priority. Property (3) is better than that of the binary heap and does not create a performance bottleneck. However, property (4) needs to be improved to meet the design goals. In this work, the improved scheme attaches a unique timestamp identifier to the priority, at the expense of a small amount of space, so that the actual priority identifiers of different messages are unique and messages with the same user-defined priority are dequeued in arrival order. This section proposes a container based on red-black trees. The structure of the message container is shown in Figure 5. It consists of a head-element cache Cc of length 1 and a red-black tree structure C_RB. In this structure, the red-black tree serves as the main message storage container, which can store up to L-1 encapsulated message elements. The head-element cache stores one encapsulated message element; this element is the next element to be dequeued. Its function is to improve the efficiency of the peek operation, and thereby the efficiency of judging whether the queue is empty in the access control. It is worth mentioning that the peek time complexity of the binary heap structure is O(1), whereas the peek time complexity of the red-black tree structure is O(log N). In this scenario, this operation needs to be optimised because of the large number of peek operations that have to be performed.
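Before the concrete algorithms are described, a minimal sketch of such a container is given below: a one-element head cache Cc in front of a red-black-tree backing C_RB, here modelled with scala.collection.mutable.TreeMap, which is red-black-tree based. The element flow corresponds to Algorithms 5-8 described next; the names (Entry, PriorityDeque) and the use of a running sequence number as the unique timestamp are illustrative choices of ours, not Akka API.

```scala
// Sketch of the proposed container: head cache Cc plus red-black-tree backing C_RB
// (Algorithms 5-8). mutable.TreeMap is red-black-tree based; names are illustrative.
import scala.collection.mutable

final case class Entry(priority: Int, seq: Long, payload: Any) // seq = unique arrival stamp

final class PriorityDeque {
  // Order keys so the tree's first entry is the highest-priority, earliest-arrived element.
  private val keyOrdering: Ordering[(Int, Long)] =
    Ordering.Tuple2(Ordering.Int.reverse, Ordering.Long)
  private val cRB = mutable.TreeMap.empty[(Int, Long), Entry](keyOrdering) // main container
  private var cc: Option[Entry] = None                                     // head cache Cc
  private var nextSeq = 0L

  private def key(e: Entry): (Int, Long) = (e.priority, e.seq)
  private def beats(a: Entry, b: Entry): Boolean = keyOrdering.lt(key(a), key(b))

  // Algorithm 5: enqueue, keeping the best element cached in Cc.
  def offer(priority: Int, payload: Any): Unit = {
    nextSeq += 1
    val e = Entry(priority, nextSeq, payload)
    cc match {
      case Some(head) if beats(e, head) =>          // new element outranks the cached head:
        cRB.update(key(head), head); cc = Some(e)   // demote the head into C_RB, cache e
      case Some(_) => cRB.update(key(e), e)         // otherwise store e in C_RB
      case None    => cc = Some(e)                  // empty container: cache e directly
    }
  }

  // Algorithm 6: forward dequeue; return Cc and refill it from C_RB.
  def poll(): Option[Entry] = {
    val out = cc
    cc = cRB.headOption.map { case (k, e) => cRB.remove(k); e }
    out
  }

  // Algorithm 7: backward dequeue; drop the worst element (used on the full-queue path).
  def pollLast(): Option[Entry] =
    if (cRB.isEmpty) { val out = cc; cc = None; out }
    else cRB.lastOption.map { case (k, e) => cRB.remove(k); e }

  // Algorithm 8: O(1) peek via the head cache.
  def peek(): Option[Entry] = cc

  def isEmpty: Boolean = cc.isEmpty && cRB.isEmpty
}
```

Combined with the bounded offer sketched earlier, pollLast supplies the backward dequeue used to evict the lowest-priority, last-arrived element when the queue is full, while peek stays O(1) as required.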
By setting up a head cache Cc and guaranteeing, through the algorithms proposed below, that the highest-priority, first-arrived element in the entire container is always stored in the cache, the peek method can be simplified to directly returning the element in Cc. Therefore, the peek time complexity of the PriorityDeque is reduced to O(1). The flow of elements into and out of the container is given in Algorithms 5-8: Algorithm 5 is the enqueue operation, Algorithm 6 the forward dequeue (poll) operation, Algorithm 7 the backward dequeue, and Algorithm 8 the peek operation. Algorithm 5 compares the element to be enqueued with the element in Cc. If the Cc element has a lower priority than the element to be enqueued, the cached element is placed into C_RB and the element in Cc is replaced by the element to be enqueued; if the Cc element has a higher or equal priority, the element to be enqueued is placed in C_RB. Algorithm 6 returns the current element in Cc and moves the forward-dequeued element of C_RB into Cc. Algorithm 7 determines whether C_RB is empty: if it is, the element in Cc is dequeued; otherwise the backward-most element of C_RB is dequeued and returned. Algorithm 8 directly returns the Cc element and does not modify Cc.

Simulation Introduction

In this section, we compare the proposed scheme with the BoundedStablePriorityMailbox built into Akka on several performance indicators and present the test results and conclusions. We use PSQ to denote Akka's built-in BoundedStablePriorityMailbox and PDQ to denote the solution described in this article.

Simulation Environment. We built two test environments: the first has a single Akka node, one producer and one consumer; the second has three Akka nodes, three producers and one consumer. 1) Guarantee-capability simulation scenario. In this scenario, we construct two types of messages that simulate high-priority control messages and low-priority computing messages, respectively. After the consumer Actor receives a control message, it immediately replies to the producer, which indicates successful receipt. After receiving a computing message, it blocks for a random period of time Tb and then returns a calculation-completion message. The producer sets a waiting time for the reply and records the sending interval ΔTsr, the total number No of replied messages and the number of control messages sent, and summarises these data after each test. We allocate a large amount of queue space for the producer and assume that its queue will not overflow. Finally, we calculate the evaluation indicators Ap and Aa from the summarised data. The experimental environment, as described above, includes the single-node and three-node environments. The relevant test parameters are shown in Table 1. We use these parameters to simulate a scenario in which computation and control coexist. Here, the Actor spends 0.1-0.5 seconds of CPU time on each computing message and consumes almost no CPU time on control messages. In addition, we assume that control messages are relatively rare, about 1% of the total. In this scenario, since the timeout period is set to 1 second, 2-10 computing messages are expected to cause the remaining messages in the Mailbox to time out. Therefore, we set the queue length to 10.
However, to verify that the scheme remains usable when the user sets the queue length incorrectly, we also added queue lengths of 100 and 1000 as references. 2) Enqueuing and dequeuing performance simulation scenario. In this scenario we construct n_p types of messages and assign them different priorities. To eliminate interfering factors, the enqueue/dequeue performance test was run in the single-node environment, with the consumer-side Actor processing every message other than the first immediately upon receipt (operation and blocking time are 0). The producer generates N_i messages as fast as possible, performs the enqueue operations, and measures the time T_e from the start to the end of enqueuing. To fill the message queue as much as possible, and thus measure dequeue efficiency more accurately, the producer first sends a message with the highest priority; when the consumer receives this first message, it blocks for a certain time so that the producer can fill the message queue. After the blocking ends, the consumer starts timing and stops when it has received the N_i-th message, yielding the time interval T_d. Finally, the evaluation indicators R_i and R_o are computed from the summarized data. The test parameters for this experiment are shown in Table 2. This experiment measures the maximum throughput of non-blocking messages in order to obtain the best-case enqueue performance. To avoid additional effects from operating-system thread scheduling, we set the queue length to a large value, and we ensure that the results cover the queue-full case by enqueuing all messages first and dequeuing them afterwards. Figure 6. Supportability performance test results. Figure 6 shows the results of the guarantee-capability test; the higher A_a is, the stronger the timeliness guarantee. The results show that the scheme proposed in this paper achieves 100% on the important-message guarantee indicator, far above the roughly 40% guarantee capability of the existing scheme. On the timeliness guarantee indicator the proposed scheme exceeds 75%, again far above the existing scheme's capacity of about 10%. The three-node results are similar: there, the important-message guarantee capability of the existing solution drops further, while that of the proposed solution remains unchanged. In addition, the timeliness guarantee of the proposed scheme improves when the queue length is small. These results show that the guarantee-capability indicators of the proposed scheme far surpass those of the existing one, and that reliability is greatly improved. The higher R_i and R_o are, the higher the enqueue and dequeue performance, respectively. In formulas (3) and (4), N_e is the total number of attempted enqueue operations, T_e the time spent enqueuing, N_d the total number of dequeued messages, and T_d the time spent dequeuing. The results show that enqueue performance decreases somewhat, owing to the added prune operation, priority lookup, and so on, but remains within the same order of magnitude as the original scheme, while dequeue performance is about 0.8 orders of magnitude better than the existing solution.
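Formulas (3) and (4) are not reproduced in this excerpt; from the definitions just given, the natural reading is that R_i and R_o are simply the enqueue and dequeue rates, as in the following sketch (our own illustration, names assumed).

```python
def throughput_indicators(n_e, t_e, n_d, t_d):
    """Assumed form of the evaluation indicators: R_i = N_e / T_e (enqueue rate)
    and R_o = N_d / T_d (dequeue rate), i.e. messages handled per unit time."""
    r_i = n_e / t_e
    r_o = n_d / t_d
    return r_i, r_o
```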
The above results show that, in terms of enqueue and dequeue performance, the proposed scheme is essentially on a par with the original one and is therefore usable in practice. Concluding Remarks. This article addresses two problems with the original solution: high-priority messages are discarded when the message queue is full, causing the loss of important messages; and the reliability and timeliness of messages cannot be properly guaranteed, leading to wasted resources. An improved Akka Mailbox scheme was proposed. The scheme avoids dropping important messages when the message queue space is insufficient and provides usable reliability and timeliness guarantees. The proposed Mailbox has several advantages: an unlimited number of priorities; messages with the same priority dequeued in chronological order; high-priority messages able to replace low-priority messages when the queue is full; and dequeuing with logarithmic time complexity in both the forward and backward directions. This article further provides a performance test of the Mailbox; the test results show that the Mailbox described here meets the expected performance. Acknowledgement. This research was financially supported by a subtopic of the National 973 Basic Science Foundation: Basic Research on the Theory of Smart and Cooperative Networks (2013CB329105).
2019-02-17T14:20:39.170Z
2018-06-27T00:00:00.000
{ "year": 2018, "sha1": "763f7335fdc51bd0de0345f2a9f76a3e83632fbf", "oa_license": null, "oa_url": "http://dpi-proceedings.com/index.php/dtcse/article/download/23651/23286", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "fecd30b8909dab8da7e7c0ea0019ecbbc3667874", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
10138861
pes2o/s2orc
v3-fos-license
Carcinogenicity of a single administration of N-nitrosomethylurea: a comparison between newborn and 5-week-old mice and rats. N-Nitrosomethylurea (NMUrea) was given as a single intraperitoneal injection either to newborn or to 5-week-old (C57BL x C3Hf)F1 mice and Wistar rats. Newborn mice were more susceptible than 5-week-old mice to the development of lymphosarcomas, lung adenomas and hepatomas, whereas newborn rats were more susceptible than their weaned counterparts to the development of renal anaplastic tumours. Other tumours occurred with the same frequency in newborn and mature animals. Tumours of the forestomach in mice were more frequently found in animals treated at 5 weeks than in those treated at birth. Since NMUrea persists for only a very short time and breaks down spontaneously, it seems that the paucity of enzymes related to immaturity in newborns is not a major factor in determining the different susceptibility of newborn animals to NMUrea carcinogenicity. In several experimental systems, newborn or suckling animals were found to be more susceptible than mature animals to the effects of carcinogens. This difference is not fully explained; among other hypotheses, it has been suggested that the breakdown of chemical carcinogens is slower because of the enzymic deficiency of young animals. Powerful carcinogens such as 7,12-dimethylbenz(a)anthracene, urethane and possibly N-nitrosodimethylamine persist unchanged for longer periods in newborn than in adult laboratory animals (Domsky et al., 1963; Mirvish et al., 1964; Terracini and Magee, 1964). N-nitrosomethylurea (NMUrea) was felt to be a suitable compound for testing this hypothesis, since in solution at pH around 7 it is unstable and is not likely to persist for more than a few hours (Druckrey et al., 1967). NMUrea can produce tumours after a single administration (Druckrey et al., 1964; Janisch et al., 1967; Kelly et al., 1968; Leaver et al., 1969); its carcinogenicity in newborn mice and rats has been previously investigated (Kelly et al., 1968) but the available information does not permit comparison of newborn and more mature animals as regards susceptibility to tumour induction by NMUrea. Experiments on rats and different strains of mice have been undertaken in this laboratory to elucidate this point. The present work describes the effect of a single administration of NMUrea to newborn and 5-week-old (C57BL x C3Hf)F1 mice and Wistar rats. Whereas as a whole newborns were more susceptible to tumour development than young adults, inconsistencies between different target organs have been found. In addition, several types of non-neoplastic lesions produced by NMUrea are described. MATERIAL AND METHODS Hybrid (C57BL x C3Hf)F1 (BC3fF1) mice and Wistar rats from the colonies of this laboratory were used. Animals of both species were fed a commercial diet in pellets (Mangime Valleolona, Castellanza, Varese) and tap water ad libitum. Animals were treated within 24 hours after birth or at 35 days. Recrystallized NMUrea was obtained through the generosity of Mr. P. F. Swann, Courtauld Institute of Biochemistry, Middlesex Hospital Medical School, London. It was dissolved in saline immediately before use at a concentration of 0.1%. With the exception of one experiment, injections were given intraperitoneally at a standard dose of 50 μg./g., because preliminary experiments in newborn mice had demonstrated that 100 μg./g. produced a high early mortality.
One additional experiment in rats was carried out with the purpose of directly exposing the thymus to the carcinogen: newborns from 4 litters received an intrathoracic injection of 50 or 100 μg. NMUrea (i.e. in the order of 8-16 μg./g.) as a 0.5% solution in saline. This was done under ether anaesthesia and through an opening in the chest in the animals of one litter; rats of the other three litters received the solution of NMUrea as an injection through the thoracic wall. Rats and mice were weaned at 3-4 weeks of age and separated according to sex. They were subsequently checked daily and weighed at weekly or fortnightly intervals. The animals were allowed to die naturally or were killed with ether if obviously sick: all survivors were killed at 60-64 weeks of age. A control group of BC3fF1 mice was given only saline at birth. The pathology of a control group of 95 Wistar rats has already been reported (Della Porta et al., 1968). During the performance of the present study the weight and the survival rate of the experimental rats were compared with those of an untreated group assembled at about the same time, which was over 2 years of age at the end of 1969. Complete autopsies were performed on all animals, including opening of the spinal cord in rats. Histological sections were routinely prepared from the thymus, liver, kidneys and spleen, with the exception of a few highly decomposed animals. Endocrine organs were also examined in most rats. In addition, all grossly damaged organs were examined histologically. Specimens were fixed in Bouin and stained with haematoxylin-eosin: at least one coronal section from each kidney was routinely examined. RESULTS Survival rates are presented in Table I for both mice and rats receiving NMUrea i.p. and for control animals. Eleven of 95 mice treated at birth died before weaning compared with 3 of 34 controls. Early deaths did not occur in mice and rats treated at 5 weeks of age. A consistent observation among animals treated at birth was an irreversible impairment of body growth. Fig. 1 and Table II show the pattern of body growth in mice and rats respectively. Growth depression was also observed, although to a much lesser extent, among mice treated at 5 weeks of age. All mice and rats treated at birth were smaller than the controls, regardless of whether they developed tumours or not. Tables III and IV show the incidences of tumours observed in mice and rats respectively. Both tables indicate a high carcinogenic activity of NMUrea with the production of a broad spectrum of tumours. Only 7% of mice treated at birth and 21% of those treated at 5 weeks were tumour-free, compared to 97% among the controls. In rats treated either at 1 day or at 5 weeks, only 3 of 52 had no tumours at death. Multiple tumours in the same animal were commonly found. With the exception of mice with lymphosarcomas and rats with large anaplastic tumours of the kidneys, it was difficult to establish which tumour was causally related to death or sickness. Lymphosarcomas.-Thymic lymphosarcomas and generalized lymphosarcomas with prominent involvement of the thymus were the tumours most commonly found in mice. There were great variations in the degree of involvement of organs other than the thymus. Among mice treated at birth, the incidence of lymphosarcomas in both sexes ranged between 50 and 60% with an average age at death of 17-18 weeks.
The incidence of lymphosarcomas in mice treated at 5 weeks of age was 46% in females and 31% in males, with an average age at death of 29 weeks in both sexes. Lymphosarcomas were found in one rat treated at birth and killed at 38 weeks of age, and in 3 rats observed at 16, 21 and 28 weeks of age among those treated at 5 weeks. With one exception in which the thymus was spared, they were generalized lymphosarcomas of probable thymic origin. No tumours of the lymphatic organs were seen among rats injected with NMUrea intrathoracically. In 95 untreated rats of this strain, one generalized lymphosarcoma occurred in a rat dying at 17 weeks of age; another rat aged 113 weeks had hepatic and splenic lymphoma. Tumours and non-neoplastic lesions of the kidneys.-Three mice of each sex among those treated at birth and 2 males treated at 5 weeks had cystic-papillary or trabecular, non-invasive renal adenomas. Only one of the tumours was greater than a few mm. in diameter and showed atypicalities, with no obvious invasion. No renal tumours were found in the control group. In addition, 27 mice of either sex treated at birth (including 4 with renal adenomas), 8 treated at 5 weeks of age and 2 control males showed single or multiple "hyperplastic tubules" in the renal cortex (Terracini et al., 1966) (Fig. 2). Finally, among mice treated at birth, but not among those treated at 5 weeks or in the controls, glomeruli with cell loss and fibrosis, occasionally with involvement of the Bowman's capsule, were seen (Fig. 3). Renal tumours were the most commonly observed neoplasms among rats treated either at birth or at 5 weeks of age. They were bilateral in 11 rats treated at birth and in 1 treated at 5 weeks. Two different types of tumours were found, anaplastic or interstitial and tubular (Magee and Barnes, 1962; Riopelle and Jasmin, 1969). The incidence of renal tumours among survivors at 20 weeks of age was 74% in rats treated at birth and 37% in those treated at 5 weeks. Age at death of rats with renal anaplastic tumours was roughly similar in both groups. Only 1 anaplastic tumour was seen among 95 untreated rats from the same colony. Six rats with renal tumours of tubular origin were observed throughout the present series. They were all less than 0.3 cm. in diameter and histologically appeared as solid or papillary adenomas; some contained areas of necrosis but no invasion or other signs of malignancy. In addition, 4 rats treated at birth had at least 1 hyperplastic tubule. Among 95 untreated animals, one renal adenoma and 2 adenocarcinomas were observed at an average age of more than 100 weeks. Of the 32 animals of both sexes given NMUrea intrathoracically, 13 developed a total of 17 anaplastic and 2 tubular tumours. Lung adenomas.-They occurred only in experimental mice surviving the period of high mortality due to lymphosarcoma. Animals treated at birth were more susceptible than those treated at 5 weeks. With the exception of 4 animals of either sex in the former group and 3 in the latter one, lung adenomas were found in mice killed at the end of the experiment. Considering survivors at 40 weeks, incidences were 81.0% and 38% among those treated at 1 and 35 days respectively. Lung adenomas were usually multiple but only rarely did they largely replace the lung parenchyma. Hepatomas.-This type of tumour was also seen only in mice. Data contained in Table III indicate a much higher susceptibility of males than females and a sharp loss of susceptibility at 5 weeks of age. Hepatomas were more than 0.8 cm.
in diameter; on section they showed a trabecular pattern and compressed the surrounding parenchyma without invasion. No hyperplastic nodules or other lesions often associated with hepatocarcinogenesis were observed. Lung metastases were not seen. Tumours of the forestomach.-The relation between age at treatment and subsequent tumour development appeared to be different in mice and rats. Tumour incidence was higher in mice treated at 5 weeks than in those treated at birth, whereas only rats treated when newborn developed this type of tumour. In mice, with the exception of a male treated at 5 weeks and dying with a squamous cell carcinoma at 54 weeks, all stomach tumours were papillomas found in animals killed at the end of the experiment. The 7 tumours of the forestomach in rats were papillomas observed in animals dying from other causes between the 33rd and 50th week of life. In addition, 1 papilloma of the forestomach was found among the rats given NMUrea intrathoracically. Four rats in the control group, aged more than 100 weeks, each had a papilloma of the forestomach. Tumours of the intestine.-The only intestinal tumour in a mouse was a sarcoma in a male treated at birth. A total of 7 rats with intestinal adenocarcinomas were found: 1 tumour was located in the duodenum, 5 in the small intestine and 1 in the colon. An animal with intestinal adenocarcinoma also had a carcinoid of the caecum. Two borderline lesions, possibly non-invasive adenocarcinomas, were also seen, 1 of which was in a rat with an adenocarcinoma. Another adenocarcinoma of the intestine was found in 1 of the rats given NMUrea in the thorax. Two adenomatous polyps were found in the control group. Mammary tumours.-The only mammary tumour in a mouse was found in a female treated at 5 weeks and dying at 42 weeks of age. In rats, 4 mammary tumours were found among females treated at birth (all were palpable before the 30th week of age) and 3 among those treated at 5 weeks of age (all of which were palpable after the 50th week of age). Two of the 17 females treated intrathoracically developed mammary tumours. Among the controls, 15 of 48 females developed mammary tumours, the earliest being palpable at the 86th week of age. Non-neoplastic lesions in mice.-The occurrence of hyperplastic tubules and hyaline glomeruli in the kidney has already been mentioned. In addition, at autopsy, in 29 mice treated at birth, but in none of those treated at 5 weeks or in the control group, there were multiple dark spots on the inner side of the skin; they were up to 0.2 cm. in diameter and histologically appeared as cysts containing keratin, lined by flat cells, associated with groups of melanocytes and occasionally with small foreign body lesions. The earliest change of this type was seen in a mouse dying at 15 weeks of age (Fig. 4 and 5). A common finding in female mice given NMUrea either at birth or at 5 weeks of age and killed at the end of the experiment was a dilation of the uterine horns associated with cystic hyperplasia of the endometrium. Non-neoplastic lesions in rats.-Among rats treated at birth, the incisor teeth of several animals were irregular and very long and had to be cut several times. This was unlikely to be a major cause of stunted growth since there were no differences in body growth among animals with normal and abnormal teeth. Another common finding in male rats treated with NMUrea at birth was small testes. On section, the germinal epithelium appeared atrophic
(Fig. 6) and very few or no spermatozoa were present in the epididymis. A common finding in these testes was hyperplasia of the interstitial cells, which in 2 rats appeared as large areas classifiable as interstitial cell tumours (Table IV). Endometrial changes were seen only in a female treated at birth. Ovarian cysts up to 1 cm. in diameter were observed in 5 instances. No consistent changes were found in the pituitary, adrenals and thyroid. DISCUSSION Although a single administration of 50 μg./g. NMUrea proved to be highly carcinogenic to both species investigated, the type and location of tumours was different in mice and rats. This confirms a previous observation on the effects of NMUrea in newborn animals (Kelly et al., 1968) and at present can be explained only on a speculative basis. Intrathoracic administration of carcinogens is known to enhance the occurrence of thymic lymphosarcomas in mice (Chieco-Bianchi et al., 1965; Doell et al., 1967) and this could be related to a higher amount of the carcinogen reaching the target organ; in the present study, however, rats given NMUrea intrathoracically failed to develop thymic lymphosarcomas. In both mice and rats there were differences in tumour incidence between animals treated at birth and later in life. The present study confirms that lymphosarcomas, hepatomas and lung adenomas in mice as well as renal anaplastic tumours in rats are more easily induced in infant than in mature animals (Toth, 1968; Della Porta and Terracini, 1969). The earlier occurrence of mammary tumours in rats given NMUrea indicates a similar trend. However, in mice, stomach tumours occurred more frequently in animals treated at 5 weeks of age. The tumours at different sites indicated in the footnotes of Tables III and IV were probably related to the treatment and their incidence was not significantly different in animals treated at birth or later in life. Since NMUrea breakdown is rapid and may not require an enzyme (Leaver et al., 1969), it seems that factors other than the degree of maturation of enzyme production are related to the difference in susceptibility between newborns and young adults. Thus, the "organotropism" (Druckrey et al., 1967) of NMUrea is different in mice and rats and is influenced by the age at treatment. Present knowledge is insufficient to establish whether species- and age-related differences are the consequence of a different rate of absorption, a different distribution of the carcinogen or the different functional state of some organs in newborn animals. NMUrea ranks among the most potent leukaemogenic chemicals in mice, as single doses of 30 μg./g. or higher have produced incidences of lymphosarcomas of 40% or more in all strains so far investigated, i.e. XVII (Graffi and Hoffmann, 1966), outbred Swiss (Terracini and Stramignoni, 1967), inbred Swiss (Frei, 1969), NIH general purpose (Kelly et al., 1968) and BC3fF1 (in the present study). The order of magnitude of the effective doses and the percentage of mice developing lymphosarcomas are comparable to those observed following administration of 7,12-dimethylbenz(a)anthracene (Toth et al., 1963); a single administration of urethane to newborn mice was equally effective only when given at a dose of 1 mg./g. and in Swiss mice (De Benedictis et al., 1964), whereas in C3Hf, C3H, BC3F1 and C57BL mice a longer exposure to urethane was required to produce lymphosarcomas (Della Porta et al., 1967).
In the present study, when 35-day-old mice were used, lymphosarcomas were induced, but their incidence was somewhat lower and the latent period (measured as age at death) was longer than in mice treated at birth. In the case of urethane in Swiss mice, susceptibility to the leukaemogenic effect of 1 mg./g. was found to decrease significantly between 1 and 40 days of age (De Benedictis et al., 1964). A different situation is created by lung tumours: the decreased ability of NMUrea to induce lung adenomas in mice aged 35 days contrasts with the observation that in experiments lasting at least 30 weeks, following single doses of DMBA, nitroquinoline oxide and urethane, the incidence of lung adenomas approached 100% in animals treated both at birth and later in life (Walters, 1966; Nishizuka et al., 1964; De Benedictis et al., 1962; Klein, 1966). On the contrary, the finding of a high incidence of hepatomas only in males given NMUrea at birth in an experiment lasting 60 weeks is similar to the observations following 20-methylcholanthrene or urethane (Klein, 1959; Chieco-Bianchi et al., 1965; Klein, 1966). Tumours of the forestomach were more numerous among mice given NMUrea when mature. This result contrasts with the finding that carcinogenesis in the forestomach by intragastric administration of 20-methylcholanthrene or urethane was similar in mice treated as infants or later in life (Klein, 1959, 1966). In rats, the higher incidence of anaplastic renal tumours among animals treated at birth probably reflects a different susceptibility related to age. The observation of stomach tumours only in rats given NMUrea at birth might indicate a difference related to age at treatment, but the total number of tumours was small; in any case, the present results confirm that NMUrea can induce stomach tumours through a single parenteral administration (Druckrey et al., 1964). The occurrence of some mammary tumours in female rats treated at birth or at 5 weeks of age confirms a single previous observation (Kelly et al., 1968) and indicts the mammary tissue of the female rat as another target organ for the carcinogenic effect of NMUrea. Among the tumours appearing at other sites in rats, and probably related to the treatment, only one was neurogenic and was an intracranial neurinoma (Table IV, Fig. 7). No tumours of this type were seen in the control rats. The sporadicity of tumours of nervous tissue in rats and mice following a single i.p. injection of NMUrea confirms a negative finding following a single intracranial administration (Kelly et al., 1968) and contrasts with previous findings indicating the nervous system as a major target for NMUrea. The latter results, however, were obtained in experiments in which the carcinogen was given either intravenously (Druckrey et al., 1965; Fried and Fried, 1966; Janisch et al., 1967) or orally with a long exposure (Stroobandt and Brucher, 1968; Thomas and Bollmann, 1969). A common finding in the present series of experiments was a marked impairment of body growth in mice and rats given NMUrea at birth. This effect was unrelated to tumour development, since it was found also in tumour-free animals; in addition, stunted growth was already obvious before weaning. Mice also showed some hyaline changes in renal glomeruli, as previously described (Terracini and Stramignoni, 1967). Other symptoms of homologous disease (Keast, 1968) such as diarrhoea and hair loss were absent.
The effect upon body growth in animals treated at 5 weeks of age was much less marked or debatable. The carcinogenicity of a single administration of NMUrea appears to be a valuable tool for the study of dose-response relationships in carcinogenesis in view of the effectiveness of the treatment and the rapid breakdown of the carcinogen. Studies along this line are in progress in this laboratory.
2014-10-01T00:00:00.000Z
1970-09-01T00:00:00.000
{ "year": 1970, "sha1": "f4dd2c8c720d150b7f76bfc9d7fe5e80ad44b6e6", "oa_license": "implied-oa", "oa_url": "https://europepmc.org/articles/pmc2008619?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "f4dd2c8c720d150b7f76bfc9d7fe5e80ad44b6e6", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
16676298
pes2o/s2orc
v3-fos-license
Efficient Learning for Crowdsourced Regression Crowdsourcing platforms emerged as popular venues for purchasing human intelligence at low cost for large volume of tasks. As many low-paid workers are prone to give noisy answers, one of the fundamental questions is how to identify more reliable workers and exploit this heterogeneity to infer the true answers. Despite significant research efforts for classification tasks with discrete answers, little attention has been paid to regression tasks where the answers take continuous values. We consider the task of recovering the position of target objects, and introduce a new probabilistic model capturing the heterogeneity of the workers. We propose the belief propagation (BP) algorithm for inferring the positions and prove that it achieves optimal mean squared error by comparing its performance to that of an oracle estimator. Our experimental results on synthetic datasets confirm our theoretical predictions. We further emulate a crowdsourcing system using PASCAL visual object classes datasets and show that de-noising the crowdsourced data using BP can significantly improve the performance for the downstream vision task. Introduction Crowdsourcing systems provide a labor market where numerous pieces of classification and regression tasks are electronically distributed to a crowd of workers, who are willing to solve such human intelligence tasks at low cost. To a data analyst, such systems provide unprecedented accesses to get training dataset at a scale and budget that was not previously feasible. Thus obtained training dataset can then be seamlessly integrated into downstream machine learning tasks together with the state-of-the-art classification and regression methods. However, because the pay is low and the tasks are tedious, error is common even among those who are willing. This is further complicated by abundant spammers trying to make easy money with little effort. To cope with such noisy data, we add redundancy which is a common and powerful strategy widely used in real-world crowdsourcing. We assign each task to multiple workers and aggregate these responses by some inference algorithm. For classification tasks, where each task asks a worker to find the best label from a finite set, the fundamental question of how to model the worker behavior Dawid & Skene (1979); Zhou et al. (2015); Shah et al. (2016), how to assign tasks Karger et al. (2011), and how to aggregate the responses Smyth et al. (1995) to efficiently use the given budget and achieve the best accuracy, has been extensively studied. The key insight to achieving budget-optimal performance is to identify the good workers by comparing a worker's responses with those of others on the same task and appropriately weighting the worker's responses according to estimated reliability. Although the optimal inference algorithm is computationally intractable, various efficient approaches have been proposed with provable guarantees Karger et al. (2011); Zhang et al. (2014). On the other hand, there are little principled approaches for crowdsourced regression tasks, where each task asks a worker to provide the best answer in a form of a real valued vector. While numerous machine learning tasks routinely done on crowdsourcing platforms require continuous valued evaluation of a training dataset, e.g., the location of an object Everingham et al. (2015); Su et al. 
(2012), center of a galaxy, or the center of a marker in a Cryo-EM image, we lack systematic study of how to tackle the human noise in thus collected data. For this crowdsourced regression problem, we address the fundamental question of how to achieve the best accuracy given a budget constraint, or equivalently how to achieve a target accuracy with minimum budget. As in typical crowdsourcing systems, we assume 2 Crowdsourcing for Regression In this section, we present our probabilistic model capturing heterogeneity of the workers and the corresponding optimal estimator minimizing the mean squared error. Since this is computationally intractable, we introduce a tractable estimator using belief propagation. Crowdsourcing Model The task requester has a set of n regression tasks, denoted by V = {1, . . . , n}. As a running example, consider object detection where a worker is asked to locate the position of an object of interest, e.g., the center of a galaxy or the center of a marker in a Cryo-EM image. In the i-th task, we denote the true position by µ i ∈ R d . To estimate these unknown true positions, we assign the tasks to a set of m workers, denoted by W = {1, . . . , m} according to a bipartite graph G = (V, W, E), where edge (i, u) ∈ E indicates that task i is assigned to worker u. We also let N u := {i ∈ V : (i, u) ∈ E} and M i := {u ∈ W : (i, u) ∈ E} denote the set of tasks assigned to worker u and the set of workers to whom task i is assigned, respectively. When task i is assigned to worker u, worker u provides his/her estimation/guess A iu ∈ R d for the true location µ i . Each worker u is parameterized by his noise level σ 2 u , such that the response A iu suffers from an additive spherical Gaussian noise with variance σ 2 u . Precisely, conditioned on µ i and σ 2 u , A iu is independently distributed with Gaussian We assume each worker u's variance σ 2 u is independently drawn from from a finite set S = {σ 2 1 , ..., σ 2 S } uniformly at random. We further assume that the true position µ i is independently drawn from a Gaussian prior distribution φ(x | ν i , τ 2 ) for given mean ν i ∈ R d and variance τ 2 ∈ (0, ∞), i.e., we have a prior knowledge that the true position µ i is near by ν i . We are interested in sufficiently large τ 2 only assuming marginal knowledge on µ i . 1 Crowdsourced Regression Under this crowdsourcing model, our goal is to design an efficient estimatorμ(A) ∈ R d×V of the unobserved true position µ from the noisy answers A := {A iu : (i, u) ∈ E} reported by workers. In particular, we are interested in minimizing the average of mean squared error (MSE), i.e., where we let A i := {A iu : u ∈ M i } and for the last equality, we use the conditional independence between A i and Computing the marginal posterior P[σ 2 u | A] in general requires summing over the rest of (exponentially many) σ v 's, making this optimal estimator intractable. We make this intractable estimator explicit in the following, which leads to a tractable estimator based on belief propagation in Section 2.3. 
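The generative model just described is straightforward to simulate, which is useful for sanity-checking any estimator against the ground truth. The sketch below (Python/NumPy; the function name and the random task-to-worker assignment are our own simplifications and do not enforce the exact regular assignment used later in the paper) draws worker variances from the finite support S, true positions from the Gaussian prior, and the noisy answers A_iu.

```python
import numpy as np

def simulate_crowd_regression(n_tasks, n_workers, ell, support, tau2, d=2, seed=0):
    """Draw one instance of the crowdsourced-regression model.

    Each task i has a true position mu_i ~ N(0, tau2 * I_d) (prior mean nu_i = 0);
    each worker u has a variance sigma2_u drawn uniformly from `support`; task i is
    assigned to `ell` workers chosen at random, and each assigned worker reports
    A_iu = mu_i + spherical Gaussian noise with variance sigma2_u.
    """
    rng = np.random.default_rng(seed)
    mu = rng.normal(0.0, np.sqrt(tau2), size=(n_tasks, d))           # true positions
    sigma2 = rng.choice(support, size=n_workers)                     # hidden worker variances
    assignment = {i: rng.choice(n_workers, size=ell, replace=False)  # M_i: workers for task i
                  for i in range(n_tasks)}
    answers = {(i, u): mu[i] + rng.normal(0.0, np.sqrt(sigma2[u]), size=d)
               for i, workers in assignment.items() for u in workers}
    return mu, sigma2, assignment, answers
```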
We first show that the posterior density of µ i given A i = y i := {y iu ∈ R d : u ∈ M i } and σ 2 Mi is a Gaussian density and compute its mean and variance: where we defineσ 2 i : S Mi → R andμ i : R d×Mi × S Mi → R d as follows The Gaussian posterior density (4) follows from: Equation (4) leads to the posterior mean, which is weighted average of the prior mean and the worker responses, each weighted by the inverse of its variance: Thus, the optimal estimatorμ * i (A) in (2) is given bŷ The marginal probability of σ 2 u given A in (7) can be calculated by marginalizing out σ 2 −u := {σ 2 v : v ∈ W \ {u}} from the joint probability of σ 2 , i.e., where for given The summation in (8) is taken over exponentially many σ 2 −u ∈ S m−1 with respect to m. Thus, in general, the optimal estimatorμ * (A) in (7), requiring the marginal probability of σ 2 u given A in (8), is computationally intractable. Belief Propagation We note that the joint probability of σ 2 given A in the product form of (9) forms a factor graph Jordan (1998) where each worker u's variance σ 2 u and each task i correspond to a variable and a local factor C i (A i , σ 2 Mi ) on the set of workers, M i , to whom task i is assigned, respectively. This probabilistic graphical model motivates to use the popular (sum-product) belief propagation (BP) algorithm Pearl (1982) on the factor graph of P[σ 2 |A] for approximating the marginalization in (8), which is intractable. BP typically is an efficient heuristic with little known provable guarantees. First, we give explicit iterative BP update rules on the messages m u→i and m i→u between task i and worker u and belief b iu on each worker u: where the belief b u (σ 2 u ) denotes the estimated marginal probability of σ 2 u given A. We initialize messages with a constant 1 |S| and normalize messages and beliefs, i.e., At the end of k iterations, one can estimateμ BP (A) from (7) We note that if the factor graph is a tree, i.e., having no loop, then it is well known that BP can calculate the exact marginal probability Pearl (1982) However, for general graphs having loops, BP has no performance guarantee, i.e., BP may output b u (σ 2 u ) = P[σ 2 u | A], and even the convergence of BP is not guaranteed, i.e., the value of lim t→∞ b t u (σ 2 u ) may not exist. Even though BP doesn't have the performance and convergence guarantees, it has been applied to many applications having loops with empirical successes Murphy et al. (1999); Liu et al. (2012); Yanover et al. (2006). We propose BP for crowdsourced regression under our model assuming the finite set of the worker variance S = {σ 2 1 , . . . , σ 2 S }. If the support S is a infinite set, e.g., a continuous interval, running BP becomes computationally intractable since the messages become functions on the infinite support. To address the issue, several methods approximating messages have been studied Minka (2001); Wald & Globerson (2014) ;Noorshams & Wainwright (2013); Moallemi & Roy (2009). However, such an extra layer of approximation renders their performance analysis significantly more challenging. Performance Guarantees on BP In this section, we provide the theoretical guarantees of BP estimator under our model for the crowdsourced regression. We first describe our proposed task assignment. ( , r)-regular task assignment. In general, the performance of an estimator in our model differs depending how tasks are assigned to workers. 
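Since equations (10)-(13) are not reproduced in this excerpt, the following sketch implements the standard sum-product updates on the factor graph just described: each worker variance is a variable over the finite support S, each task contributes a factor C_i obtained by integrating out μ_i in closed form, and the final position estimate plugs the approximate posterior over σ²_{M_i} into the precision-weighted posterior mean. The combination rule used for the final estimate is one natural choice and may differ in detail from the paper's equation (13); all function names are ours, and the code pairs with the simulator sketched above.

```python
import itertools
import numpy as np

def bp_crowd_regression(assignment, answers, support, tau2, d, n_workers, n_iter=20, nu=None):
    """Sum-product BP sketch for P[sigma^2 | A] ∝ prod_i C_i(A_i, sigma^2_{M_i})."""
    S = len(support)                                   # support: list of candidate variances
    tasks = list(assignment.keys())
    if nu is None:
        nu = {i: np.zeros(d) for i in tasks}           # prior means nu_i

    def posterior_mean(i, combo):
        """Posterior mean of mu_i given A_i and the hypothesis sigma^2_{M_i} = combo."""
        prec = 1.0 / tau2 + sum(1.0 / s2 for s2 in combo)
        num = nu[i] / tau2 + sum(answers[(i, u)] / s2 for u, s2 in zip(assignment[i], combo))
        return num / prec

    def log_factor(i, combo):
        """log C_i(A_i, combo): Gaussian integral over mu_i (terms constant across combos dropped)."""
        prec = 1.0 / tau2 + sum(1.0 / s2 for s2 in combo)
        m = posterior_mean(i, combo)
        quad = sum((answers[(i, u)] @ answers[(i, u)]) / s2 for u, s2 in zip(assignment[i], combo))
        log_det = -0.5 * d * sum(np.log(s2) for s2 in combo) - 0.5 * d * np.log(prec)
        return log_det - 0.5 * (quad - prec * (m @ m))

    combos = {i: list(itertools.product(support, repeat=len(assignment[i]))) for i in tasks}
    factor = {i: np.array([log_factor(i, c) for c in combos[i]]) for i in tasks}
    factor = {i: np.exp(f - f.max()) for i, f in factor.items()}      # stabilised, unnormalised

    msg_w2t = {(u, i): np.ones(S) / S for i in tasks for u in assignment[i]}  # m_{u -> i}
    msg_t2w = {(i, u): np.ones(S) / S for i in tasks for u in assignment[i]}  # m_{i -> u}
    tasks_of = {u: [] for u in range(n_workers)}
    for i in tasks:
        for u in assignment[i]:
            tasks_of[u].append(i)

    for _ in range(n_iter):
        for i in tasks:                                # factor-to-variable updates
            for pos, u in enumerate(assignment[i]):
                out = np.zeros(S)
                for c_idx, combo in enumerate(combos[i]):
                    w = factor[i][c_idx]
                    for q, v in enumerate(assignment[i]):
                        if v != u:
                            w *= msg_w2t[(v, i)][support.index(combo[q])]
                    out[support.index(combo[pos])] += w
                msg_t2w[(i, u)] = out / out.sum()
        for i in tasks:                                # variable-to-factor updates
            for u in assignment[i]:
                out = np.ones(S)
                for j in tasks_of[u]:
                    if j != i:
                        out *= msg_t2w[(j, u)]
                msg_w2t[(u, i)] = out / out.sum()

    belief = {u: np.ones(S) for u in range(n_workers)} # estimated marginals of sigma^2_u
    for u in range(n_workers):
        for j in tasks_of[u]:
            belief[u] *= msg_t2w[(j, u)]
        belief[u] /= belief[u].sum()

    mu_hat = {}
    for i in tasks:                                    # plug beliefs into the weighted mean
        post = factor[i].copy()
        for c_idx, combo in enumerate(combos[i]):
            for q, u in enumerate(assignment[i]):
                post[c_idx] *= msg_w2t[(u, i)][support.index(combo[q])]
        post /= post.sum()
        mu_hat[i] = sum(p * posterior_mean(i, c) for p, c in zip(post, combos[i]))
    return belief, mu_hat
```

With ℓ workers per task and |S| variance levels, each factor is tabulated over |S|^ℓ hypotheses, which is cheap for the small ℓ used throughout the paper.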
We propose a simple assignment scheme, referred to as ( , r)-regular task assignment, popularly adopted in crowdsourcing Dalvi et al. (2013); Liu et al. (2012); Karger et al. (2011Karger et al. ( , 2013Karger et al. ( , 2014; Ok et al. (2016). The assignment graph G is generated as a random ( , r)-regular bipartite graph, i.e., G is drawn uniformly at random out of all ( , r)-regular graphs, where each task is assigned to workers and each worker is assigned r tasks. The presentation of our main results is two-fold. First, in Section 3.1, we provide a sharp upper bound on MSE achieved by BP. This implies that BP approaches optimal MSE if both the number of tasks assigned to one worker (i.e., r) and the total number of tasks (i.e., n) increase. However, our simulations suggest BP is near optimal even for finite r. We make this precise in Section 3.2, where we compare BP directly to the optimal estimator, quantifying the relative gap. We show that under some mild assumptions on the model parameters, this gap vanishes even if we maintain finite r, proving a stronger notion of optimality. Quantitative Performance of BP We first present a performance guarantee of the BP estimator in Theorem 1 whose proof is given in Section 4.1. Theorem 1. Consider the crowdsourced regression model with S = {σ 2 1 , ..., σ 2 S } and a random ( , r)-regular graph G consisting of n tasks and ( /r)n workers. For given ε, σ 2 min , σ 2 max > 0 and ≥ 2, if (i) |σ 2 s − σ 2 s | > ε and σ 2 min ≤ σ 2 s ≤ σ 2 max for all 1 ≤ s = s ≤ S, and (ii) r, k ≤ log log n, then for sufficiently large n, k iterations of BP achieves where the expectation is taken with respect to the distribution of G and A and we define We provide three interpretations of Theorem 1. First, consider an oracle estimator that knows the hidden variances σ 2 u 's and makes optimal inference as follows: This gives the MSE ofμ ora i (A, σ 2 ): Note that the oracle estimatorμ ora always outperforms even the optimal estimatorμ * in (7), providing a lower bound on the MSE of any estimator. This coincides with (15a) in our bound, implying that the gap to oracle performance is (15b). We emphasize oracle here as, without an access to an oracle, the analysis of the actual optimal estimator should give a tighter lower bound than (15a). This is made precise in Theorem 2. Second, for sufficiently large n, when the number r of per-worker tasks and the total iterations k grow with n, BP's performance approaches that of the oracle estimator, as (15b) vanishes, i.e., This is because under ( , r)-regular task assignment, for increasing r with the total number of tasks n, BP estimator accurately infers all workers' variances and thus optimally estimates the true positions µ. Note that the above performance limit holds for any r = ω(1), implying that a reasonable number of tasks per worker is enough in practice to achieve BP's optimality. Third, we compare BP with a simple average-based estimatorμ avg where the expectation is taken with respect to the distribution of A. MSE(μ avg i (A)) increases proportionally to the arithmetic mean of variances of workers assigned to each task, while MSE(μ BP(k) i (A)) is proportional to the harmonic mean of variances of workers and prior, i.e., . This gap can be made arbitrarily large by increasing the difference between the maximum and minimum variances of workers. 
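The contrast drawn here between the oracle and the naive average is concrete enough to compute directly. The snippet below (our own illustration) evaluates the per-task MSE of the oracle's precision-weighted mean, d / (1/τ² + Σ_u 1/σ²_u), against that of the plain average, d · mean(σ²_u) / ℓ; a single accurate worker drives the first down but barely affects the second.

```python
import numpy as np

def per_task_mse(sigma2_list, tau2, d=2):
    """Per-task MSE (in d dimensions) of the oracle precision-weighted mean vs. the plain average."""
    sigma2 = np.asarray(sigma2_list, dtype=float)
    ell = len(sigma2)
    mse_oracle = d / (1.0 / tau2 + np.sum(1.0 / sigma2))   # harmonic-mean behaviour
    mse_average = d * sigma2.mean() / ell                   # arithmetic-mean behaviour
    return mse_oracle, mse_average

# One accurate worker among otherwise noisy ones dominates the oracle but not the average:
print(per_task_mse([10.0, 1000.0, 1000.0], tau2=1e6))       # approximately (19.6, 446.7)
```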
For example, if a single worker u ∈ M i assigned task i has high accuracy, i.e., σ 2 u 0, and the others' variances are x's, then Hence, the existence of a single worker with high precision in each task can reduce MSE significantly. Our estimator iteratively refines its belief and identifies those good workers, when r is sufficiently large. Relative Performance of BP We provide the relative performance of BP estimator by comparing with the optimal estimator, in particular, when the quantitive guarantee in Theorem 1 is not tight, i.e., r is small, in Theorem 2 whose proof is provided in Section 4.2. Theorem 2. Consider the crowdsourced regression model with S = {σ 2 min , σ 2 max } and a random ( , r)-regular graph G consisting of n tasks and ( /r)n workers. For given ε > 0 and , there exists a constant C ,ε , depending on only and ε, such that if (i) σ 2 min + ε ≤ σ 2 max ≤ 2σ 2 min (ii) C ,ε ≤ r ≤ log log n, and (iii) k ≤ log log n, then for sufficiently large n, where the expectation is taken with respect to the distribution of G and A and E ,S is defined in (16). As a corollary, it follows that when we set k increasing with n, e.g., k = log log n, we have an asymptotic optimality of BP estimator: This result is not directly comparable to Theorem 1 as they apply to different regimes of the parameters. In particular, the oracle optimality gap (15b) does not vanish for finite r. We believe this is because the oracle is too strong to compete against when r is small. Hence, we need to compare against a more practical lower bound on the optimal estimator as described in (8) that does not rely on the oracle. Such a comparison can be made rigorous by constructing the following lower bound on the fundamental limit. We use the fact that the random ( , r)-regular bipartite graph has a locally tree-like structure with depth k ≤ log log n Bollobás (1998) and BP is exact on the local tree Pearl (1982). By revealing the ground truths at the boundary of this local tree of depth k, we construct a weaker oracle estimator that gives a tighter lower bound. Directly analyzing the performance of such a weaker oracle is hard. Instead, we show that the gap between our estimator (that does not have the ground truths at the boundary of local tree) and the weaker oracle decreases as the depth of the tree increases. This is made clear by establishing decaying correlation from the information on the outside of the local tree to the root of the tree. However, for the analytic tractability, we need a constant lower bound of r and S = 2, i.e., r ≥ C ,ε and |S| = 2 as similar conditions are required in the analysis Ok et al. (2016); Mossel et al. (2014). In addition to S = 2, our analysis further requires the assumption on S, i.e., σ 2 min + ε ≤ σ 2 max ≤ 2σ 2 min . However, the condition on σ max , σ min is the most challenging/important regime for inference algorithms since the ratio of the maximum and minimum variances is bounded by some constant, i.e., it is hard to distinguish the workers' variances. If the ratio is large, most inference algorithms would provide outputs of enough quality to use in practice. The experimental results in Section 5.2 indeed confirm that BP is optimal for general regimes violating the conditions assumed in Theorem 1. Proof of Theorem 1 We start with a bound on the conditional expectation of MSE ofμ be the conditional expectation given σ 2 =σ 2 . Using Cauchy-Schwarz inequality for random variables X and Y , i.e., where the detailed steps for (18) is provided in Appendix A.4. 
Let ρ ∈ W denote a worker chosen uniformly at random for given G = (V, W, E). Then it is enough to show that for σ 2 ∈ S W such that σ 2 u =σ 2 u for every u ∈ W , where the first expectation is taken with respect to G. It is known that a random ( , r)-regular bipartite graph G is a locally tree-like. Formally, from Lemma 5 in Karger et al. (2014), it is straightforward to check that where we let G ρ,2k = (V ρ,2k , W ρ,2k , E ρ,2k ) denote the subgraph of G induced by all the nodes within (graph) distance 2k from root ρ. From (20), it follows that for any given σ 2 ∈ S W , where from the choice of r, k ≤ log log n and constant , it follows that for sufficiently large n, Recalling (14), it follows that if G ρ,2k is a tree, where we let 2k ] is concentrated at σ 2 ρ =σ 2 ρ . We present Lemma 1 providing the concentration formally whose proof is given in Appendix A.1. Proof of Theorem 2 Recalling the calculations of µ * i (A) and µ BP(k) i (A) in (7) and (13), the only difference between them is the estimation on σ 2 u , i.e., BP uses b k u (σ 2 u ) instead of P[σ 2 u | A]. Using Cauchy-Schwarz inequality and some calculus, similarly as (18), we can quantify an upper bound on the expectation of the gap between MSE's of µ * i (A) and µ BP(k) i (A) in terms of the difference between P[σ 2 u | A] and b k u (σ 2 u ) as follows: where the detailed steps for (25) is provided in Appendix A.5. Hence it is enough to show that for sufficiently large m = r n with constant and r, k ≤ log log n, where the expectation is taken with respect to G and A. Recalling (14), it is clear that in the case of = 1, BP is exact, i.e., b k u (σ 2 u ) = P[σ 2 u | A], since G is the set of disjoint one-level trees each of which root corresponds to a worker. We will show (26) for ≥ 2. Let ρ ∈ W denote a worker chosen uniformly at random for given G = (V, W, E) and fixσ 2 ∈ S W . Recalling (20) and (22), it follows that for sufficiently large n, We present Lemma 2 providing an upper bound on the last term of (27), which implies (26) with (27) and (23) and completes the proof of Theorem 2. A rigorous proof of Lemma 2 is given in Appendix A.2. Here, we briefly provide the underlying intuition on the proof. As Lemma 1 states, if there is the strictly positive gap ε > 0 between σ 2 min and σ 2 max , one can recover σ 2 ρ ∈ {σ 2 min , σ 2 max } with small error using only the local information, i.e., A ρ,2k . On the other hand, A \ A ρ,2k is far from ρ and is less useful on estimating σ 2 ρ . In the proof of Lemma 2, we quantify the decaying rate of information with respect to k. Experiment Results In this section, we present experimental results that support our analytical findings and the superiority of BP for the crowdsourced regression, where we consider the task of locating objects of interest in images. Tested Algorithms We test four algorithms: BP, Oracle, Average, and Simple as implemented in what follows: • BP: We implement BP without any use of prior information on true positions by taking limit τ → ∞, i.e., it outputs µ BP (A) in (10)-(12) with lim τ →∞ C i (A i , σ 2 Mi ). We terminate BP at the maximum of 100 iterations or after checking convergence of messages. • Oracle: For comparison, we consider an artificial estimator having free access to workers' variances σ 2 . It outputs µ ora i (A, σ 2 ) := lim τ →∞μi (A i , σ 2 Mi ). |Nu| i∈Nu A iu − µ avg i (A) 2 . 
In our model, for mathematical rigorousness, we assumed that we have some information of the true position µ i in advance as specified by the density of µ i as the spherical Gaussian with mean ν i and variance τ . Since the exact knowledge of such statistical information might not be easy to obtain in practice, we implement BP with no prior information on true positions by taking the limit of BP as τ → ∞. Note that our theoretical guarantee on BP still holds in this regime. To obtain a lower bound on the minimum MSE that any estimator can achieve, we use Oracle for computational tractability, instead of the optimal estimator which is computationally intractable while it provides a tighter lower bound. Synthetic Datasets We first test synthetic datasets generated by the set of random ( , r)-regular bipartite graphs, having 200 object detection tasks, where each task i is associated with the true position µ i chosen uniformly at random in a 100 × 100 image. We randomly choose each worker's variance using S small = {10, 100, 1000} or S large = {10, 100, 5000}. The simulation results with varying either r or are plotted in Figures 1a-1b and Figures 1c-1d, respectively, where we take the average of 50 random samples. Optimality of BP. As discussed in Section 3.1, in Figures 1a-1b, we observe that MSE of BP matches with that of Oracle when each worker is assigned just 5 or more tasks. In addition, Figures 1c-1d show that when increases, MSE of BP decreases at the optimal rate of Oracle and the gap between MSE's of BP and Oracle is negligible. The other two algorithms decrease at much slow rate but also the MSE differences between them and Oracle increase. For example, in order to make MSE less than 100 with S small , BP and Oracle require only = 3, but Simple and Average require = 4 and 9, respectively, i.e., Simple and Average need to hire more workers than BP. Tolerance to high variance worker. Comparing Figures 1c-1d, we observe that with the minimum of workers' variance fixed, for small and large maximum variances of workers, BP sustains good performance, whereas Average performs bad for the large maximum variance. In particular, the performance of Average is extremely degenerated by increasing the worst workers' variance to 5000 from 1000, while BP is not. This is because BP is able to identify good workers and exploit their answers as the oracle estimator so that BP is tolerant to spammers who have large variances. It is interesting to see that MSE of Simple estimating workers' variance decreases as r increases, similarly as BP. Visual Object Classes Datasets In this section, we provide the experiment results demonstrating the impact of crowdsourced regression on real-world machine learning tasks. To do so, we investigate how much an efficient estimator, refining the crowdsourced training dataset, improves the performance of convolutional neural network (CNN) for the object detection problem. The vision task requires a huge amount of the training datasets often obtained by the crowdsourcing system, e.g., Amazon's mechanical turk Deng et al. (2009). Emulating a crowdsourcing system. We use two PASCAL visual object classes (VOC) datasets from Everingham et al. (2015): VOC-07 and VOC-12 consisting of 12, 608 and 27, 450 annotated objects in 5, 011 and 11, 540 images, respectively. Each object is annotated by a rectangular bounding box expressed by two opposite corner points. 
We emulate the crowdsourcing system with a random ( = 3, r = 10)-regular bipartite graph between images and virtual workers each of which has variance drawn uniformly at random from support S = {10, 1000}. The choice of 10 and 1000 is made our experimental experience on object annotations, as shown in in Figure 3. This means that each image is assigned to 3 workers and each worker is assigned 10 images ( 24.2 objects) to estimate all the corner points of the bounding boxes of objects in the set of images. We then gather the noisy estimations on the corner points and run BP, Average, Simple, and Oracle to produce four different crowdsourced training datasets, whose MSE values are presented in Table 1. Performance evaluation. We train a CNN of single shot multibox detector (SSD) 300 × 300 model developed in Liu et al. (2016) 2 with the crowdsourced datasets from different estimators, separately. Then we compare the performance of SSD trained with different training datasets in terms of the mean average precision (mAP) which is a popular benchmarking metric for the datasets (see Table 1). Comparing mAPs of Average and Simple, that of BP is 5% higher in the experiment with VOC-07+12 datasets. Note that achieving a similar amount of improvement is highly challenging, as evidenced in recent extensive research efforts on smarter machine learning algorithms. For example, Faster-RCNN in Ren et al. (2015) is proposed to improve the mAP of Fast-RCNN in Girshick (2015) from 70.0% to 73.2%. Later, SSD in Liu et al. (2016) is proposed to achieve 4% mAP improvement over Faster-RCNN. In addition to the mAP improvement, more accurate training dataset with less MSE leads to more qualified detection with higher overlap ratio. We present how SSD detect objects in Figure 2, where we observe that the training dataset from BP or Oracle enables SSD to not only detect more objects but also draw tighter bounding boxes than Average or Simple. Conclusion We propose a new probabilistic model to address the problem of aggregating real-valued responses from a crowd of workers, when worker noise levels are heterogeneous. We pose this crowdsourced regression problem as an inference problem over a graphical model, naturally motivating the use of belief propagation. Typically, the performance of a BP algorithm is analytically intractable. However, we bring ideas from a long line of work in BP (e.g. Mossel et al. (2014) ) to provide sharp analysis on the performance achieved by BP under our model and show its optimality for a broad range of parameters. A promising research direction with significant practical interest is the question of how to adaptively assign tasks to make more efficient use of the budget. As workers typically arrive in an online fashion, such heuristics are used widely in practice with little theoretical understanding. Efficient and principled schemes have given significant gain in, for instance, voting in social media Jun et al. (2016). There are recent advances for adaptive crowdsourced classification Ho et al. (2013); Khetan & Oh (2016), but these approaches rely on the discrete nature of the problem. For crowdsourced regression, it requires innovative ideas to characterize confidence intervals for non binary responses. A Supplementary Material of Theoretical Analysis A.1 Proof of Lemma 1 Let s ρ ∈ {1, . . . , S} be the index ofσ 2 ρ , i.e.,σ 2 ρ = σ 2 s . 
Consider the classification problem recovering given but latent s from A ρ,2k in the following: where the optimal estimator, denoted byŝ * ρ , minimizes the classification error rate. By standard Bayesian argument, it is not hard to check that the optimal estimatorŝ * ρ is given as follows: From the above, it is not hard to check that Thus an upper bound of the error rate of an arbitrary estimator for (29) will provide an upper bound of Eσ2 P[σ 2 ρ =σ 2 ρ | A ρ,2k ] . Consider a simple estimator for (29), denoted byŝ † ρ , which uses only A ρ,2 ⊂ A ρ,2k as follows: where we define From now on, we condition σ 2 ∂ 2 ρ additionally to σ 2 ρ where ∂ 2 ρ is the set of ρ's grandchildren in G ρ,2 . For every i ∈ N ρ , we define Since the conditional density of Z i given σ 2 =σ 2 is φ(Z i | 0, a i ), the conditional density of Z i 2 2 /a i is χ 2distribution with degree of freedom d. In addition, it is not hard to check that Z i 2 2 is sub-exponential with parameters Thus it follows that for all |λ| ≤ min i∈Nρ 1 2ai , From this, it is straightforward to check that rσ 2 (A ρ,2 ) = i∈Nρ Z i 2 2 is sub-exponential with parameters ((6σ 2 Using Bernstein bound, we have where we let Pσ2 denote the conditional probability given σ 2 =σ 2 . Using Hoeffding bound with (33), it follows that Combining (34) and (35) and using the union bound, it follows that where for the first inequality we use |σ 2 s − σ 2 s | ≥ ε for all 1 ≤ s , s ≤ S such that s = s . Hence, noting thatŝ † cannot outperform the optimal oneŝ * in (31), this performance guarantee onŝ † in (36) implies (24) and completes the proof of Lemma 1. A.2 Proof of Lemma 2 We start with several notations for convenience. For u ∈ W ρ,2k , let T u = (V u , W u , E u ) be the subtree rooted from u including all the offsprings of u in tree G ρ,2k . Note that T ρ = G ρ,2k . We let ∂W u ⊂ W ρ,2k denote the subset of worker on the leaves in T u and let A u := {A iv : (i, v) ∈ E u }. Since each worker u's σ 2 u is a binary random variable, we define a function s u : S → {+1, −1} for the givenσ 2 as follows: It is enough to show since for each u ∈ W , P[σ 2 u = σ 2 1 ] = P[σ 2 u = σ 2 2 ] = 1 2 . To do so, we first define Then Using the above definitions of X u and Y u and noting |X u − Y u | ≤ 2, it is enough to show that for given non-leaf worker u ∈ W ρ \ ∂W ρ , where we let ∂ 2 u denote the set of grandchildren of u in T u . To do so, we study certain recursions describing relations among X and Y . For notational convenience, we define g + iu and g − iu as follows: where we may omit A i in the argument of g + iu and g − iu if A i is clear from the context. Recalling the factor form of the joint probability of σ 2 in (9) and using Bayes' theorem with the fact that P[s u (σ 2 u ) = +1 | A u ] = 1+Xu 2 and some calculus, it is not hard to check From the above, it is straightforward to check that where we let ∂u be the task set of all the children of worker u and ∂ u i be the worker set of all the children of i in tree T u . Similarly, we also have For simplicity, we now pick an arbitrary worker u ∈ W ρ which is neither the root nor a leaf, i.e., u / ∈ ∂W ρ and u = ρ, so that ∂ 2 u = ( − 1) · (r − 1). It is enough to show (38) for only u. To do so, we will use the mean value theorem. We first obtain a bound on the gradient of h u (x) for x ∈ [−1, 1] ∂ 2 u . Define g + u (x) := i∈∂u g + iu (x ∂ui ) and g − u (x) := i∈∂u g − iu (x ∂ui ). 
Using basic calculus, we obtain that for v ∈ ∂ u i, Using the fact that for x ∈ [−1, 1] ∂ 2 u , both g + u and g − u are positive, it is not hard to show that We note here that one can replace g − u /g + u with g + u /g − u in the upper bound. However, in our analysis, we use (42) since we will take the conditional expectation Eσ2 which takes the randomness of A generated by the condition σ 2 =σ 2 . Hence X u and Y u will be closer to 1 than −1 thus g − u /g + u will be a tighter upper bound than g + u /g − u . From (42), it follows that for x ∈ [−1, 1] ∂ 2 u and v ∈ ∂ u i, where we define Further, we make the bound independent of x ∂ui ∈ [−1, 1] ∂ui by taking the maximum of |g uv (x ∂ui )|, i.e., where we define Now we apply the mean value theorem with (43) It follows that for given X ∂ 2 u and Y ∂ 2 u , there exists λ ∈ [0, 1] such that where for the first and last inequalities, we use the mean value theorem and (43), respectively. We note that each term in an element of the summation in the RHS of (44) is independent to each other. Thus, it follows that where we define function Γ iu (x ∂ui ; A i ) for given x ∂ui ∈ [−1, 1] ∂ui as follows: . Note that the assumption on σ 2 min and σ 2 max , i.e., σ 2 min + ε ≤ σ 2 max < 5 2 σ 2 min . This implies Hence, for constant and ε > 0, it is not hard to check that there is a finite constant η with respect to r such that where η may depend on only ε, σ 2 min , and σ 2 max . In addition, we also obtain a bound of the last term of (45), when r is sufficiently large, in the following lemma whose proof is presented in Section A.3. A.4 Proof of Inequality Hence, using Cauchy-Schwarz inequality for random variables, it directly follows that For any σ 2 Mi ∈ S , the conditional density of the random vectorμ i (A i , σ 2 Mi ) − µ i conditioned on σ 2 =σ 2 is identical to B.1 Worker's Noisy Annotation To give an intuition on the choice of S = {10, 1000} in our experiment, we first note that the estimation of a worker u with σ 2 u = 1000 on a task is concentrated on the disk of radius 50 centered at the true position with probability more than 0.7, where the average size of images and bound boxes in VOC-07/12 are 359.5 × 496.2 and 113.5 × 182.6, respectively. We also provide few examples of the worker u's noisy annotations with σ 2 u = 10 and σ 2 u = 1000 in Figures 4a and 4b, respectively. B.2 Hyper Parameter Settings of Single Shot Multibox Detector We use the following hyper parameter settings. For VOC-07+12, we trained 120, 000 iterations with initial learning rate 4 × 10 −5 and decrease it by factor of 0.1 at iteration 80, 000 and 100, 000 as Liu et al. suggested. For VOC-07, we trained model 60, 000 iterations with initial learning rate 10 −5 and reduce it by factor of 0.1 at iteration 40, 000. The other hyper parameter setting is equivalent to suggested in Liu et al. (2016).
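To make the worker noise model of B.1 and the experimental setup concrete, the sketch below reproduces the simulation described in the experiments: a random (3, 10)-regular assignment of images to virtual workers whose variances are drawn uniformly from S = {10, 1000}, with Gaussian noise on the reported coordinates. It is only an illustration, not the authors' code: the true coordinates, the one-dimensional simplification (one coordinate per task), the configuration-model shuffle used to build the assignment (which may occasionally repeat a worker on a task), and the assumption that Oracle is the inverse-variance-weighted mean are all ours.

```python
import numpy as np

rng = np.random.default_rng(0)

n_tasks, ell, r = 1000, 3, 10            # each image rated by 3 workers, each worker rates 10 images
n_workers = n_tasks * ell // r           # regularity requires n_tasks * ell == n_workers * r
support = np.array([10.0, 1000.0])       # worker noise variances, S = {10, 1000}

# Random (3, 10)-regular bipartite assignment via a shuffled configuration model.
stubs = rng.permutation(np.repeat(np.arange(n_workers), r))
assign = stubs.reshape(n_tasks, ell)     # assign[i] = the workers who annotate task i

sigma2 = rng.choice(support, size=n_workers)   # each worker's variance, uniform over S
mu = rng.uniform(0.0, 300.0, size=n_tasks)     # hypothetical true 1-D corner coordinates

# Each worker reports the true coordinate plus Gaussian noise with her own variance.
A = mu[:, None] + rng.normal(size=(n_tasks, ell)) * np.sqrt(sigma2[assign])

avg_est = A.mean(axis=1)                       # "Average": unweighted mean of the reports
w = 1.0 / sigma2[assign]                       # "Oracle": weights by the true inverse variances
oracle_est = (w * A).sum(axis=1) / w.sum(axis=1)

mse = lambda est: float(np.mean((est - mu) ** 2))
print(f"Average MSE: {mse(avg_est):7.2f}   Oracle MSE: {mse(oracle_est):7.2f}")

# Check of the figure in B.1 (assuming sigma^2 is the per-coordinate variance): a 2-D
# Gaussian offset with variance 1000 per coordinate lies within a disk of radius 50
# with probability 1 - exp(-50**2 / (2 * 1000)) ~= 0.71, i.e. "more than 0.7".
print(f"P(|offset| <= 50 | sigma^2 = 1000) = {1 - np.exp(-50**2 / (2 * 1000)):.2f}")
```

With these illustrative numbers, the variance-aware Oracle estimate has a much lower MSE than the unweighted Average; that gap is what BP aims to close without being told the worker variances.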
2017-02-28T16:15:13.000Z
2017-02-28T00:00:00.000
{ "year": 2017, "sha1": "d7226001e444ad2658cb499ab7979a6087ecede9", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d7226001e444ad2658cb499ab7979a6087ecede9", "s2fieldsofstudy": [ "Computer Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Mathematics" ] }
7044272
pes2o/s2orc
v3-fos-license
N2 Gas Plasma Inactivates Influenza Virus by Inducing Changes in Viral Surface Morphology, Protein, and Genomic RNA We have recently treated with N2 gas plasma and achieved inactivation of bacteria. However, the effect of N2 gas plasma on viruses remains unclear. With the aim of developing this technique, we analyzed the virucidal effect of N2 gas plasma on influenza virus and its influence on the viral components. We treated influenza virus particles with inert N2 gas plasma (1.5 kpps; kilo pulses per second) produced by a short high-voltage pulse generated from a static induction thyristor power supply. A bioassay using chicken embryonated eggs demonstrated that N2 gas plasma inactivated influenza virus in allantoic fluid within 5 min. Immunochromatography, enzyme-linked immunosorbent assay, and Coomassie brilliant blue staining showed that N2 gas plasma treatment of influenza A and B viruses in nasal aspirates and allantoic fluids as well as purified influenza A and B viruses induced degradation of viral proteins including nucleoprotein. Analysis using the polymerase chain reaction suggested that N2 gas plasma treatment induced changes in the viral RNA genome. Scanning electron microscopy analysis showed that aggregation and fusion of influenza viruses were induced by N2 gas plasma treatment. We believe these biochemical changes may contribute to the inactivation of influenza viruses by N2 gas plasma. Introduction Infection mediated by medical devices is thought to be a major contributor to hospital-acquired infections [1]. However, medical devices and instruments are often not sufficiently robust to withstand repeated rounds of sterilization by autoclaving or dry-heat treatment [2]. Alternative sterilization techniques involve the generation of -rays or electron beams, which require expensive facilities and are not appropriate for routine daily use [3]. Although ethylene oxide gas (EOG) can be used to sterilize heat-sensitive medical instruments, the gas is both toxic and carcinogenic, which limits its usage [4]. Recently, sterilization using hydrogen peroxide gas plasma was proposed, although it is ineffective against endotoxins and lipopolysaccharides (LPSs) [5,6]. Residual amounts of endotoxin derived from bacteria may cause symptoms including fever [7]. A gas plasma is generated by removing electrons from a gas to produce a highly excited mixture of charged nuclei and free electrons [8,9]. Recently, we succeeded in generating N 2 gas plasma using a fast high-voltage pulse from a static induction (SI) thyristor power supply [6,[8][9][10]. N 2 gas plasma treatment efficiently inactivates bacteria and bacterial spores, as well as degrading LPS, which showed a more than 5 log reduction in 30 min [6]. The value (the decimal reduction time) of Geobacillus stearothermophilus was less than 1.3 minutes, whereas that of Aspergillus niger was even smaller [6]. However, the effect of N 2 gas plasma on viruses remains unclear. Therefore, we treated influenza virus, as a representative enveloped virus, with N 2 gas plasma and analyzed the sterilizing efficiency. We also analyzed the effect of this treatment on viral components such as proteins and RNAs. The distance between the high-voltage electrode and the earth electrode was 50 mm. N 2 gas flow rate was 10 L/min. During N 2 gas plasma generation, the sample box was kept at 0.5 atmospheric pressure. (b) Photograph of discharge region during generation of N 2 gas plasma. A static induction (SI) thyristor was used as a pulsed power supply. 
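For orientation, the decimal reduction time (D value) quoted above for Geobacillus stearothermophilus converts directly into exposure times if one assumes the usual log-linear inactivation kinetics; the 6-log target below is an illustrative choice, not a figure from this study: t(n-log reduction) = n × D, so with D ≤ 1.3 min a 6-log reduction of the spores would require no more than 6 × 1.3 ≈ 7.8 min of N2 gas plasma exposure.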
Viruses Nasal Aspirates. Nasal aspirates were collected from children at the Baba pediatric clinic (Kadoma, Osaka, Japan) as described previously [11]. Briefly, saline was introduced into the nasal cavity, and then the wash solution was aspirated using Belvital (Melisana, Nogent-sur-Marne, France (NGK Insulators, Ltd) was used as a device to produce N 2 gas plasma by generating a fast high-voltage pulse utilizing a SI thyristor power supply ( Figure 1). The sample was exposed to N 2 gas (99.9995%, Okano, Okinawa, Japan) at about 0.5 atmospheres prior to applying the high-voltage pulse. A 20 L aliquot of each sample solution was dropped onto a cover glass, air dried, and then treated with N 2 gas plasma at 1.5 kpps (kilo pulses per second). Each sample on the cover glass was subsequently recovered by dissolving in 20 L of distilled water (Otsuka Pharmaceuticals Co., Tokyo, Japan). Inc., Otsu, Japan) and random primers to make cDNA by using the following temperature regime: 65 ∘ C for 5 min, 4 ∘ C for 5 min, and 42 ∘ C for 60 min. The resultant cDNAs were subjected to PCR for matrix protein (M1), hemagglutinin (HA), neuraminidase (NA), and nonstructural protein (NS) using Takara Ex Taq (Takara Bio Inc.). The temperature cycling conditions used for the PCR were 95 ∘ C for 5 min, 25 cycles of 95 ∘ C for 1 min, 55 ∘ C for 1 min, and 72 ∘ C for 1 min with one final cycle of 72 ∘ C for 10 min. PCR was carried out using the following primers modified from previous papers [13,14]: The intensity of the amplified bands from the PCR products was semiquantitatively analyzed by agarose gel electrophoresis. Bands in test samples were visually compared to those in untreated controls. The amplified PCR products generated from each pair of primers were verified by DNA sequencing. Hemagglutination Assay. Samples were serially diluted two-or three-fold in 25 L of PBS in V-shaped well plates, and an equal volume of 1% chicken erythrocytes in suspension was added. The mixture was then incubated at room temperature for 1 h. The agglutination pattern was read, and the hemagglutination titer was defined as the reciprocal of the last dilution of sample that showed hemagglutination. SEM was then performed using a JSM-6320F (JEOL Ltd., Tokyo, Japan) instrument at a magnification of x50,000. Influenza Virus Bioassay Using Embryonated Eggs. N 2 gas plasma treated samples were injected into 11-day-old chicken embryonated eggs. The eggs were cultured at 37 ∘ C for 48 h before allantoic fluid was collected. The obtained samples were then subjected to immunochromatography for influenza virus and analyzed by hemagglutination assay. Statistical Analysis. Results were compared by nonpaired Student's t-test. In cases where < 0.05, the differences were considered significant. Results First, we investigated the N 2 gas plasma treated influenza virus (A/PR/8/34) in infected allantoic fluid and determined whether influenza virus was inactivated by N 2 gas plasma treatment (Figure 2). Samples including influenza virus (A/PR/8/34) at 3.16 × 10 14 TCID 50 /mL were treated with N 2 gas plasma for 5 min and injected into 11-day-old chicken embryonated eggs. After 48 h incubation at 37 ∘ C, the allantoic fluids were subjected to immunochromatography for influenza A virus to check whether the N 2 gas plasma treated influenza virus had proliferated in the embryonated eggs. Influenza viruses derived from all six independent spots treated with N 2 gas plasma for 5 min were unable to proliferate. 
By contrast, influenza viruses derived from six untreated spots did proliferate all. Morever, these results were consistent with those obtained from the hemagglutination assay. In addition, an infection assay using MDCK cells showed that viral titers of TCID 50 /mL changed from 7.5 × 10 4 and 10 × 10 4 at 0 min to 5.6 × 10 3 and 10 × 10 3 at 0.5 min and 1.3 × 10 3 and 1.0 × 10 3 at 1 min (Figure 3). These results also supported the inactivation of influenza virus by N 2 gas plasma. Next, the effect of N 2 gas plasma on viral proteins was investigated. The results from immunochromatography show that NP of influenza A and B viruses was decomposed by N 2 gas plasma treatment of (i) nasal aspirates for 5 min (Figures 4(a) and 4(b)), (ii) allantoic fluid for 5 min or 15 min (Figure 4(c)), and (iii) purified virus for 15 min (Figure 4(d)). Specifically, a band corresponding to NP in the test line A and B was detected at 0 min, but it became less obvious after N 2 gas plasma treatment for 5 or 15 min. A band in the reference line was detected at all time points indicating that the immunochromatography was working as anticipated. Regarding the nasal aspirates, all 3 influenza A and 3 influenza B samples showed a similar tendency. Based on this result we conclude that NP in the nasal aspirates and allantoic fluid was decomposed by N 2 gas plasma treatment. Next, ELISA using anti-influenza B virus NP antibody was carried out to verify degradation of NP in influenza B virus derived from infected allantoic fluid that had been treated with N 2 gas plasma ( Figure 5). Within 5 min, the concentration of influenza B virus NP was decreased to less than 1/6 for B/Gifu/2/73. These results are consistent with previous and present immunochromatography findings regarding NP of influenza A and B viruses, which was shown to be degraded by the N 2 gas plasma [15]. SDS-PAGE analysis followed by CBB staining was also used to monitor viral proteins and/or induced proteins in the allantoic fluid after infection with influenza A virus (A/PR/8/34) (Figure 6(a)). Our results show that the proteins were degraded after N 2 gas plasma treatment (1.5 kpps) for either 15 or 30 min (Figure 6(b)). A previous study reported that the major viral proteins in influenza virus derived from allantoic fluid are NA oligomer (over 70 kDa), HA1 (around 70 kDa), NA monomer and NP (around 70 kDa), and M1 and HA2 (around 30 kDa) [16]. Similarly sized bands appeared to be detected in the present SDS-PAGE and CBB stained gels of virus-infected allantoic fluid but faded after treatment of the fluid with N 2 gas plasma. Therefore, the bands observed and degraded by N 2 gas plasma may be mainly influenza viral proteins. Next, morphologies of the influenza viruses were observed by SEM (Figure 7). Our SEM observations showed that the N 2 gas plasma treatment (1.5 kpps, 5 min) disrupted fibers connecting influenza viruses in the allantoic fluid. Moreover, the N 2 gas plasma treated influenza viruses displayed a shrunken appearance. In addition, fused viruses were also observed in the treated samples, suggesting that the N 2 gas plasma may modify the viral envelope. Next, the N 2 gas plasma treated influenza viruses were subjected to viral genomic RNA extraction. We then attempted to amplify various influenza virus genes, such as M1, NS, HA, and NA, by PCR (Figure 8). 
The results showed that amplification of each of the genes was greatly repressed by N 2 gas plasma treatment (1.5 kpps; 5 and 30 min), suggesting that the viral genomic RNA was damaged following N 2 gas plasma treatment. The magnitude of the observed decrease in the amplified product varied among the different viral genes. The inhibition efficiency of HA and NA was high compared to that of M1 and NS. This may be due to the structural differences and arrangements of influenza virus segments encoding each viral gene in the viral particle. The products amplified for M1 and NS appeared to be slightly more intense after 30 min of treatment than those observed after 5 min of treatment, whereas the amplified products for NA appeared to be slightly more intense after 5 min of treatment than those observed after 30 min of treatment. Repeated experiments showed that the band intensities for M1, NS, HA, and NA at 5 min and 30 min of treatment were similar. Discussion Influenza virus can be relatively easily disinfected by chemicals such as alcohol [17,18]. The presence of organic matter interferes with the action of chemical sterilants because they can react, thereby decreasing the efficiency of disinfection. Indeed, disinfection efficiency of alcohol against influenza virus varies depending on the presence of coexisting organic material [19]. Most regulatory authorities require sterilant efficacy testing to be conducted in the presence of 5% soil [20]. Influenza virus is usually encountered in the nasal fluid of patients and allantoic fluid of eggs. In the case of medical devices, bronchoscopes in particular may be at risk of influenza virus contamination. In our study, we used 100% nasal aspirates and 100% allantoic fluid as a source of organic material. The obtained results showed that N 2 gas plasma can inactivate influenza virus in these environments. Currently, the items/areas that can be treated by the method are restricted to the size of the chamber box of the N 2 gas plasma instrument. To enable the sterilization of medical devices and larger surfaces, it would be necessary to expand the size of the discharge area. NP and genomic RNA, which are both localized to the central region of the influenza virus particle, were subject to decomposition and/or modification by treatment with N 2 gas plasma. Likewise, lipids localized in the outer envelope of the virus were also modified and/or degraded after this treatment. Microscopic investigations showed that treatment with N 2 gas plasma disrupt the fibers between virus particles in the allantoic fluid, although the significance of the fibers connecting the viruses in the untreated samples in relation to the infectivity of the virus is unclear. Previous studies have shown that oxidative stress contributes to the mechanism of action of a gas plasma. For example, the addition of oxygen to helium has been found to enhance the efficiency of inactivation in the case of bacteria [21]. In addition, oxidation and peroxidation processes on the surface of cells and within cells result in inactivation [22,23]. Furthermore, destruction of the surface structure by gas plasma may be the main mechanism underlying the inactivation of bacteria [24], which may also be the case for viruses. Although these oxidative factors contribute to the mechanism of action in N 2 gas plasma, further studies are required to identify the most critical factor(s) for inactivation. In this study, we analyzed the effect of N 2 gas plasma treatment on influenza virus, which is enveloped. 
However, the effectiveness of this treatment against various other pathogens, which may differ in resistance to disinfection, is unknown. For example, it would be interesting to investigate the effect of this treatment on nonenveloped viruses, such as norovirus or adenovirus as well as the highly resistant prion agents. Conclusion In conclusion, the present results suggest that N 2 gas plasma treatment modifies viral genomic RNA and degrades viral proteins, including NPs, as well as the viral envelope and fibers related to allantoic fluid. Taken together, the results indicate that N 2 gas plasma treatment may be an effective means of disinfection for influenza virus.
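As a quick check on the magnitude of inactivation reported for the MDCK infection assay above, the drop in TCID50 titer can be expressed as a log10 reduction. The snippet below is purely illustrative and uses only the first of the two replicate values reported at each time point (the second value at 0 and 0.5 min is ambiguous in the extracted text and is left out):

```python
import math

# First of the two TCID50/mL values reported at each time point of the
# MDCK infection assay (Results, Figure 3 discussion).
titer = {0.0: 7.5e4, 0.5: 5.6e3, 1.0: 1.3e3}

for t, v in titer.items():
    print(f"{t:.1f} min: log10 reduction = {math.log10(titer[0.0] / v):.2f}")
```

By this reading, roughly 1.8 log10 of infectivity is lost within the first minute of treatment.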
2018-04-03T01:56:36.829Z
2013-09-30T00:00:00.000
{ "year": 2013, "sha1": "150ad07108795ea2c35944832c3d93dad5bcd6c7", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/bmri/2013/694269.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e14ff39f726ce93d6a5ba4aa67f6dd205138f14e", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
203041127
pes2o/s2orc
v3-fos-license
A CONTRASTIVE ANALYSIS BETWEEN ENGLISH AND INDONESIAN KINDS OF SENTENCES The difference between English and Indonesian language becomes one of the hardest things to learn and to be understood. It could be seen from the grammar of the language and the system of communication between both languages. The aim of this study is to identify the difference and similarity of sentences between English and Indonesian language and contrastive analysis between both languages. This study used qualitative descriptive approach to find out the contrast between both languages. The sample in this study was 20 students. They were 2 semester students of English Department Students in Victory University. The result of this study showed that the main errors of the students were in Declarative Sentence (DS), Negative Sentence (NS), Interrogative Sentence (IS) and Exclamatory Sentence (ES). The students’ errors were caused by different pattern sentences whereas the Imperative Sentence (IMS) has the same pattern with English. INTRODUCTION Society uses language as a tool to communicate and interact with each other. Language plays an important role in society particularly to build social interaction during social communication. Humans cannot express their thoughts and feelings without language. We cannot imagine how life will be if the language disappears. Life will be so empty, and people will only live with themselves. Generally, all human activities are always related to language. Language is needed in all human aspects such as religion, education, occupation, business, and health. It helps people to tell their thoughts, ideas, and desires. Moreover, with language people can improve themselves in social interaction and also their career. A language is a vocal symbol or system of arbitrary, that permits all people to communicate or to interact. Kridalaksana (1993) also said that language is a symbol system sound an arbitrary language that allows people to work together, interact and identify. It can be concluded that language is the communication tool full of meaning to unite people and help them to interact, build social interaction in their community and also help them to work together. Each nation around the world has its own language that is their identity. Every language in the world is unique because every nation has a different language that differs into local or national languages. Thus, it will be hard for people from a different country to understand what other people talk in their own language. English appears as the international language to help interaction between different nations. In the globalization era, English becomes the solution to solve the different language among the nations. Even, there are some nations use English as their second national language. English is one of the international languages that plays an important role in education, occupation, and government. Indonesia is one of the nations that realized how important is the role of English in this modern era. The government of Indonesia puts English as one of the main subjects in Elementary School until Senior High School. It can be seen by the constitution 2003 number 20 paragraph 33 verse 3 about foreign language. The constitution tells that foreign language can be used as the introduction of language in education aspect to support the ability of the students. This is important to help the citizens ready to face the challenge in ten years ahead. 
During the process of language learning at school, there are some problems face by the citizens of Indonesia. These problems cause slow language improvements. Indonesia language is the children's mother tongue. Another problem is because of the different structure in grammar between English and Indonesian. Different sentence structure between English and Indonesia also causes troublesome for students such as lack of confidence to learn and speak in English. They are too afraid to speak English because their grammar is not good. Those problems exist because English and Indonesian have a different pattern of sentence. The important things to do is to compare two languages to know the differences. The step to compare two languages is doing a contrastive analysis (CA). Contrastive comes from the word "contrast" which has meaning to compare two things so that differences are made clear, showing differences when compared (Hornby, 1974:186). An analysis is separated into parts possibly with comment and judgment, the instance of the result of doing (Hornby, 1974:29). Contrastive analysis is considered as the comparison of the language structure to determine the point that differs them and the sources in learning target language (Lado, 1962:21). Based on the problems stated above, this research will identify and analyze the different sentence structure between English and Indonesia language. REVIEW OF LITERATURE Humans are social beings who have the desire to interact with each other. Humans use thoughts, instincts, feelings, and desires to react and interact with their environment. Social interaction is formed because it is influenced by social action, social contact, and social communication. Language (from Sanskrit भाषा, bhāṣā) is the ability that humans have to communicate with other humans using signs, such as words and movements. The scientific study of language is called linguistics. The number of languages in the world can be estimated between 6,000-7,000 different languages. According to Keraf in Smarapradhipa (2005: 1), there were two meanings of language. The first definition states language as a means of communication between members of the community in the form of sound symbols produced by human utterances. Second, language is a communication system that uses arbitrary vowel symbols (speech sounds). Another case according to Owen in Stiawan (2006: 1), language can be defined as a socially accepted code or a conventional system to convey concepts through the use of desired symbols and combinations of symbols governed by the provisions. Another definition according to Santoso (1990: 1), language is a series of sounds that are produced consciously by human beings. It can be concluded that a language is a communication tool between members of the community in the form of sound symbols produced by conscious human utterances and the language that we will talk about English and Indonesian Language. The Indonesian language comes from the Malay language which is used as the official language of the Republic of Indonesia. From a linguistic point of view, Indonesian is one of many Malay languages. Although the language is understood and spoken by more than 90% of Indonesians, Indonesian is not the mother tongue of most speakers. Most of all, the Indonesian might be used one of 748 languages in Indonesia as their mother tongue. 
Nevertheless, Indonesian is used very widely in educational bench such as college or school, literature, in mass media, official correspondence, software, or various other public forums. English is a language originally from the United Kingdom. English is a combination of several local languages that are often used by Norwegians, Danes, and Anglo-Saxons. English began to intensely influence Latin as well as French. The total modern English vocabulary, is shown that ± 50% comes from French and Latin. At present, English has become the main communication medium for people in various countries in the world, such as Britain, the United States, Australia, New Zealand, South Africa, Canada, and many more countries. English as one of the international languages, nowadays becomes one of the very important tools to compete in the next ten years. In facing the industry 4.0, we may also realize and think of competitiveness in finding increasingly tighter work in the next 10 years. We may be lost if we do not realize the world needs this era. Some of us imagine ourselves losing our way because of confusion in balancing the development of the times that are increasingly progressing. Lots of Indonesian people are reluctant to learn English because they thin that learning English takes a longer time. Some other people also said that learning English is boring. Another reason is that English is difficult because there are lots of tenses kinds or even sentences that are very different from Indonesian language. This study is related to the concept of contrastive analysis. There are various definitions of contrastive analysis which is presented by some experts. According to Guntur Tarigan (1988: 23), contrastive analysis is an activity which tries to compare the structure of L1 and L2 in order to identify the differences between two languages. While Lado (1962:21) introduces contrastive analysis as the comparison of the structures of two languages to determine the point where they differ and the difference is the source of difficulty in learning of target language. From the definition above, it can be concluded that contrastive analysis is an activity in analyzing two differences things. The things that we would like to analyze in this research is the contrastive analysis between English and Indonesian sentence patterns. There are varieties in one language concerning its purpose. It is classified into five, namely: declarative sentence or declaration, a negative sentence or negation, an interrogative sentence or question, an exclamatory sentence or exclamation, and an imperative sentence or command (Kusumawati, 2009). This research will analyse English and Indonesian differences based on those five sentences. a. Kinds of Sentences 1) Declarative Sentence A declarative sentence states a fact or arrangement. A declarative sentence ends with a period (.). Examples: 1. I'll meet you at the train station. 2. The sun rises in the East. 2) Negative Sentence A negative sentence is a sentence that states that something is negative. In English, we create negative sentences by adding the word 'not' after the auxiliary, or helping, verb. Example: 1. He doesn't get up early. She doesn't write the letter 3) Interrogative Sentence The interrogative sentence is sentences that ask a question. The interrogative form ends with a question mark (?). Examples 1. How long have you lived in France? 2. When does the bus leave? 3. Do you enjoy listening to classical music? 
4) Exclamatory Sentence The exclamatory form emphasizes a statement with an exclamation point (!). Examples: 1. Hurry up! 2. That sounds fantastic! 3. I can't believe you said that! 5) Imperative Sentence The imperative commands something. The imperative takes subject 'you' as the implied subject. The imperative form ends with either a period (.) or an exclamation point (!). Examples: 1. Open the door! 2. Stop talking! I'm trying to listen! 3. Pick up that mess. b. Types of Sentences Each sentence can be classified into one of four patterns, which is depending on the number and kind of clauses the sentence contains, as follows: 1) Simple sentences contain no conjunction (i.e., and, but, or, etc.). Examples: 1. Fred ate his dinner quickly. 2. Are you coming to the party? 3. Pete and Suzy visited the museum last Saturday. 2) Compound sentences contain two statements that are connected by a conjunction Examples: 1. The company had an excellent year, so they gave everyone a bonus. 2. I wanted to come, but it was late. 3. I went shopping, and my wife went to her classes. 3) Complex sentences contain a dependent clause and at least one independent clause. The two clauses are connected by a subordinator. Examples: 1. That's the man who bought our house. 2. Although it was difficult, the class passed the test with excellent marks. 3. My daughter, who was late for class, arrived shortly after the bell rang. 4) Compound-Complex sentences contain at least one dependent clause and more than one independent clause. The clauses are connected by both conjunctions subordinators Examples: 1. Jerry, who briefly visited last month, won the prize, and he took a short vacation. 2. Glory forgot his friend's birthday, so he sent him a card when he finally remembered. 3. The report which Mario complied was presented to the board, but it was rejected Because it was too complex. METHOD a. Research Method The research method used in this study is a qualitative descriptive method. Qualitative descriptive research is a research procedure that produces data in the form of written words that are descriptions of things. The qualitative descriptive method used in this study is the contrastive analysis between English and Indonesian general sentence. The approach in this research is contrastive analysis. Contrastive analysis in general term is an inductive investigative approach based on the distinctive elements in a language (Kardaleska, 2006). In common definition, the term can be defined as the method of analyzing the structure of any two languages with a view to estimate the differential aspects of their system, irrespective of their genetic affinity of level development (Geethakumary, 2006). b. Procedure and Data Analysis The procedure of this research as systematically below: First, compiling the data or theories supporting this study; books and other materials had a topic related to this writing are examined; Second, analyzing the data obtained followed by the contrasting process between Indonesian and English language, pattern by pattern. Next, providing instrument (15 Indonesian sentences) as representative of the categories. Then, taking data from 20 students of Victory University as samples to translate the sentences (instrument) into English -the target language. Finally, analyzing the students' answers, their translation, and followed by giving a conclusion. RESULT a. 
Declarative Sentence English Indonesian The sun rises in the East S V Adverb of Place In declarative sentences, there are 8 students who translate the sentence based on the pattern whereas 12 students who wrote the wrong verb, which is unacceptable. In other words, the students' answer is mostly incorrect, the students did some errors when choosing the appropriate verb. b. Negative Sentence English Indonesian The sun does not rise in the East S Negator V Adverb of Place In negative sentences, there are 9 students who translate the sentence based on the correct pattern, whereas 11 students who wrote wrong, which is unacceptable. The 5 students wrote the wrong negator and 4 students only wrote "Not" as negator. In other words, generally, the students did some errors especially to choose the appropriate negator. In interrogative sentences, there are 6 students who translate the sentence based on the pattern whereas 14 students who did errors. Mostly, the 14 students gave the wrong answers in the last two sentences. They did not write the correct auxiliary verb or main verb be, they only wrote WH-Questions as the question words. In other words, generally, based on the pattern the students' answer is mostly incorrect, the students did some errors especially to choose the appropriate question words. d. Exclamatory Sentence English Indonesian What a nice car! Adjective Noun Mobil yang bagus Noun Adjective How nice! Adjective Bagusnya! Adjective In exclamatory sentences, there are 5 students who translate the sentence based on the pattern whereas 15 students who did errors. Mostly, the 15 students gave the wrong answers. They did not write the correct sentence with the right sentence pattern. For instance, mobil yang bagus they wrote "car nice" or even "car that nice", or bagusnya "nice". They did not put "how/what" to indicate the exclamatory sentence. In other words, generally, based on the pattern the students answer are mostly incorrect. e. Imperative Sentence English Indonesian Do it now! Kerjakan sekarang! Verb object In imperative sentences, there are 16 students who translate the sentence based on the pattern, whereas 4 students did errors. Mostly, the 16 students can write sentences with a good pattern. The assumption, because the pattern of English and Bahasa Indonesia sentence pattern is the same "Subject (invisible) + Verb + Object". In other words, generally, the students' answer is mostly correct in the patterns aspect. Furthermore, to make it clear, the frequency of errors on the sentence pattern of each category from the previous data analysis description is going to be served in the table, as follows: As we can see from the table and figure of the errors percentage, the main error of the students was in Declarative Sentence (DS), Negative Sentence (NS), Interrogative Sentence (IS) and Exclamatory Sentence (ES) i.e 92.86%. Based on the research, we found out that the students did those errors because the pattern of those sentences are different whereas the Imperative Sentence (IMS) has the same pattern with English. CONCLUSION AND SUGGESTION The conclusion that can be drawn from the result above is that different sentence pattern between English and Indonesian still becomes a major problem in English language teaching and learning in Indonesia. The students of English Department did not even master all of the sentence patterns in English which is different from Indonesian sentence pattern. 
The researcher also suggests that English Department has to improve the technique in teaching sentence patterns. The students have to be taught in analyzing the difference of two languages, so they can understand the learning material easily. As long as the students understand the pattern, they will be able to translate easily and use English actively in speaking, listening, writing and reading.
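The 92.86% figure in the Results can be read as the share of all observed translation errors that fall in the four categories whose patterns differ between the two languages (DS, NS, IS, ES); this reading is our assumption, but it is consistent with the per-category error counts reported above, as the quick check below shows:

```python
# Per-category error counts among the 20 students (from the Results section).
errors = {"DS": 12, "NS": 11, "IS": 14, "ES": 15, "IMS": 4}

total = sum(errors.values())                                   # 56 errors in all
outside_ims = sum(v for k, v in errors.items() if k != "IMS")  # 52 errors in DS, NS, IS, ES
print(f"{outside_ims}/{total} = {outside_ims / total:.2%}")    # -> 52/56 = 92.86%
```

That is, 52 of the 56 recorded errors occur outside the Imperative Sentence category, the one pattern English and Indonesian share.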
2019-09-17T02:40:52.374Z
2019-07-24T00:00:00.000
{ "year": 2019, "sha1": "477139620a39d082e7335de799221751aae9ecc6", "oa_license": "CCBYSA", "oa_url": "http://jurnalftk.uinsby.ac.id/index.php/IJET/article/download/166/148", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "57fc132869ad068ee37ff9cdddd1533f7a8c18a1", "s2fieldsofstudy": [ "Education", "Linguistics" ], "extfieldsofstudy": [ "Engineering" ] }
248532016
pes2o/s2orc
v3-fos-license
Spirometry-based prevalence of chronic obstructive pulmonary disease & associated factors among community-dwelling rural elderly Background & objectives: Chronic obstructive pulmonary disease (COPD) is a major public health problem in India. Its magnitude is particularly high among the elderly. Old age and comorbidity may lead to misdiagnosis and under treatment of this condition. COPD is not curable; however, various forms of treatment can help control symptoms and improve the quality of life. Most of the earlier studies lacked uniformity in definitions, designs, methodology and reporting techniques. Studies based on spirometry are only a few. Understanding the current prevalence and associated factors of COPD is important for planning control strategies. Hence, this study was conducted to determine the prevalence of COPD and associated factors among the elderly. Methods: In this community-based study among 449 elderly persons in a rural area, information regarding socio-demographic details, selected health conditions and exposure to risk factors was recorded. The assessment of airway obstruction was done by using a portable spirometer (MIR Spirolab). The diagnosis of COPD was based on the GOLD criteria. The association of COPD with sociodemographic and other variables was analysed by the multivariate logistic regression. Results: Acceptable spirometry findings were available for 392 (87.3%) participants. The prevalence of COPD was 42.9 per cent (95% confidence interval 37.9-47.7%). The prevalence was 54.5 per cent among men and 33.4 per cent among women. Smoking, higher age group and low body mass index were significantly associated with COPD. Interpretation & conclusions: The prevalence of COPD was found to be high among the rural elderly in this study. Interventions aimed at cessation of smoking and preparedness of health systems for diagnosis and management of COPD are hence required. the World Health Organization (WHO), COPD was the third leading cause of death in WHO-SEAR (South-East Asia Region) in 2015 2 . Besides death, COPD is associated with significant morbidity and an economic burden due to hospitalization, medical expenditure for home-based care and loss of productivity. In 2013, COPD was the fifth leading cause of Disability Adjusted Life Years (DALYs) lost 3 . In India, as reported by the India State-level Disease Burden Initiative (2017), COPD was responsible for 8.7 per cent of total deaths and 4.8 per cent of total DALYs in 2016 4 . Although community-based studies on COPD are available; however, existing COPD prevalence data vary widely due to differences in study design, diagnostic criteria, methodology and reporting techniques. Moreover, studies measuring airflow obstruction by using pulmonary function tests are few in India. The reported prevalence of COPD varies from 2 to 22 per cent in different studies, conducted among different age groups 5 . The magnitude of COPD is much higher among elderly persons. As per the 2011 census, 103.2 million people in India were of the age of 60 yr or more, accounting for 8.6 per cent of the total population 6 . COPD is associated with a number of co-morbidities among elderly persons. Old age and comorbidity may lead to misdiagnosis and under treatment of COPD among the elderly 7,8 . COPD is not curable; however, various forms of treatment can help control symptoms and improve the quality of life for elderly people with the disease 5,8 . 
Understanding the current prevalence of COPD and identification of the associated factors is important for planning sustainable prevention and management strategies. This study was conducted among the elderly in a rural community to determine the prevalence of COPD, wherein airflow obstruction was assessed by spirometry. The factors associated with COPD among the elderly were also studied. Material & Methods This community-based study was conducted in the Ballabgarh block of Faridabad district of Haryana. The study area consisted of 28 villages in the rural field practice area of the research institute and had a population of nearly 98,000 individuals in the year 2016 9 . There was a computerized database of all individuals residing in the area, and the same was updated annually. The study was conducted among persons aged 60 yr or more, residing in the area for more than one year. Sample size: The sample size calculation was guided by the findings of a multi-centric study conducted by Jindal et al 10 , where the prevalence of chronic respiratory diseases in the population aged 15 yr and above was reported to be 8.5 per cent. The review of literature suggests that the prevalence among persons aged 60 yr or more is two to three times higher than the general population 7 . Hence, the prevalence among elderly persons in the present study was assumed to be 20 per cent. Taking a relative precision of 20 per cent, the sample size was calculated to be 400. Considering non-response rate of 15 and 5 per cent for death and migration, respectively, the calculated sample size was increased to 500 elderly. Data collection: A list of persons aged 60 yr or more was taken from the computerized database. Out of total 6765 individuals aged 60 yr and above, 500 participants were selected by simple random sampling using computer-generated random numbers. Data collection was undertaken from October to December 2017. House-to-house visit was done for all 500 elderly. In case a participant was not found at home despite three visits, he/she was categorized as a non-respondent. Seriously-ill elderly, those not able to comprehend, or having conditions that affect safety during spirometry (viz. any surgery or severe injury in the abdomen, chest or eye in the last three months, myocardial infarction in the last three months, hospitalization due to heart disease within the past month or currently on treatment for TB) were excluded from the study. Survey methodology: The study was conducted after taking approval from the Institutional Ethics Committee of All India Institute of Medical Sciences, New Delhi. Written informed consent was taken from all the participants. The results of the examination were communicated to the participants on the same day. All information collected was kept confidential. Participants found to be having COPD were referred to appropriate OPD at the nearest health facility. The elderly who were currently suffering or had suffered from acute respiratory infections in the past two weeks were rescheduled to a later date. Elderly taking medications (bronchodilators and/or steroids) were asked to withhold medication overnight for 12 h and early morning examination was done for such cases. 
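The sample-size figures stated in the Methods follow from the standard single-proportion formula; the short reproduction below assumes a 95% confidence level (z = 1.96) and that the 15% and 5% allowances were combined into a single 20% inflation, which matches the reported final figure of 500:

```python
import math

z = 1.96                     # assumed 95% confidence level
p = 0.20                     # assumed COPD prevalence among the elderly
d = 0.20 * p                 # 20% relative precision -> 0.04 absolute precision

n = z**2 * p * (1 - p) / d**2            # ~384; rounded up to 400 in the study
n_final = 400 / (1 - (0.15 + 0.05))      # allow 15% non-response and 5% death/migration
print(math.ceil(n), math.ceil(n_final))  # -> 385 500
```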
A pre-tested semi-structured interview schedule was administered and information was collected regarding socio-demographic details, history of chronic respiratory symptoms or disease, family history of chronic respiratory disease, consumption of tobacco, exposure to tobacco smoke or biomass fuel (indoor air pollution), self-reported or physiciandiagnosed illness (diabetes, hypertension and arthritis) for which the patient was under medication for at least the preceeding six months. Arm span (AS) and weight of participants was recorded, and body mass index (BMI) was calculated as BMI = weight (in kg)/(AS in meter) 11 . Assessment of functional disability was done by using Barthel's Activity of Daily Living Index (ADL) 12 . For assessment of airway obstruction, spirometry was performed. A hand-held portable spirometer-MIR (Medical International Research) Spirolab ® (Roma, Italy) was used. The measurements were done according to the standard guidelines (American Thoracic Society and European Respiratory Society) 13 . For each participant, information of weight and height was entered in the spirometer. The participant was asked to sit comfortably. Two measurementsone each of pre-and post-bronchodilator (salbutamol inhalation - 4 puffs of 100 mcg each), were performed at least 20 min apart, according to standard ATS/ERS guidelines 13 . The diagnosis of COPD was based on the GOLD guidelines 2019 14 . History of exposure to risk factors such as tobacco smoking, exposure to environmental tobacco smoke, biomass fuel or occupational exposure to dust, along with the presence of airflow limitation that was not fully reversible, (with or without the presence of symptoms), i.e. the ratio of post-bronchodilator forced expiratory volume in first second of expiration, and the forced vital capacity (FEV1/FVC) <70 per cent on spirometry, was considered as COPD 14 . ERS-93 was used to predict normal FEV1 15 . The severity of COPD was measured according to the GOLD guidelines 14 . The values of FEV1 from 50 to 79.9 per cent of the predicted value indicated moderately severe disease, FEV1 30-49.9 per cent specified severe disease, while FEV1 <30 per cent indicated very severe disease. The results were printed and communicated to the participants at the end. 'Quality A' meant three acceptable tests, i.e. the variation of the two highest FEV1 values less than or equal to 150 ml 16 . Quality assurance: The spirometer used for the study (MIR Spirolab) complied with the ATS/ERS criteria for accuracy. The investigator (AK) was trained in the pulmonary laboratory of the research institute for 150 h under the supervision of a faculty member of the department of Pulmonary Medicine & Sleep Disorders AIIMS, New Delhi, who had more than 10 yr of experience in this area (VH). During this period AK performed 50 spirometry tests, among patients of all age groups, with the assistance of a pulmonary laboratory technician, and 50 unassisted spirometry tests among elderly patients in the pulmonary laboratory. Before data collection, the investigator (AK) performed the spirometry among elderly patients with the portable MIR spirolab spirometer, the findings of which were confirmed by the spirometer at the institutional pulmonary laboratory. All the tests were re-read by the expert for interpretation. The spirometer was standardized and calibrated at the pulmonology laboratory and the diagnosis of COPD was made according to the GOLD guidelines for spirometry. Standard operational definitions were used for rest of the parameters. 
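The spirometric case definition and severity grading used in this study reduce to a simple decision rule. The sketch below restates the cut-offs quoted above; it omits the exposure-history requirement of the COPD definition for brevity, and the "mild" branch (FEV1 ≥ 80% of predicted) is added as the remaining GOLD category even though the text only lists the moderate to very severe ranges:

```python
def gold_grade(fev1_fvc_post: float, fev1_pct_pred: float) -> str:
    """Classify airflow limitation following the cut-offs quoted in the Methods.

    fev1_fvc_post -- post-bronchodilator FEV1/FVC ratio
    fev1_pct_pred -- FEV1 as a percentage of the predicted value (ERS-93 in this study)
    """
    if fev1_fvc_post >= 0.70:
        return "no airflow obstruction (not COPD by the GOLD criterion)"
    if fev1_pct_pred < 30:
        return "COPD, very severe"
    if fev1_pct_pred < 50:
        return "COPD, severe"
    if fev1_pct_pred < 80:
        return "COPD, moderately severe"
    return "COPD, mild"  # FEV1 >= 80% predicted; implied but not spelled out in the text

print(gold_grade(0.62, 55))   # -> COPD, moderately severe
```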
Statistical analysis: Data were entered and managed in MS Excel 2016, and statistical analysis was carried out using Stata 12.0 (Stata Corp LLC 4905, Texas, USA) 17 . Mean, standard deviation (SD) and frequency (percentage) were reported for the continuous and categorical variables, respectively. The prevalence of chronic respiratory diseases was reported as percentage with 95 per cent confidence interval (CI). Contingency table analysis was done using the Chi-square results. The association of socio-demographic and various risk factors with COPD was analysed using the logistic regression analysis. Univariable analysis followed by multivariable logistic regression analysis was carried out. Variables with P<0.25 were considered for multivariable analysis. The results were presented as an odds ratio (OR) with 95 per cent confidence level. P<0.05 was considered statistically significant. Results Out of 500 randomly selected elderly, one had died in the previous year, three had migrated, and one had rib fracture. Out of 495 eligible elderly, 44 were not available even after three visits, while two did not give consent. The remaining 449 participants were included in the study. Thus, the response rate was 90.7 per cent (Figure). The response rate was higher among women (94.5%) as compared to men (86.1%). The reason for more participation by women was their availability during the house visits made by the investigator. Comparatively more men were unavailable despite three house visits and were considered as non-responders. Of the 449 participants included in the study, at least one acceptable spirometer finding was available for 392 (87.3%) participants. Table I shows the characteristics of the study participants. Out of 392 participants, 218 (55.6%) were women. The mean age was 68.1 yr (SD=6.6), being 68.4 (SD=6.4) for men and 67.9 (SD=6.6) for women. Majority (66.1%) of the participants were in the age group of 60-69 yr, while 33.9 per cent were aged 70 yr or above. Nearly three-fourth of the participants (73.5%) were illiterate. Among men 46.6 per cent were illiterate while 95.0 per cent of the elderly women were illiterate. Two hundred and seventy-two participants (69.4%) were currently married, whereas 30.6 per cent were widowed or separated (16.1% of men and 42.2% of women). Most (92.6%) of the participants were economically dependent on others. Majority (62.8%) of the participants belonged to low socio-economic status. The past occupation was agriculture/labour for 75.3 per cent of men, whereas 97.7 per cent of the women were homemakers. Nearly half (50.3%) of the participants reported to have ever smoked tobacco while 41.0 per cent of all participants were current smokers. Out of these, majority (60.6%) were men. Nearly 75 per cent of the smokers had been smoking for 30 years or more, whereas 17.7 per cent smoked for 15-30 years. Majority of the smokers (65.5%) smoked beedi (hand-rolled cigarette), while 42.9 per cent smoked hookah (hubble-bubble). Only two participants smoked cigarette. Most of the women (90.2%) cooked food regularly in the past. Wood was the most commonly used fire-fuel (93.4%) among the participants, whereas cow dung cakes were used by 3.9 per cent of the participants. Thus, 97.3 per cent of the participants had exposure to biomass fuel. The mean (±SD) exposure years were 33.2 (±7.9). Regarding self-reported health conditions, hypertension and diabetes were reported by 9.4 and 5.6 per cent of the participants, respectively. 
Seven (1.8%) elderly reported a history of anti-tuberculosis In our study, the prevalence of COPD, as diagnosed by the post-bronchodilator spirometric value of FEV1/FVC <0.7, was 42.9 per cent (95% CI 37.9-47.7). Among those having COPD, 48.8 per cent had moderately severe disease, 22 per cent had severe disease, while 6.5 per cent had very severe disease. The prevalence of COPD was more among men (54.5%) as compared to women (33.4%) (Table III). Table IV shows the distribution of respiratory symptoms among participants with COPD and without COPD. A significantly higher proportion of participants with COPD had respiratory symptoms as compared to those without COPD. The prevalence of COPD was significantly higher as the age group increased. (Chi-square for trend: 8.62, P-value for trend: 0.013). The prevalence was also found to be more among smokers (58.8%; 95% CI=51.9-65.8%) as compared to non-smokers (26.6%; 95% CI=20.4-32.9%). We found that 26.7 per cent of non-smokers had COPD. The risk factors among non-smokers were exposure to agricultural dust (n=8), exposure to dust at construction sites (n=2) and exposure to biomass fuel (n=35). Table V. In univariate analysis, it was found that age, male sex, low socio-economic status, low literacy, low BMI and smoking were associated with COPD. Following univariate analysis, multivariate logistic regression was Discussion In this study, we report the magnitude of COPD among the community-dwelling rural elderly by using spirometry. Out of 449 participants enrolled, 392 (87.3%) were able to perform spirometry of acceptable quality. This figure is fairly higher in comparison to other reported studies. In a study in rural setting of Tanzania, spirometry could be performed only in 57.1 per cent of the participants 18 . In India, in a communitybased study conducted in Kashmir among adults aged 40 yr and above, acceptable spirometry results were available for 79 per cent of the participants 19 . Higher spirometry rate in our study may be due to the fact that the investigator got thorough training at the laboratory of a tertiary care institute. The investigator was also supervised in the field by a community medicine specialist and support was provided whenever required. In this study, the prevalence of COPD by the GOLD criteria of FEV1/FVC <0.7 was 42.9 per cent (95% CI 37.9-47.7). This finding is similar to a spirometrybased study done by Koul et al 19 in rural Kashmir in which the prevalence of COPD among persons aged 60 yr and above was reported to be 41.4 per cent. In a study done by Sinha et al 20 in Delhi, the prevalence was 31.4 per cent which was lower than ours. Sharma et al 21 reported a prevalence of 12.5 per cent among persons aged 60 yr and above in rural Jammu, which is lower than our study. They, however, used peak flow meter to assess the prevalence of COPD, which is a less sensitive method than spirometry. Similar spirometry-based studies conducted in Russia, Egypt, Iran and Saudi Arabia have reported a much lower prevalence of 6.6, 6.6, 9.2 and 4.2 per cent, respectively [22][23][24][25] . However, studies from Korea and rural Tanzania have reported a similar prevalence of 35 and 41.7 per cent, respectively 18,26 . COPD was significantly associated with smoking, low BMI and higher age. This is similar to other studies conducted in India and abroad [27][28][29] . Male sex, low socio-economic status and low education status have been reported to be associated with COPD in some studies [30][31][32] . 
In our study, these were found to be associated in univariate analysis but not in multivariate analysis. This study had a few limitations. Biomass exposure index could not be calculated, as detailed history in terms of hours of exposure per day was not enquired in the study. Moreover, the effect of biomass fuel could not be studied because none of the men used to cook and almost all participants (97.3%) used biomass fuel only. In most of the rural households, cooking was done in the open courtyard. Hence, a higher prevalence as is seen in indoor air pollution, where cooking is done in closed rooms was not reported. Past occupation and cooking had to be omitted from the multivariable model due to co-linearity in these two variables. Hence, their association with COPD could not be studied. Due to the rural setting of the study, association with passive tobacco exposure could not be studied, as most of the smokers used to smoke in the courtyard or outside the house. The study was conducted in the winter months due to logistic reasons. Respiratory symptoms were likely to be more common during the study period. In the absence of complete clinical data, other diseases with an obstructive pattern on spirometry such asthma or bronchiectasis may be misdiagnosed with COPD. However, since the symptoms were of late onset, one may conclude that it was less likely to be asthma. Hence, the chances of misdiagnosis were minimal. During the study, X-ray and sputum examination was done on case-to-case basis wherever required for the management of patients. However, data regarding X-ray and sputum examination were not collected for this study. Spirometry-based assessment and high proportion of acceptable spirometry results (87%) are some of the strengths of this study. Overall, the identification of factors associated with COPD and appropriate measures can help in addressing the problem. Since the highest burden was seen among those aged 70 yr or more, increasing trend of ageing of the population is an important determinant of prevalence of COPD in the country. The prevention and management of chronic respiratory diseases including COPD should be emphasized in national health programmes for the elderly. Elderly patients with symptoms or risk factors should promptly be referred to a facility where spirometry can be done. Wherever feasible, facility for spirometry should be made available at the primary level. Smoking was another factor found to be strongly associated with COPD. For people who continue to smoke, counselling for smoking cessation may be useful. Conflicts of Interest: None.
2022-05-06T13:26:05.785Z
2021-05-01T00:00:00.000
{ "year": 2021, "sha1": "cd661d2d156592a088f5cc964ddae46e8b3de46f", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/ijmr.ijmr_358_19", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "67d7be5cf8722ad94e7c88047dc2c00620e7232b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
52243034
pes2o/s2orc
v3-fos-license
Link Layer Correction Techniques and Impact on TCP ’ s Performance in IEEE 802 . 11 Wireless Networks TCP performance degrades when end-to-end connections extend over wireless links which are characterized by high Bit Error Rate and intermittent connectivity. Such degradation is mainly accounted for TCP’s unnecessary congestion control actions while attempting TCP loss recovery. Several independent link loss recovery approaches are proposed by researchers to reduce number of losses visible at TCP. In this paper we first presented a survey of loss mitigation techniques at wireless link layer. Secondly performance evaluation for TCP through Type 0 Automatic Retransmission Request mechanism in erroneous Wireless LAN is presented. In particular, simulations are performed taking into account the wireless errors introduced over IEEE 802.11 link using a well-established 2-State Markov model. TCP performance is evaluated under different settings for maximum link retransmissions allowed for each frame. Simulation results show that, link retransmission improves TCP performance by reducing losses perceived at TCP sender. However, such improvement is often associated with adverse effect on other TCP parameters that may cost a lot in return under extreme network conditions. In this paper an attempt is made to observe impact of link retransmissions on the performance of multiple TCP flows competing with each other. The analysis presented in this paper signifies the scope for maximizing TCP’s throughput at the least possible cost. Introduction In recent scenario, the IEEE 802.11 [1] or 3G/4G wireless networks represent a significant milestone in the provisioning of "anywhere -anytime" Internet connectivity to the wandering users [2] [3].Together, Transmission Control Protocol (TCP) [4] has remained as the dominant transport layer protocol for majority of Internet applications [2] [5].Unlike wired links, wireless links have high, bursty and random errors due to atmospheric conditions, terrestrial obstructions, fast and multi-path fading, active interference, mobility and resource constraint [3] [6].Consequently, significant amount of efforts [7] [8] have been devoted to the provisioning of reliable TCP delivery for a wide variety of applications over different wireless infrastructures. Impact of the losses on TCP performance has been characterized in [9] using the following Equation (1).From the view of TCP, the throughput is inversely proportional to the packet losses perceived by TCP sender. ( ) ( ) Nevertheless, when the losses are seen at TCP sender due to reasons other than network congestion, end-toend throughput is compromised due to its loss recovery attempts with detrimental transmission rate.Therefore it is advocated to endeavor initial loss recovery at link layer (link recovery) and to attempt end-to-end TCP recovery [10] [11], only if inescapable.Attempts to make wireless links resemble wired ones for high level protocols are reflected in several approaches operating at a link layer. In this paper, some of the prominent approaches made at wireless link layer for improving TCP's performance are discussed first.In the modern Wireless LAN (WLAN), due to the technology progress, the physical channel condition is comparatively good enough to provide low Packet Error Rate (PER) and consequently higher datarate [2] [12].On the other hand, to guarantee the correctness of control information, the control data rate increases very slowly (e.g. 
Attempts to make wireless links resemble wired ones for higher-level protocols are reflected in several approaches operating at the link layer. In this paper, some of the prominent approaches made at the wireless link layer for improving TCP's performance are discussed first. In the modern Wireless LAN (WLAN), owing to technological progress, the physical channel condition is usually good enough to provide a low Packet Error Rate (PER) and consequently a higher data rate [2] [12]. On the other hand, to guarantee the correctness of control information, the control data rate increases very slowly. For example, in IEEE 802.11g the data rate is raised up to 54 Mbps while the control data rate is increased up to only 6 Mbps. In this situation, the overhead of control information, including the link layer acknowledgement (ACK), increases significantly. These factors account for the very low efficiency of the Automatic Retransmission reQuest (ARQ) protocol. As a result, the effect of link recovery in such networks should be thoroughly analyzed. In earlier work [13], the performance of a series of ARQ mechanisms was analyzed at the link layer without considering the TCP layer effect. In this work, a set of simulations is performed and the results are analyzed to show the impact of link recovery on TCP's performance. The analysis presented in this paper not only addresses the merits and demerits of link recovery attempts but also gives a new direction towards enhancing TCP's performance further.

Our results show that with light errors on the IEEE 802.11 link, better TCP performance is achieved using link layer attempts. Since link recovery attempts increase the Round Trip Time (RTT) estimate at TCP, they may adversely affect TCP performance by deteriorating the transmission rate at the TCP sender (which is controlled by TCP's internal parameters, the congestion window (cwnd) and RTT). In fact, even with the use of link recovery, sub-optimal TCP performance is seen exclusively in extreme network environments, wherein total loss recovery is not attained at the link layer. The rest of the paper is organized as follows. Section 2 discusses the pure link layer proposals designed to mitigate wireless errors. Section 3 provides a brief summary of the link recovery mechanism over IEEE 802.11 links, and Section 4 discusses the simulation results to show its influence on end-to-end TCP performance in a wireless scenario. Section 5 concludes this paper.

Pure Link Layer Enhancements

Addressing link errors near the site of their occurrence appears intuitively attractive for several reasons. 1) Link layer approaches are likely to respond more quickly to changes in the error environment, and therefore local recovery over a point-to-point link is more efficient than end-to-end TCP recovery [10]. 2) Since a local recovery mechanism commonly operates on exactly the link that requires it, the deployment of new and existing wireless link protocols is more feasible than applying a novel transport layer solution. 3) Link recovery approaches do not violate the modularity of the protocol stack [13], and in this context they are different from cross-layer [14] approaches. Various approaches have been proposed at the link layer to optimize the performance of TCP in wireless networks. These approaches are broadly classified into two groups on the basis of their awareness of the transport layer protocol: (a) TCP unaware approaches and (b) TCP aware approaches.
TCP Unaware Approaches

IEEE 802.11 and cellular networks employ TCP unaware link and physical layer mechanisms with the objective of reducing the PER to a level that would not cause significant performance degradation at TCP. In the above networks, the physical layer achieves a high coding gain with the use of convolutional and turbo coding [15]. Additionally, interleaving provides time diversity as a further safeguard against burst errors. The most remarkable TCP unaware link layer implementations in the above wireless technologies use ARQ mechanisms and packet scheduling to reduce the effect of losses on TCP's performance. For example, Selective Repeat ARQ with scheduling protocols such as RLP and RLC is used in 3G1X and UMTS, respectively [15]. On the other side, IEEE 802.11 uses stop-and-wait ARQ [13] alone to recover from transmission losses locally. ARQ is a closed-loop mechanism that requires feedback and retransmission and is invoked when packets containing bit errors are discarded. Link ARQ consumes additional network resources only when a packet is retransmitted. The mechanism generally operates more efficiently at low bit rates. On the downside, the above mechanisms may cause delay variability and out-of-order delivery of packets at TCP. This can result in duplicate retransmissions from the two layers and hence inefficient utilization of the network capacity. An undesirable side effect of ARQ is that it may interfere with independent TCP mechanisms. Forward Error Correction (FEC) is a popular error mitigation mechanism for the detection and recovery of transmission losses [16] without retransmissions, which is critical for lossy links exhibiting long delays. Unlike ARQ, FEC does not interfere with TCP mechanisms. However, FEC suffers from dead-weight overheads under favorable conditions, resulting in a waste of limited bandwidth. Furthermore, FEC requires additional resources (processor, memory) and power consumption.

Based on the recommendations made by the IETF and 3GPP, most of the link layer techniques for wireless lossy links involve making the link layer reliable using hybrid FEC/ARQ [17]. A good summary of hybrid FEC/ARQ techniques for the link layer is provided in [13]. In a Type 0 scheme, recovery from losses is offered only through ARQ. This technique is applicable to links whose losses are subject to infrequent interference. In the Type I technique, the initial transmission is protected using FEC. If this fails, ARQ is used to repeat the transmission. The main characteristic of Type II schemes is that ARQ is used first, followed by FEC. Typically the initial transmission has error detection built in but no error correction. Though hybrid ARQ/FEC is better than either FEC or ARQ alone, its performance also degrades significantly at higher loss rates, at the cost of unreasonably high numbers of ARQ retries, fragmentation of IP packets, FEC overhead and buffering [15]. Asymmetric Reliable Mobile Access In Link Layer (AIRMAIL) and Transport Unaware Link Improvement Protocol (TULIP) are well-known link layer implementations based on TCP unaware link layer techniques, as discussed in [7].
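To make the ARQ/FEC trade-off discussed above concrete, the sketch below compares the expected number of transmissions per frame under an idealized ARQ scheme with the fixed bandwidth expansion of an FEC code. The frame error rates and the code rate are hypothetical values chosen only to show the trend, not parameters of any of the cited systems.

```python
def arq_expected_transmissions(fer: float) -> float:
    """Expected transmissions per frame for an idealized ARQ with unlimited retries
    over a link whose frame error rate is `fer` (geometric retransmission model)."""
    return 1.0 / (1.0 - fer)

def fec_bandwidth_expansion(code_rate: float) -> float:
    """Fixed overhead of an FEC code of rate k/n; this cost is paid even when the link is clean."""
    return 1.0 / code_rate

if __name__ == "__main__":
    print(f"FEC (rate-0.8 code): x{fec_bandwidth_expansion(0.8):.2f} bandwidth, regardless of FER")
    for fer in (0.001, 0.01, 0.05, 0.2):
        print(f"ARQ at FER={fer:>5}: x{arq_expected_transmissions(fer):.3f} transmissions on average")
```

At low frame error rates ARQ costs almost nothing extra, which is precisely when FEC's fixed redundancy becomes dead weight; at high error rates the situation reverses.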
Barakat et al. [11] reported that ARQ mechanisms may vary the characteristics of the network, affecting the functionality of upper layer protocols. Moreover, a lack of knowledge of the protocol operating at the transport level may result in end-to-end performance degradation in the presence of independent link layer mechanisms. For instance, an approach without awareness of the transport protocol may cause local link layer retransmission of a packet as well as duplicate acknowledgements (dupacks), since retransmissions can be performed at both layers. This led to significant efforts in the development of TCP aware link layer approaches [18].

TCP Aware Approaches

The link layer approaches in which enhancements tailored for wireless environments are made known to TCP are broadly referred to as TCP aware link approaches. A good overview of these approaches can be found in [7] [17] [18]. The representative mechanisms employed in various approaches include snooping, delayed acknowledgements and Performance Enhancement Proxies (PEPs).

Snooping Mechanism

Most of the approaches which operate at this level rely on some intermediate point within the end-to-end connection for the introduction of performance improvement. In the Snoop protocol [19], an agent located at the Base Station (BS) monitors every packet that passes through the TCP connection in both directions. The Snoop agent suppresses the dupacks for lost TCP packets and retransmits locally, thereby preventing unnecessary invocation of the congestion control mechanism by the sender. Other enhancements proposed in a similar category are the TCP SACK-aware Snoop Protocol and SNACK-NS (New Snoop) [7]. This protocol may cause additional delay for TCP packets, which results in futile TCP retransmissions at the TCP sender over the slower wireless link. However, the problem can be solved by minimizing the number of retransmissions at the link level [7]. WTCP [20], although operating in a similar way, introduces more accurate RTT estimation and thus prevents a reduction in TCP throughput. WTCP conceals the difference in RTT from the sender, thus avoiding needless timeouts caused by local retransmissions. Both Snoop and WTCP must, however, have access to the header of TCP packets in order to function, which reduces their usefulness if traffic is encrypted. The use of a Snoop agent is also exploited in wireless TCP refinements with split connection schemes [7]. Snoop and WTCP increase complexity at the BS, especially when transport layer per-flow support is required. TCP PEPs [21] are TCP aware link layer mechanisms for handling bandwidth asymmetry, TCP aware FEC provisioning or adaptive link frame sizing [22]. Split connection approaches [7] and PEPs, though effective in many cases, break TCP end-to-end semantics and its modularity and hence are not considered pure link layer approaches.
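The following sketch captures, in simplified form, the base-station logic described above for the Snoop agent: cache unacknowledged TCP segments heading toward the wireless host, retransmit locally when duplicate acknowledgements arrive, and suppress those dupacks so that the fixed sender never triggers fast retransmit. It is a schematic model only; the class and method names, the cache structure and the retransmission trigger are assumptions and do not reproduce the actual Snoop implementation.

```python
class SnoopAgent:
    """Schematic Snoop-style agent at the base station; a simplified model, not the real protocol."""

    def __init__(self, wireless_send):
        self.wireless_send = wireless_send  # callable forwarding a segment over the wireless hop
        self.cache = {}                     # seq -> payload of data not yet acknowledged end-to-end
        self.last_ack = -1
        self.dupacks = 0

    def on_data_from_fixed_sender(self, seq, payload):
        # Cache every data segment before forwarding it over the lossy wireless link.
        self.cache[seq] = payload
        self.wireless_send(seq, payload)

    def on_ack_from_wireless_receiver(self, ack):
        if ack > self.last_ack:
            # New cumulative ACK: purge acknowledged data and pass the ACK back to the sender.
            for seq in [s for s in self.cache if s < ack]:
                del self.cache[seq]
            self.last_ack, self.dupacks = ack, 0
            return ack
        # Duplicate ACK: retransmit the missing segment locally and suppress the dupack,
        # so the fixed sender never sees enough dupacks to trigger fast retransmit.
        self.dupacks += 1
        if ack in self.cache:
            self.wireless_send(ack, self.cache[ack])
        return None
```

A base station would feed packets from both directions into such an agent; returning None here models the suppression of the duplicate acknowledgement toward the fixed sender.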
Delayed Acknowledgements

Another class of work includes Delayed Duplicate Acknowledgment (DDA) [23], which is preferred particularly when IP encryption is used. With a view to reducing interference between TCP and link retransmissions, the TCP receiver delays the third and subsequent dupacks for some interval "d". This provides enough time for the link layer to recover the lost packet using local retransmission and prevents the TCP sender from entering fast retransmit. Different from the Snoop protocol, in DDA dupacks are not dropped immediately but rather delayed for a certain length of time. Delayed Acknowledgements over Wireless Link (DAWL) [24] tries to simplify the system by introducing modifications only in the ARQ scheme at the link layer and does not consider local retransmission at the BS. This design is advantageous in case of a BS crash [7]. However, in the presence of congestion, the performance of these schemes is degraded, as the essential fast retransmissions are unnecessarily delayed.

Since different link layer technologies have diverse capabilities for error recovery, there is no de facto standard for link layer protocols. All link layer approaches try to reduce the effect of erroneous wireless links on the performance of TCP, but they do not completely shield the TCP sender from all types of wireless errors, particularly in the event of lengthy disconnection or a high Bit Error Rate (BER). The main advantage of link layer approaches is the maintenance of end-to-end semantics, without modification of higher protocol layers. This makes it possible to leave the existing implementations of the protocol stack in the various operating systems untouched.

IEEE 802.11 Link Recovery Mechanism

The basic channel access method in 802.11 networks is the Distributed Coordination Function (DCF), in which the 802.11 Medium Access Control (MAC) layer uses the Carrier Sense Multiple Access/Collision Avoidance (CSMA/CA) mechanism before transmitting any frame. The IEEE 802.11 MAC layer employs the Type 0 ARQ concept for loss recovery with the contention based transmission policy. ARQ is the only error control method specified in the standard, and no FEC coding is used, as stated earlier.

Over an IEEE 802.11 link, whenever a wireless node notices an unsuccessful transmission of a frame, it attempts local retransmission of the frame in accordance with the RetryLimit (RL) [1]. To explain, as shown in Figure 1, a wireless node after sending a frame waits for a positive ACK from the wireless receiver. The sender attempts retransmission of the frame whenever the ACK is not received before expiration of the ACK timeout. The node is allowed to attempt retransmissions of a particular frame up to a maximum of RL times. Thereafter the node initiates a new transmission and the previous frame is considered lost. Thereby the upper layer is made responsible for recovery of the frames lost after the maximum number of retransmissions. Such link retransmissions additionally delay the transmission of subsequent TCP packets residing in the Interface Queue (IFQ). This additional delay (T_ARQ) is approximated using Equation (2).
T_ARQ ≈ n × (T_CONT + t')    (2)

T_ARQ increases with (a) an increase in the number of unsuccessful retransmissions (n) and (b) an increase in the link delay (t'). Here T_CONT signifies the contention delay over a shared medium [1]. TCP's RTT estimation includes the wired link delay and processing delay at nodes (the combination is referred to as RTT_0), the delay due to network congestion (d_c) and T_ARQ, as shown in Equation (3):

RTT = RTT_0 + d_c + T_ARQ    (3)

In short, link recovery mechanisms shield TCP from wireless losses and hence prevent undue reduction of cwnd, but at the same time they increase the RTT estimate at the TCP sender. The former effect prevents the transmission rate from being sacrificed, while the latter accounts for diminished growth in the transmission rate. In the next section, an analysis of results obtained through simulations is presented to illustrate the impact of link recovery on TCP's performance.

Simulation Scenario

The network topology illustrated in Figure 2 was used for analyzing MAC and TCP behavior using a TCP connection between a sender (S1) and a receiver (D). Wireless losses over an IEEE 802.11b link between S1 and the BS were introduced using a 2-State Markov model. The queue length at intermediate nodes was set in accordance with the Bandwidth-Delay Product (BDP) of the network.

A TCP/FTP flow was introduced for 100 sec. To investigate the impact of link recovery on the efficiency of the link and transport layer protocols, the simulations were performed using different values of RL and Frame Error Rate (FER) in the ranges mentioned along with the results. TCP's RTT estimate and layer efficiency are the measurable parameters used for performance evaluation. Layer efficiency is defined as the ratio of the total number of successful transmissions to the total transmissions made by that layer during the simulation interval. Nevertheless, the most critical parameter under observation is cwnd/RTT. For better understanding, the analysis of the results is presented for different traffic conditions: (a) a single TCP flow and (b) multiple TCP flows competing over a bottleneck link. In the mentioned topology, TCP New Jersey [25] (TCP NJ) demonstrated better throughput compared to that achieved using other well-known TCP variants (i.e. NewReno, Vegas and Westwood [26]). Therefore the analysis is presented with TCP NJ as the TCP variant. However, a similar trend in the results is observed with other TCP variants as well.

As shown in Figure 3(a), with link recovery (RL = 7), an increase in FER resulted in an increase in link retransmissions in order to combat transmission losses locally. Please note that even with 0% FER (introduced using the error model), losses were witnessed due to channel contention (because TCP data and acknowledgement packets are transmitted on the same channel in reverse directions). This led to noticeable MAC retransmissions for recovering from such losses. During simulations with link recovery (RL = 7), the percentage of wireless TCP drops was significantly reduced with increasing FER compared to that witnessed during simulations without link recovery (RL = 0) (refer to Figure 3(b)). Since the error model introduces losses as a percentage of total transmissions, in order to interpret the impact of link loss recovery correctly the analysis is presented in terms of the percentage of TCP drops. The reduction in wireless TCP drops caused a significant reduction in end-to-end TCP retransmissions (the analysis is presented as a percentage of total TCP transmissions for the reasons stated earlier), as shown in Figure 3(c).
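The sketch below couples a two-state (Gilbert-style) Markov frame-error model with a stop-and-wait retransmission loop bounded by a retry limit, and reports the residual loss rate that would still be visible to TCP together with the average extra delay the retries add, in the spirit of Equation (2). The state-transition probabilities, per-attempt delays and retry limits are illustrative assumptions and do not reproduce the simulation parameters used in this paper.

```python
import random

class TwoStateMarkovChannel:
    """Gilbert-style two-state channel: frames sent while the channel is in the BAD state are lost."""
    def __init__(self, p_good_to_bad=0.01, p_bad_to_good=0.3):
        self.p_gb, self.p_bg = p_good_to_bad, p_bad_to_good
        self.bad = False
    def frame_lost(self) -> bool:
        # Advance the channel state, then report whether the current frame is lost.
        self.bad = (random.random() < self.p_gb) if not self.bad else (random.random() >= self.p_bg)
        return self.bad

def send_frame(channel, retry_limit, t_cont=0.001, t_link=0.002):
    """Stop-and-wait ARQ bounded by RL: returns (delivered, extra delay T_ARQ in seconds)."""
    for attempt in range(retry_limit + 1):
        if not channel.frame_lost():
            return True, attempt * (t_cont + t_link)            # only failed attempts add delay
    return False, (retry_limit + 1) * (t_cont + t_link)          # frame dropped; loss visible to TCP

if __name__ == "__main__":
    random.seed(1)
    frames = 100_000
    for rl in (0, 3, 7):
        channel, lost, delay = TwoStateMarkovChannel(), 0, 0.0
        for _ in range(frames):
            ok, t_arq = send_frame(channel, rl)
            lost += 0 if ok else 1
            delay += t_arq
        print(f"RL={rl}: TCP-visible loss = {lost / frames:.4%}, mean T_ARQ = {1000 * delay / frames:.3f} ms")
```

Raising RL drives the TCP-visible loss rate down at the cost of a larger average T_ARQ, which is exactly the tension examined in the results below.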
Analysis with a Single TCP Flow

The simulations were performed using a single TCP flow, and therefore the packet losses were attributed only to the transmission errors on the wireless link. During the simulations the TCP flow encountered timeouts due either to loss of retransmissions or to loss of TCP acknowledgements in the presence of errors on the wireless link, which are very costly to TCP. Figure 4 presents a comparison of the number of TCP timeouts with RL = 0 and RL = 7. In the absence of any support for loss discrimination, TCP reacts to a timeout with a drastic reduction in cwnd, which is inappropriate for wireless losses. Figure 5(a) shows that without link recovery (RL = 0), TCP's maximum sequence number dropped rapidly with increasing FER, since the loss of TCP segments reduced the cwnd and slowed down TCP's transmissions per RTT. However, with link recovery, a significantly higher value of TCP's maximum sequence number is obtained relative to that obtained with RL = 0. This improvement is additionally attributed to the massive reduction in costly TCP timeouts in the presence of link loss recovery attempts (RL = 7). It is apparent that link ARQs can improve the performance of TCP by shielding it from wireless losses as much as possible. A comparison of the average cwnd with and without link loss recovery is shown in Figure 5(b). When RL = 7, TCP's average cwnd and maximum sequence number decreased linearly with increasing FER.

Figure 6(a) and Figure 6(b) present the transmission efficiency of the MAC and TCP protocols with RL = 0 and RL = 7. As anticipated, link recovery (RL = 7) improves the efficiency of both layers. It must be noted that without link recovery (RL = 0), TCP's efficiency is reduced to a great extent at higher FER. Hence, link recovery improves end-to-end performance in the network by improving TCP's transmission efficiency on a large scale. In Figure 6(b), an improvement of about 30% in TCP's efficiency at 5% FER is recorded. The improved efficiency of the MAC and TCP protocols results in a significant improvement in TCP's throughput, as shown in Figure 6(c).

In Figure 7(a), a comparison of TCP's estimated RTT with RL = 0 and RL = 7 is presented for different FER. Without link recovery (RL = 0), the average RTT (say RTT_0) decreases with increasing FER. As shown in Figure 5(b) earlier, cwnd_0 reduced sharply with an increase in FER up to 2%. This reduction in cwnd_0 led to fewer TCP segments in the IFQ, and hence, as expected, the RTT_0 estimated at the sender was also lowered sharply. It must be noted that RTT_0 is found to be very close to the theoretical RTT in the network. The slightly higher value is attributed to the contention delay at the wireless interface and the processing delay at nodes. On the contrary, with link recovery (RL = 7), the average RTT (say RTT_7) increased along with the increase in FER, as the link layer on average took a longer time for error recovery before delivering the TCP segments or acknowledgements in either direction. Figure 7(a) shows that the average RTT increases with FER until FER reaches about 2%. With FER above 2%, there is an insignificant change in MAC retransmissions (refer to Figure 3(a)) and hence a similar impact on the RTT_7 estimate. With FER above 2%, a marginal reduction in RTT_7 was observed due to the reduction in cwnd.
Based on the foregoing observations, it is seen that TCP with link layer recovery (RL = 7) utilizes the network with a higher average cwnd (cwnd_7) compared to that accomplished without link recovery (cwnd_0). Consequently, this leads to enhanced TCP performance. In fact, TCP with link recovery achieved a higher average cwnd at the cost of an additional rise in the average RTT estimate. Since TCP's throughput is proportional to its effective sending rate (i.e. cwnd/RTT), it is apparent that a net improvement in end-to-end TCP throughput is realized only when (cwnd_7/RTT_7) > (cwnd_0/RTT_0). The improvement in TCP's throughput follows the net improvement in the average cwnd/RTT. Please note that even with 0% FER (introduced using the error model), transmission losses were witnessed due to channel contention (because TCP data and ACK packets are transmitted on the same channel in reverse directions). Link ARQs recovered from the majority of contention losses, and hence a higher TCP throughput is seen compared to that observed without link recovery.

Analysis with Multiple TCP Flows

After analyzing the impact of link ARQ on TCP's performance in a non-congested erroneous wireless network, our simulations were extended for further investigation in a congested wireless network. For that purpose, the number of simultaneous TCP senders in the network was increased over the range of 2, 4, 8 and 16. The statistics obtained for an FER of 0.001% are shown in Figure 8. As seen from Figure 8(a), with an increase in the number of competing flows, the effective value of cwnd (over all flows on the bottleneck link) reduces with or without loss recovery, as anticipated. However, for a given number of competing flows, the effective value of the average cwnd is found to be much higher when link recovery is enabled. With an increase in the number of TCP flows, congestion in the network increases.

Moreover, link recovery in the presence of transmission losses gave an additional rise in RTT (as shown in Figure 8(b)). This aggravated the congestion in the network, and consequently a large number of congestion drops were witnessed, particularly with an increase in the number of competing flows (for a given error rate). This remarkably pulled down the growth in cwnd owing to the rise in RTT, which can be seen from Figure 8(c). As shown in the figure, with link recovery (RL = 7), an improvement in cwnd/RTT is seen when there is a single TCP flow. However, with an increase in the number of TCP flows, the improvement in cwnd/RTT obtained using link recovery is found to be insignificant. In fact, when the number of competing TCP flows increased to 4 or above, link recovery failed to protect cwnd/RTT and consequently, as presented in Figure 8(d), TCP performance is found to deteriorate.

This gives insight into the issue of TCP's performance over an IEEE 802.11 network, even in the presence of a link recovery mechanism. The problem really occurs when link layer mitigation substantially diminishes the growth in cwnd by increasing RTT to a much higher value as FER increases. A similar problem may occur when link layer mitigation fails due to substantial underlying error conditions. In fact, the use of link layer approaches may degrade performance especially in the presence of highly variable error rates. In the stated situation, a reduction in cwnd and a rise in RTT are unavoidable at the sender on account of failed ARQs. This adversely affects the throughput achieved by TCP.
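The decision criterion above, that link recovery pays off only while cwnd_7/RTT_7 exceeds cwnd_0/RTT_0, can be checked with a few lines of arithmetic. The window and RTT values below are entirely hypothetical and serve only to show how a sufficiently large RTT inflation can cancel out the cwnd gained by shielding TCP from wireless losses.

```python
def effective_rate_mbps(cwnd_segments: float, rtt_s: float, mss_bytes: int = 1460) -> float:
    """Effective TCP sending rate cwnd/RTT, expressed in Mbit/s."""
    return cwnd_segments * mss_bytes * 8 / rtt_s / 1e6

if __name__ == "__main__":
    # Hypothetical single-flow case: the cwnd gain outweighs the RTT inflation.
    print(f"{effective_rate_mbps(20, 0.120):.2f} Mbit/s with link ARQ vs "
          f"{effective_rate_mbps(8, 0.100):.2f} Mbit/s without")
    # Hypothetical congested multi-flow case: RTT inflation eats the cwnd gain.
    print(f"{effective_rate_mbps(10, 0.300):.2f} Mbit/s with link ARQ vs "
          f"{effective_rate_mbps(7, 0.150):.2f} Mbit/s without")
```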
Conclusions

We have performed a detailed analysis of the use of link ARQs to improve the end-to-end performance of TCP in wireless networks. The activities are concluded with the following outcomes.

1) Link recovery attempts using ARQ in IEEE 802.11 networks affect two important TCP parameters, namely cwnd and RTT. TCP performance is decided jointly by both of these parameters; an improvement in performance is always attributable to an improvement in the cwnd/RTT value.

2) In the absence of network congestion and under light wireless errors, link ARQs yield a superior value of cwnd/RTT in comparison to that observed without link ARQs. This in turn results in better TCP performance.

3) Recovering from all types of wireless losses using link mitigation techniques is not always possible, particularly in high and bursty error environments. This may lead to performance degradation due to an unavoidable cutback in cwnd and an increase in RTT.

4) Link ARQs may degrade performance in congested networks even with very small amounts of wireless errors. The performance degradation is attributed to either of the following reasons: • When link level error mitigation fails, a reduction in cwnd at the TCP sender is unavoidable. Since loss recovery attempts increase RTT and are unable to protect cwnd, TCP's end-to-end performance is compromised. • If multiple TCP flows share a common wireless channel for transmission, loss recovery attempts made for one TCP flow may affect the RTT of all competing flows, and therefore overall network performance is found to be compromised.

This shows the possibility of further improvement in cwnd/RTT (and hence TCP throughput) using a corrective mechanism that makes RTT unaffected by link recovery attempts. Nevertheless, any correction that reduces RTT inappropriately may cause the sender to time out earlier while a retransmission is being performed on the wireless link. Therefore, a well-thought-out approach to RTT correction may be considered for maximizing TCP's throughput in the presence of link layer ARQs. This highlights the scope for further performance enhancement of TCP proposals that advocate retaining cwnd based on loss discrimination.

Figure 1. IEEE 802.11 link loss recovery mechanism.
Figures 3-8 present the analysis based on the results obtained from the experiments.
Figure 6. (a) Impact of link ARQ on MAC layer efficiency; (b) impact of link ARQ on TCP layer efficiency; (c) impact of link ARQ on TCP's throughput.
Figure 7. (a), (b) Impact of link ARQ on TCP's RTT estimate.
Figure 8. (a) Impact of link ARQ on average congestion window; (b) impact of link ARQ on average RTT; (c) impact of link ARQ on effective cwnd/RTT; (d) impact of link ARQ on total TCP throughput.
Inhibitory effect of matrine on blood-brain barrier disruption for the treatment of experimental autoimmune encephalomyelitis

Dysfunction of the blood-brain barrier (BBB) is a primary characteristic of experimental autoimmune encephalomyelitis (EAE), an experimental model of multiple sclerosis (MS). Matrine (MAT), a quinolizidine alkaloid derived from the herb Radix Sophorae Flavescentis, has recently been found to suppress clinical EAE and CNS inflammation. However, whether this effect of MAT occurs through protecting the integrity and function of the BBB is not known. In the present study, we show that MAT treatment had a therapeutic effect comparable to dexamethasone (DEX) in EAE rats, with reduced Evans Blue extravasation, increased expression of collagen IV, the major component of the basement membrane, and preserved structure of the tight junction (TJ) adaptor protein Zonula occludens-1 (ZO-1). Furthermore, MAT treatment attenuated the expression of matrix metalloproteinase-9 and -2 (MMP-9/-2), while it increased the expression of tissue inhibitors of metalloproteinase-1 and -2 (TIMP-1/-2). Our findings demonstrate that MAT reduces BBB leakage by strengthening the basement membrane, inhibiting the activities of MMP-2 and -9, and upregulating their inhibitors. Taken together, our results identify a novel mechanism underlying the effect of MAT, a natural compound that could be a novel therapy for MS.

Introduction

Multiple sclerosis (MS) and its animal model, experimental autoimmune encephalomyelitis (EAE), are T cell-mediated inflammatory diseases characterized by lymphocyte infiltration, demyelination, and axonal injury [1,2]. Although MS pathology is not fully understood, blood-brain barrier (BBB) dysfunction plays an essential role in the pathogenesis of this disease. In both MS and EAE, proinflammatory cells and toxic molecules migrate into the brain via the damaged BBB, resulting in cerebral edema, demyelination, and neural cell death [3,4]. The BBB is composed of basement membrane, interendothelial tight junctions (TJs), and perivascular astrocytes [5]. The basement membrane, which is composed of two distinct types, namely the endothelial basement membrane and the parenchymal basement membrane, is a tight assembly of specialized extracellular matrix molecules [6]. This membrane, together with the endothelial cell monolayer, forms a structural barrier that selectively filters blood elements [6,7]. Collagen IV comprises 90% of total protein in the basement membrane and plays a decisive role in maintaining the structural integrity of the vessel wall [8,9]. Collagen IV, as a major component of the cerebral microvascular basal lamina, is widely used as a marker to determine the extent of destruction of the basement membrane. TJs, composed of large multiprotein complexes, seal the gaps between biological barriers [4]. Altered distribution or loss of TJs is frequently seen in ischemic cerebral microvessels, resulting in diminished BBB integrity [10]. Zonula occludens-1 (ZO-1) is the primary cytoplasmic protein associated with TJs, linking the C-terminal ends of occludin and claudins to the underlying actin cytoskeleton [7]. A decrease in ZO-1 expression results in increased BBB permeability [11]. In addition, disease severity during the acute phase of EAE is directly associated with the extent of BBB permeability [12]. It has been shown that BBB disruption is accompanied by excessive expression of matrix metalloproteinases (MMPs) [13].
MMPs, including MMP-9 and MMP-2, belong to a class of zinc-containing proteases whose functions include induction of inflammation, cleavage of myelin proteins, activation or degradation of disease-modifying cytokines, and direct damage to CNS cells [14]. Abnormal increases in MMP-9 and MMP-2 in endothelial cells may collectively impair endothelial barrier function by degrading the vascular basement membrane and TJs [10,14,15]. Furthermore, MMP-9 and MMP-2 are upregulated in the CNS of rat models of EAE [16]. Tissue inhibitors of metalloproteinases (TIMPs) are endogenous inhibitors of MMPs. TIMP-1 controls MMP-9 activity through high-affinity, noncovalent binding to the MMP catalytic domain, whereas MMP-2 is bound by TIMP-2 [17]. It has been shown that TIMP-1 deficiency enhances disease severity during EAE [18]. Under normal physiological conditions, there is a constant balance between MMP and TIMP activity, which is essential in maintaining the physiological functions of the organism [19]. In contrast, an imbalance in the MMP/TIMP ratio is found in various pathological conditions in humans, such as cancer, rheumatoid arthritis, and vascular diseases [20]. For example, the serum MMP-9/TIMP-1 ratio in relapsing-remitting MS patients correlates with the development of the disease [17]. An imbalance between MMP-2 and TIMP-2 caused by radiation plays a role in the pathogenesis of brain injury [21]. Currently, treatment of MS is limited to immunomodulatory or immunosuppressive therapy, which is not always successful and often has severe side effects [22]. Hence, the search for more effective and better tolerated compounds is of great importance. Matrine (MAT) is a natural alkaloid component extracted from the herb Radix Sophorae Flavescentis, with a MW of 258.43 (C15H24N2O, Figure 1(a)). It has been reported that MAT suppresses the immune activities of T cells, B cells, and macrophages [23]. Matrine has long been used for the treatment of viral hepatitis, cardiac arrhythmia, and skin inflammation, without known side effects [24,25]. While MAT suppressed the development of EAE, its mechanism of neuroprotection has not been elucidated. The purpose of this study was to determine whether MAT treatment inhibits BBB disruption by reducing BBB leakage, strengthening the basement membrane, enhancing TJs, and regulating the balance between MMPs and TIMPs during the disease progression of EAE.

Materials and Methods

2.1. Animals. Female, 6-7-week-old Wistar rats were purchased from Shanghai Xipuer-Bikai Experimental Animal Company, China, and housed in the aseptic laboratory of the Experimental Animal Center of Henan, China. All efforts were made to minimize the number of animals used and to ensure minimal suffering.

EAE Induction and Treatment. EAE was induced as described previously [23] with only minor modifications. Spinal cord homogenate from guinea pigs (Experimental Animal Center of Hebei) weighing 300-350 g was emulsified with the same volume of complete Freund's adjuvant (CFA) (Sigma, USA) containing 6 mg/mL Bacillus Calmette-Guérin vaccine (Shanghai Institute of Biological Products, China). Each rat received a subcutaneous injection of 0.5 mL of emulsion divided among 5 sites draining into the nape and back. All procedures were approved by the Bioethics Committee of Zhengzhou University. Immunized rats were randomly divided into four groups (n = 16 each group) for different treatments.
Briefly, MAT (Jiangsu Chia Tai Tianqing Pharmaceutical Co., Jiangsu, China) was dissolved in normal saline and injected intraperitoneally (i.p.) daily at two doses: low (150 mg/kg; MAT-L) and high (250 mg/kg; MAT-H), with the dosage volume calculated at 6.7 mL/kg, from day 1 until day 17 after immunization (p.i.). Dexamethasone (DEX) (Henan Hongrun Pharmaceutical Co., Henan, China), as the positive control drug, was dissolved in normal saline (6.7 mL/kg) and injected (i.p.) daily, from day 1 until day 17 p.i., at 1 mg/kg. Immunized rats that received the same amount of normal saline only (i.p.) served as the vehicle control, and 16 nonimmunized naive rats that received the same amount of normal saline (i.p.) served as the naive group.

Clinical Scoring and Weight. Rats were monitored and weighed daily by two independent observers to evaluate the clinical scores of EAE after immunization. Neurological signs were assessed as follows [23]: 0 = no clinical signs; 1 = loss of tail tone; 2 = hind limb weakness; 3 = hind limb paralysis; 4 = forelimb paralysis; 5 = moribund or death.

Histopathological Evaluation. On day 17 p.i., two independent observers randomly selected 8 rats from each group. When the animals were sacrificed, sera and spinal cords were collected after extensive perfusion. The lumbar enlargement of the spinal cord was embedded in paraffin. After embedding, 2-3 μm thick sections were prepared and stained with hematoxylin-eosin (HE) for inflammatory infiltration and chromotrope 2R-brilliant green (C-2R-B) for demyelination. Histopathological examination was performed and scored in a blinded fashion as follows [22]: for inflammation: 0, no inflammatory cells; 1, a few scattered inflammatory cells; 2, organization of inflammatory infiltrates around blood vessels; 3, extensive perivascular cuffing with extension into the adjacent parenchyma; for demyelination: 0, none; 1, rare foci; 2, a few areas of demyelination; 3, large (confluent) areas of demyelination.

Evaluation of BBB Leakage. BBB leakage was assessed using Evans Blue (EB) dye as previously described [26]. On day 17 p.i., the 8 rats remaining in each group were anesthetized. EB dye (2%; 4 mL/kg; Sigma, St. Louis, MO, USA) was injected slowly into the tail vein and was allowed to circulate for 60 minutes. When the rats were sacrificed, brains were removed and immediately weighed. The EB dye was extracted in 2.5 mL of PBS, and 2.5 mL of 60% trichloroacetic acid was added. The mixture was then vortexed and centrifuged for 40 min at 4000 rpm, and the amount of EB dye in the supernatants was determined at 610 nm by spectrophotometry and quantified as μg/g of brain tissue.

RT-PCR Analysis of Collagen IV and ZO-1 mRNAs. The cervical spinal cords were harvested on day 17 p.i. and were prepared for analysis of collagen IV and ZO-1 mRNA using real-time polymerase chain reaction (PCR). Total cellular RNA from these tissues was isolated using TRIzol reagent (Beijing TransGen Biotech Co., Beijing, China) following the standard protocol. cDNA synthesis was performed by reverse transcription using a Promega reverse transcription kit. The cDNA copy number for each gene was determined using standard curves of the corresponding PCR product. Primers for collagen IV were 5′-GGCCCCTGCTGAAGCGTT-3′ (forward) and 5′-GTTCCCCGAGCACCTTAG-3′ (reverse), which produced a 306 bp PCR product. Primers for ZO-1 were 5′-CCATCTTTGGACCGATTGCTG-3′ (forward) and 5′-TAATGCCCGAGCTCCGATG-3′ (reverse), which produced a 372 bp PCR product.
Gene expression was normalized to the expression of the endogenous housekeeping gene β-actin. The β-actin primers were 5′-CCTCTGAACCCTAAGGCCAAC-3′ (forward) and 5′-TGCCACAGGATTCCATACC-3′ (reverse), which produced a 564 bp PCR product. To determine the relative quantification of target gene expression, we used a gel imaging analysis system (Dalian Jingmai Biotech Co., Liaoning, China).

ELISA Analysis of MMP-9 and TIMP-1. Serum collected on day 17 p.i. was assayed for concentrations of MMP-9 and TIMP-1 by ELISA following the manufacturer's instructions (R & D Systems, USA). Samples were quantified by comparison with the standard curves of MMP-9 and TIMP-1 (0-200 ng/mL).

Immunohistochemical Analysis of MMP-2 and TIMP-2. Paraffin-embedded tissue of spinal cords from each group was cut into 5 μm thick sections. Immunohistochemistry was performed on these slices using anti-rat antibodies for MMP-2 and TIMP-2 (all from Beijing TransGen Biotech Co., Beijing, China). Sections were rinsed and incubated in nonbiotinylated goat anti-rabbit IgG secondary antibody. The chromophore product was developed using a Simple Stain DAB solution (Beijing TransGen Biotech Co., Beijing, China). The integral optical density (IOD) of positive cells in a restricted area was determined to represent the expression of MMP-2 and TIMP-2 using the Biosens Digital Imaging System v1.6.

2.9. Statistical Analysis. All data are presented as mean ± SD. Statistical analysis was performed with SPSS 16.0 (SPSS, Chicago, USA). Multiple comparisons were performed using the Kruskal-Wallis test, or ANOVA, followed by the LSD-t test, as appropriate. A P value less than 0.05 was considered significant.
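The group-comparison workflow described in the statistical analysis section (an omnibus ANOVA or Kruskal-Wallis test followed by pairwise comparisons) can be mirrored with standard scientific Python. The sketch below uses made-up group values and substitutes plain pairwise t-tests for SPSS's LSD-t procedure, so it illustrates the workflow rather than reproducing the paper's analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical Evans Blue extravasation values (ug/g brain tissue) for four groups of 8 rats.
groups = {
    "naive":   np.array([1.1, 1.3, 1.0, 1.2, 1.1, 1.4, 1.2, 1.3]),
    "vehicle": np.array([4.8, 5.2, 4.5, 5.0, 5.5, 4.9, 5.1, 4.7]),
    "MAT-L":   np.array([3.6, 3.9, 3.4, 3.8, 3.7, 4.0, 3.5, 3.8]),
    "MAT-H":   np.array([2.4, 2.7, 2.2, 2.6, 2.5, 2.8, 2.3, 2.6]),
}

# Omnibus tests across all groups (parametric and rank-based, as described in the methods).
f_stat, p_anova = stats.f_oneway(*groups.values())
h_stat, p_kw = stats.kruskal(*groups.values())
print(f"ANOVA p = {p_anova:.3g}, Kruskal-Wallis p = {p_kw:.3g}")

# Pairwise comparisons against the vehicle group (plain t-tests stand in for LSD-t here).
for name in ("naive", "MAT-L", "MAT-H"):
    t, p = stats.ttest_ind(groups[name], groups["vehicle"])
    print(f"{name} vs vehicle: p = {p:.3g}")
```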
Results

MAT Treatment Alleviates Clinical Severity of EAE. In the vehicle-treated group, clinical decline typically started on day 10 p.i., while EAE onset for MAT-treated rats occurred on day 12 p.i. (low dose) and day 13 p.i. (high dose) (Figure 1(b)). Compared to the vehicle-treated group, both MAT-treated groups exhibited significantly lower mean maximum clinical scores (Figure 1(c)) and body weight loss (Table 1). Treatment with DEX also delayed disease progression and reduced clinical scores compared to the vehicle-treated group (P < 0.05). There was no significant difference between animals treated with DEX and the two different doses of MAT.

Effect of MAT Treatment on CNS Histopathology. To assess EAE neuropathology, lumbar enlargements of spinal cord samples were examined using H&E and myelin staining (Figure 2). Perivascular cuffing with mononuclear cells and infiltration into the CNS parenchyma were observed in the spinal cord of rats in the vehicle-treated group (Figure 2), while the extent of cellular infiltration was significantly decreased in the MAT-treated groups (both P < 0.01). Treatment with DEX showed stronger inhibition of cellular infiltration than the MAT-treated groups (both P < 0.01). Cellular infiltration was not observed in the naive group. Moreover, as shown in Figure 2, large areas of demyelination were observed in the vehicle-treated group, while the MAT- and DEX-treated groups exhibited only a few areas of demyelination. No significant difference in demyelination was observed between the MAT-H and DEX groups.

MAT Inhibits Evans Blue (EB) Leakage through the BBB. Destruction of the BBB is one of the important features of MS and EAE. We quantified the extravasation of EB dye into the brain as an indicator of BBB permeability. EB extravasation into the brain of vehicle-treated EAE rats was significantly higher than in the naive brain (P < 0.01) (Figure 3). The content of EB was significantly decreased in the MAT- and DEX-treated groups compared to the vehicle-treated group (all P < 0.01), while a more profound decrease in EB content was observed in rats treated with MAT-H and DEX than with MAT-L (P < 0.05). No significant difference was observed between those treated with the high dose of MAT and DEX.

MAT Protects Collagen IV and ZO-1 mRNA Expression. To determine basement membrane disruption, we evaluated the mRNA expression of collagen IV, a major component of the cerebral microvascular basement membrane, using RT-PCR analysis. As shown in Figure 4(a), the brightest band was exhibited in the naive group, while the faintest band was in the vehicle-treated group, consistent with the quantitative analysis (Figure 4(c)). A dose-dependent increase in collagen IV expression was observed in the MAT-treated groups. While the MAT-L-treated group showed lower collagen IV expression than the DEX group (P < 0.01), there was no significant difference between the MAT-H and DEX groups. We also assessed ZO-1 mRNA expression to determine TJ disruption. A pattern of mRNA expression similar to that of collagen IV was observed for ZO-1 (Figures 4(b) and 4(c)).

MAT Adjusts the Balance between MMP-9 and TIMP-1 in Serum. The serum concentration of MMP-9 was measured by ELISA. The amount of MMP-9 was dramatically increased in the vehicle-treated group compared to the naive group (P < 0.01) (Figure 5). MAT treatment largely reduced the MMP-9 content compared to the vehicle-treated group (both P < 0.01), and the effect was dose dependent. Furthermore, a significantly lower amount of MMP-9 was observed in the DEX-treated group than in the MAT-treated groups (P < 0.01). We also measured serum concentrations of TIMP-1 in the different groups. Figure 5 shows a significant decrease in TIMP-1 concentration in the vehicle-treated group compared to the naive group (P < 0.01). TIMP-1 serum levels in the MAT-treated groups were significantly higher than in the vehicle-treated group (both P < 0.01). While the low dose of MAT induced a lower serum TIMP-1 level than DEX (P < 0.01), the high dose of MAT increased the serum TIMP-1 level to a greater extent than DEX (P < 0.01).

MAT Regulates the Balance between MMP-2 and TIMP-2 in the CNS. To determine the MMP-2/TIMP-2 balance in the CNS, their expression in the spinal cord was measured by immunohistochemistry. As shown in Figure 6, MMP-2 expression was significantly increased in the vehicle-treated group over the naive group (P < 0.01). The differences in MMP-2 expression between the vehicle-treated and the two MAT-treated groups were significant (both P < 0.01). Furthermore, the effect of DEX in decreasing MMP-2 content was stronger than that of low-dose MAT (P < 0.05), but there was no significant difference compared with the MAT-H group. We then measured TIMP-2 content using the same method. As shown in Figure 6, TIMP-2 expression in the vehicle-treated group was significantly lower than in the naive group (P < 0.01) and the MAT-treated groups (P < 0.05-0.01). A significantly lower amount was obtained in the MAT-L group than in the MAT-H group (P < 0.05). While the MAT-L-treated group showed lower TIMP-2 expression than the DEX-treated group (P < 0.05), there was no significant difference between the MAT-H and DEX groups.

Discussion

Although administration of MAT reduced CNS inflammatory infiltration and demyelination, the effect of this natural compound on the BBB had not yet been studied.
In the present study, we provide evidence that the therapeutic effect of MAT is, at least partially, achieved through strengthening BBB integrity, protecting the basement membrane as well as TJ proteins, and regulating the balance between MMPs and TIMPs in both the periphery and the CNS.

BBB destruction has been implicated in many CNS diseases, such as MS and stroke [4,11]. In MS patients and in the EAE model, breakdown of the BBB is an early critical event, which is associated with the influx of inflammatory cells and, ultimately, with a poor outcome [3]. We thus quantified the extravasation of EB dye into the brain as an indicator of BBB permeability. Indeed, our study showed that EB leakage was markedly increased in the EAE model, which was associated with a decrease in collagen IV and ZO-1 expression in vehicle-treated rats compared to naive rats. MAT treatment largely and significantly decreased the EB content in the brain compared with the vehicle group. Together, these results demonstrate that BBB integrity is compromised during the development of EAE and that MAT treatment protects the BBB.

Figure 3: BBB integrity. Evans Blue was injected i.v. on day 17 p.i. and brains were harvested at 60 min to determine Evans Blue extravasation. Results are expressed as mean ± SD (n = 8 each group). △△P < 0.01, compared to the naive group; **P < 0.01, compared to the vehicle group; ##P < 0.01, compared to the DEX group; ◊P < 0.05, comparison between the MAT-L and MAT-H groups.

BBB integrity depends on adequate structural support from the basement membranes, and collagen IV comprises up to 90% of total protein in the basement membrane [7]. In addition, collagen IV is critical for cell signaling through its interaction with various receptors and adhesion molecules, and its expression is therefore a marker of barrier damage and impairment [27]. A study by Lee et al. clearly shows that degradation of collagen IV has a role in the pathogenesis of BBB destruction and brain injury [28]. To date, it is not known whether decreased collagen IV contributes to BBB damage caused by EAE. We have shown in the present study that expression of collagen IV mRNA was significantly decreased in the vehicle-treated group compared to the naive group, and a dose-dependent increase of collagen IV mRNA expression was observed in the MAT-treated groups. These results indicate that collagen IV degradation plays an important role in BBB disruption and that MAT treatment effectively preserves the content of collagen IV.

In the BBB, TJs are composed of large multiprotein complexes that mediate tight intercellular contacts among adjacent cells and play a critical role in maintaining BBB function by improving the barrier function at the endothelial level [29]. Previous publications have reported varying degrees of TJ pathology in EAE [30,31]. Bennett et al. found that relocalization of ZO-1, which is a multidomain polypeptide required for the assembly of TJs [32], precedes disease onset and correlates with CNS infiltration in EAE [33]. Similar to these observations, our study found a decrease in ZO-1 mRNA expression in the vehicle-treated group compared to the naive group, and ZO-1 mRNA levels were significantly improved with MAT treatment in a dose-dependent manner. These results are consistent with the preserved collagen IV levels and indicate a protected BBB basement membrane. In order to further study the mechanism of BBB protection induced by MAT treatment, we evaluated the activity and balance of MMPs and TIMPs.
MMPs, a group of zinc-containing endopeptidases, cleave most components of the basement membrane, including fibronectin, laminin, proteoglycans, and collagen IV [34,35]. It has been found that focal MMP-2 and MMP-9 activity is closely associated with infiltrating T cells penetrating through the parenchymal basement membrane [5,6]. Upregulation of MMP-2 and MMP-9 results in the degradation of TJs after focal ischemia/reperfusion, which can be reversed by MMP inhibition [36,37]. In addition, a selective upregulation of MMP-9 in MS disease activity has been described [17]. Similarly, a significant increase in MMP-2 expression in the central canal of the cervical spinal cord is a sign of inflammation in acute EAE [38]. Furthermore, among the MMP family, MMP-9, together with MMP-2, is a member of the collagenase IV family, which has been implicated in the degradation of constituents of the basement membranes [15]. Targeting MMPs and chemokines has been considered an important therapeutic approach, alone or in combination with current medications, to enhance their effect in neurological disorders such as MS [39]. In the present study, significant upregulation of MMP-9 in serum was observed in the vehicle-treated group compared to the naive group, and MAT treatment reduced the levels of MMP-9. Further, the loss of collagen IV and ZO-1 was reduced by blocking MMP-9 and MMP-2 in EAE. These findings suggest that overexpression of MMP-9 and MMP-2 in EAE could have been a causative agent in the reduced intensity of collagen IV and ZO-1; inhibition of MMP-9 and MMP-2 levels by MAT treatment will preserve the levels of collagen IV and ZO-1 and will thus be beneficial for BBB integrity.

The active forms of all MMPs are inhibited by a family of specific inhibitors, the tissue inhibitors of metalloproteinases (TIMPs) [18,19]. MMP-9 and MMP-2 are preferentially inhibited by TIMP-1 and TIMP-2, respectively [17]. It has been found that normal homeostasis in the CNS requires a balance between MMPs and TIMPs, while an imbalance between these molecules is often associated with CNS pathology [40]. For example, radiation-induced brain injury is associated with an increased ratio between MMP-2 and TIMP-2 [21], and a significant increase in the MMP-9/TIMP-1 ratio also correlates with MS activity [17]. In our study, an imbalance between MMP-9 and TIMP-1 was observed in the vehicle-treated EAE group, with upregulation of MMP-9 and downregulation of TIMP-1; this was also the case for the balance between MMP-2 and TIMP-2. We speculate that overexpression of MMP-9/-2 in EAE was counterbalanced by the MAT-mediated increase in TIMP-1/-2 expression, thus constituting a steady balance of MMPs and TIMPs to which the improved BBB function can be attributed.

Figure 5: On day 17 p.i., sera were harvested from treated and nontreated EAE rats, with sera from naive rats serving as control. MMP-9 and TIMP-1 production was determined by ELISA. Results are expressed as mean ± SD (n = 8 each group). △△P < 0.01, compared to the naive group; **P < 0.01, compared to the vehicle group; ##P < 0.01, compared to the DEX group; ◊◊P < 0.01, comparison between the MAT-L and MAT-H groups.

To further investigate the therapeutic effects of MAT on EAE, we compared MAT with DEX, a glucocorticoid, which was chosen as the positive control drug because of its strong ability to suppress inflammation. While MAT at a low dose showed a weaker effect than DEX, a comparable effect was observed between the DEX and MAT-H groups, with a stronger effect of MAT in improving the TIMP-1 content in the serum. We believe that MAT could prove to be superior to DEX, whose long-term use carries with it the risk
of side effects common to systemic glucocorticoids occurring over a relatively prolonged period. These side effects include hyperglycemia, hypertension, negative calcium balance, osteoporosis, weight gain, and even immunodeficiency [41,42]. The extensive use of DEX in the treatment of severe acute respiratory syndrome (SARS) has often resulted in hypocortisolism and osteonecrosis of the femoral head, causing patients to lose the ability to work [43]. In contrast, patients with hepatitis B who used MAT for a long time showed a significant therapeutic effect and good tolerance, with only minor side effects such as infrequent, transient dizziness and nausea. Whether long-term use of MAT would be safer and have fewer side effects than DEX needs further investigation.

In summary, our study demonstrates that improving BBB integrity is one of the mechanisms of MAT action in EAE therapy. This effect is achieved at least partially through inhibiting the activities of MMP-2/-9 and protecting the basement membranes and tight junction proteins, thus improving BBB integrity. As a result, inflammatory infiltration into the CNS is largely reduced, thereby protecting CNS tissues from proinflammatory cell/mediator-induced damage. While the process of immune cell extravasation is partially an endothelial cell-mediated process [44], whether MAT reduces this pathway of immune cell infiltration is not yet known. Nevertheless, the results of the present study, together with the suppressive effect of MAT on Th1/Th17 cells [23] and its safety, suggest that MAT could qualify as an effective, alternative medication in MS therapy and that further study to test this possibility is warranted.
Incomplete Circle of Willis and cerebrovascular reactivity in asymptomatic patients before and after carotid endarterectomy

INTRODUCTION

The Circle of Willis (CoW) provides the most significant collateral flow in the presence of significant stenosis or occlusion of the internal carotid artery (ICA). The anterior collateral segment of the CoW (ACA1, AcomA) is a connection between the opposite carotid arteries, and the posterior collateral segment (ACP1, AcomP) provides collateral flow from the posterior cerebral circulation [1]. The morphology of the CoW can be evaluated by non-contrast enhanced magnetic resonance angiography (nCEMRA), and it depicts the functional status of collateral flow [2]. Although there are a number of CoW morphology types, in terms of collateral flow an "incomplete" type and a "complete" type of CoW can be recognized. Contrary to the "complete" CoW, which depicts normal CoW morphology, an "incomplete" CoW refers to hypoplasia or occlusion of the anterior and posterior collateral segments and the consequent absence of collateral flow provided by the CoW. In the presence of significant ICA stenosis, an incomplete CoW can be associated with impaired cerebral blood flow, reduction of cerebral autoregulation, decreased circulatory reserve and low cerebrovascular reactivity, leading to an increased risk of stroke [3]. Cerebrovascular reactivity describes the capacity of adaptation of cerebral blood flow as a reaction to different stimuli. If insufficient cerebral blood flow is present, the blood vessels are maximally dilated, and the residual capacity to increase blood flow is limited. In this study we analyzed changes in cerebrovascular reactivity after carotid endarterectomy in asymptomatic patients with respect to complete and incomplete CoW morphology.

Based on the aforementioned diagnostic procedures, the inclusion and exclusion criteria for this study were defined: patients with unilateral carotid disease (contralateral carotid stenosis less than 50%) were included, with no significant lesions of the intracranial portion of the carotid arteries, the vertebral and basilar arteries or the cerebral arteries, and no evidence of "silent brain infarctions" larger than 1 cm. Patients who presented with insufficient data, a poor insonation window for measurement of cerebrovascular reactivity or low compliance with the procedure, and patients whose written consent was not provided, were excluded from the study.
We collected preoperative data on patients' general characteristics, risk factors and comorbidities: age and gender, presence of hypertension, diabetes, smoking, hyperlipoproteinemia, history of ischemic heart disease or heart failure, left ventricular hypertrophy, significant heart valve disease, atrial fibrillation, chronic kidney disease, chronic obstructive pulmonary disease and peripheral artery disease (PAD). An assessment by a clinical cardiologist was provided. The morphology of the CoW was determined based on the 3D TOF sequence of MRA. By morphology, patients were classified into two groups: those with a complete CoW and those with an incomplete CoW.

For estimation of cerebrovascular reactivity we used the "Apnea test" method, previously described by M. Silvestrini et al. [4]. In the apnea test, patients are asked to hold their breath for 30 seconds, and the consequent increase in blood CO2 is used as a stimulus for dilatation of the cerebral blood vessels; the response is quantified by the breath holding index (BHI). The cut-off point for a normal finding was set at 0.69. In this study, the apnea test was performed on all patients one day before and one month after surgery. We compared BHI values before and after surgery in the groups of patients with complete and incomplete CoW for both sides: ipsilateral and contralateral to the stenosis.
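For illustration, the sketch below computes a breath holding index in the form commonly attributed to the Silvestrini apnea test: the percentage rise of the middle cerebral artery mean flow velocity at the end of apnea relative to baseline, divided by the breath-hold duration in seconds. The transcranial Doppler velocities are hypothetical values, not patient data from this study, and the exact formula used here is an assumption based on the commonly cited definition rather than on this paper's protocol.

```python
def breath_holding_index(mfv_baseline_cm_s: float, mfv_end_apnea_cm_s: float,
                         apnea_duration_s: float = 30.0) -> float:
    """Breath holding index (BHI) as commonly defined for the apnea test:
    percentage increase in mean flow velocity divided by breath-hold time (seconds)."""
    pct_increase = 100.0 * (mfv_end_apnea_cm_s - mfv_baseline_cm_s) / mfv_baseline_cm_s
    return pct_increase / apnea_duration_s

if __name__ == "__main__":
    # Hypothetical transcranial Doppler readings (cm/s), purely illustrative.
    print(f"preserved reactivity: BHI = {breath_holding_index(55.0, 70.0):.2f}")  # ~0.91, above 0.69
    print(f"exhausted reactivity: BHI = {breath_holding_index(55.0, 62.0):.2f}")  # ~0.42, below 0.69
```

With a 30-second breath hold, the first hypothetical reading lands above the 0.69 cut-off and the second falls below it.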
Although numerous types of CoW morphology have been described, the simplification to "complete" and "incomplete" CoW has been accepted for easier use in clinical practice [15]. It is evidenced that 25-30% of asymptomatic patients and 45-60% of symptomatic patients with carotid artery disease have an incomplete CoW [16]. In symptomatic patients with significant ICA stenosis, an increased annual risk of stroke of up to 13-17% is evidenced if an incomplete CoW is present [17,18]. For asymptomatic patients with ICA stenosis there is a lack of data from controlled prospective studies [19]. Retrospective post-hoc analyses of the SMART group showed an increased but not statistically significant risk of stroke in patients with an "incomplete" CoW [14]. Our study showed that in patients with an incomplete CoW, circulatory reserve at the side of ICA stenosis was significantly lower (median BHI = 0.62) compared to the patients whose MRA findings showed a complete CoW (median BHI = 0.88). As well, BHI at the side of ICA stenosis was lower compared to the side opposite to the stenosis (median BHI = 1.09). BHI in the group of patients with incomplete CoW tended to be lower than the proposed cut-off value for normal findings, which is 0.69 [4]. Operative treatment resulted in a significant increase in BHI at the side of the stenosis in both groups of patients, with complete and with incomplete CoW. We registered both a significant improvement of circulatory reserve and normalization of the findings in the majority of patients in whom BHI was below the threshold of 0.69. Such an effect indicates that revascularization of the stenosed ICA removes the cause of impaired circulatory reserve and reduced vasomotor reactivity. We found the effect of surgical treatment to be more beneficial in asymptomatic patients with incomplete CoW, with a more significant increase of BHI. For the opposite side we found a trend of greater postoperative increase in BHI value in the group with complete CoW, which can be explained by the phenomenon of "steal" from the healthy side over active collaterals that was present before the operation. The literature emphasizes the importance of the effect of carotid endarterectomy in patients with extremely low parameters of cerebral vasoreactivity [20,21]. Silvestrini and Soine found a beneficial effect of surgery only in symptomatic but not in asymptomatic patients [22]. In the aforementioned research the asymptomatic patients were not stratified according to CoW morphology. A significant improvement of cerebrovascular reactivity after carotid endarterectomy in asymptomatic patients can be registered for both sides of the brain [23,24]. Surgical treatment of asymptomatic and symptomatic patients is followed by normalization of cerebrovascular reactivity and collateral flow in the CoW [25]. Improvement of cognitive function after carotid endarterectomy, along with the improvement of cerebrovascular reactivity, has also been emphasized [26]. The previously mentioned SMART study was one of the rare studies that followed operated and non-operated asymptomatic patients with complete and incomplete CoW; still, it was a retrospective study [14]. The apnea test and its modifications are easily available and can be performed in most vascular labs; it has also proved to be comparable to other methods of measuring cerebrovascular reactivity [27]. Still, there is a problem with its reliability, especially in patients with low compliance with the procedure, which is recognized as a limitation of this study.
The association of an incomplete CoW with low cerebrovascular reserve is evident, as is the effect of ICA revascularization on cerebrovascular reactivity, but whether the presence of an incomplete CoW can be regarded as a risk feature in asymptomatic ICA stenosis is still to be debated. CONCLUSIONS In most asymptomatic patients cerebrovascular reactivity restores to normal following carotid endarterectomy. Parameters of cerebrovascular reactivity are lower in patients with an incomplete CoW, and the increase after carotid endarterectomy is more significant in such patients. This suggests that carotid endarterectomy is more beneficial in asymptomatic patients with incomplete CoW in terms of cerebrovascular reactivity, but whether it indicates clinical benefit in such patients (i.e., reduction of the risk of stroke) is yet to be confirmed by future prospective studies.
2020-09-10T10:16:35.936Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "24a314ad7d8e1276c3c1339341bfda15b4772d2b", "oa_license": "CCBYNC", "oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0370-81792000068M", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "e0d707a10ac73988899b550c00dc318866aa8828", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
270079609
pes2o/s2orc
v3-fos-license
Dietary Patterns and New-Onset Diabetes Mellitus in Southwest China: A Prospective Cohort Study in the China Multi-Ethnic Cohort (CMEC) (1) Background: Little is known about the relationship between the Dietary Approaches to Stop Hypertension (DASH) pattern and diabetes in cohort studies, and the dietary patterns in the Chongqing natural population are unknown. (2) Methods: 14,176 Chinese adults, aged 30-79 years old, participated in this prospective study, from September 2018 to October 2023. A dietary assessment was conducted using a food frequency questionnaire, and three main dietary patterns were extracted by principal component analysis. DASH pattern scores were calculated by standards. (3) Results: During the 4.64 y follow-up, 875 participants developed diabetes (11.3/1000 person-years). Each a posteriori dietary pattern is named after its main dietary characteristics (meat pattern, dairy products-eggs pattern, and alcohol-wheat products pattern). High consumption of the DASH-pattern diet reduced the risk of diabetes (Q5 vs. Q1 HR: 0.71; 95% CI: 0.40-0.56), while high consumption of the alcohol-wheat products pattern diet was associated with a high risk of diabetes (Q5 vs. Q1 HR: 1.32; 95% CI: 1.04, 1.66). The other two dietary patterns were not associated with diabetes. In subgroup analysis, there was an interaction between the DASH pattern and sex (P for interaction < 0.006), with a strong association in females. (4) Conclusions: The DASH pattern may be associated with a reduced risk of new-onset diabetes, and the alcohol-wheat products pattern may be positively associated with new-onset diabetes. These findings may provide evidence for making dietary guidelines in southwest China to prevent diabetes. Introduction Diabetes mellitus (DM) is a common metabolic disease characterized by abnormal hyperglycemia. The latest report (2021) released by the International Diabetes Federation (IDF) estimated that 537 million (10.5%) adults aged 20-79 years old have DM around the world and that there is 1 diabetic in every 10 people [1]. The overall prevalence of diabetes in mainland China increased from 10.9% in 2013 to 12.8% in 2017 [2]. With the increasing prevalence of this chronic disease, the prevention of DM is urgently needed. Individual food consumption as a lifestyle intervention can reduce the potential risks of diabetes (such as obesity, hyperlipemia, and abnormal serum glucose) by adjusting single dietary intakes [3][4][5]. However, the effect of different combinations of foods on the body may conflict with single-food results. In recent years, there has been an increasing amount of research on dietary patterns. Analyses of dietary patterns can be approximately divided into a priori eating pattern studies and a posteriori eating pattern studies. Dietary Approaches to Stop Hypertension (DASH), a well-known a priori eating pattern, has been widely shown to be beneficial in reducing cardiovascular disease, especially in hypertension [6]. However, its research evidence regarding diabetes is scarce [7]. Unlike a priori eating patterns with established standards, the a posteriori pattern is structured based on regional data and multivariate statistical analysis tools [8]. Due to different socio-demographic characteristics in different regions, there are many studies on a posteriori dietary patterns. In fact, it is now generally accepted that there is not a "one-size-fits-all" eating pattern for individuals with diabetes, and the American Diabetes Association (ADA) recommends that meal planning should be individualized [9].
The China Multi-Ethnic Cohort (CMEC) Study is a large-scale epidemiological study undertaken in Southwest China [10], while the study of dietary patterns and diabetes in it has been limited to cross-sectional observation. As one of the main areas in the CMEC, Chongqing has a great deal of mountainous and hilly terrain, and it is one of the most humid areas in China. As well, the cuisine of Chongqing is characterized by spicy food. Therefore, the food pattern in this region is also different from that of southwestern China as a whole. Hence, this study aimed to establish food patterns and examine the association between dietary patterns and DM incidence in Chongqing, China. Study Design and Study Population This prospective cohort study was conducted from September 2018 to October 2023 in Chongqing, a municipality in southwestern China. After multi-stage, stratified, community-based cluster sampling from 13 main districts and counties (districts and counties of the same grade), 23,308 Chinese adults of Han nationality, aged 30-79 years old, who had lived in the local area for half a year or more, participated. Ethical approval was obtained from the Medical Ethics Review Committee of Sichuan University (K2016038) on 9 November 2016. All participants gave written informed consent. In the present study, a total of 14,176 potential participants were screened for eligibility (Figure 1) and people with the following conditions were excluded at baseline: (1) those with self-reported diabetes; (2) those who had cardiovascular diseases, gastrointestinal disease [11,12], or cancer; (3) those with an extreme BMI value (<14 or >45 kg/m²); (4) those who were pregnant; (5) those without any food intake information; and (6) those with extreme energy intake (defined as <800 or >4800 kcal per day for males and <500 or >4000 kcal per day for females [13]).
Dietary Patterns Assessment The baseline survey from September 2018 to January 2019 consisted of a tablet-based electronic questionnaire administered via face-to-face interviews, anthropometric measurements, and blood tests. More details regarding the questionnaire design are provided in Table S1. The Food Frequency Questionnaire (FFQ) mainly contains 13 crude common food items, including the quantity (average grams per meal according to standard serving-size molds) and frequency (four frequency categories ranging from how many times per day to per year) consumed during the past 12 months. In other sections, information regarding alcohol, tea, and beverages was collected by questions about current status, duration, types, and frequency. Cooking oil and salt were roughly recorded by asking about household consumption. In 2020, we conducted repeated FFQs and 24-h dietary recalls (24HDRs) to assess the reproducibility and validity of the baseline FFQ. The intraclass correlation coefficient (ICC) for reproducibility ranged from 0.25 (rice) to 0.68 (tea). The Spearman coefficients for validity ranged from 0.41 (fresh vegetables) to 0.63 (dairy products). More details can be seen in Table S4. For the a posteriori dietary patterns, we first calculated the personal daily food intake (g/day) for 18 food categories, and then standardized the values by Z score. The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.610 > 0.60, and Bartlett's test of sphericity was significant (chi-square = 10,138.57, p < 0.001). Principal component analysis (PCA) was used to construct the a posteriori dietary patterns. After varimax rotation, we picked out three dietary patterns which met the statistical criterion of an initial eigenvalue > 1, with food categories with an absolute factor loading coefficient > 0.30 as each pattern's principal components (Figure 2). For the a priori dietary pattern, we focused on the DASH pattern and used slightly adjusted calculation criteria to form the pattern (more information can be seen in Table S5). All the patterns were divided into quintiles for analysis.
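The a posteriori pattern extraction described above (Z-score standardization, KMO and Bartlett checks, PCA with varimax rotation, eigenvalue > 1) can be sketched as follows. This is only an illustration of the workflow, not the authors' code: the file name, column labels, and the use of the factor_analyzer package are assumptions.

import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

# Hypothetical input: one column per food category (g/day), one row per participant
intake = pd.read_csv("ffq_daily_intake_g.csv")

z = (intake - intake.mean()) / intake.std()  # Z-score standardization

# Sampling adequacy and sphericity checks reported in the text
chi_sq, p_value = calculate_bartlett_sphericity(z)
_, kmo_overall = calculate_kmo(z)
print(f"KMO = {kmo_overall:.3f}, Bartlett chi-square = {chi_sq:.1f} (p = {p_value:.3g})")

# Principal-component extraction with varimax rotation; the study retained the
# three components with initial eigenvalue > 1
fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
fa.fit(z)

loadings = pd.DataFrame(fa.loadings_, index=intake.columns,
                        columns=["pattern_1", "pattern_2", "pattern_3"])
print(loadings.round(2))  # foods with |loading| > 0.30 characterize each pattern

# Factor scores per participant, split into quintiles for the survival models
scores = pd.DataFrame(fa.transform(z), columns=loadings.columns)
quintiles = scores.apply(lambda s: pd.qcut(s, 5, labels=["Q1", "Q2", "Q3", "Q4", "Q5"]))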
Assessment of Covariates Each participant enrolled in the CMEC study at baseline was asked about sociodemographic characteristics (sex, age at recruitment, education level, marital status, occupation, household income), personal and family history of major diseases (cancer, CVD, diabetes, hypertension, etc.), lifestyle (smoking status, drinking status, tea consumption, dietary habits, physical activity), sleep, and mental health using a standard questionnaire. Physical examinations were performed by medical professionals. The examination indicators include height, weight, waist circumference, gynecological B-ultrasound (uterus, ovary), and so on. Measurements of height and weight were taken using a vertical altimeter and an electronic scale, without shoes, hats, or heavy coats (accurate to 0.1 cm and 0.1 kg, respectively). Blood pressure was measured in a calm state (sitting still for at least 5 min). Three consecutive sets of data were collected by an electronic sphygmomanometer, with an interval of 1 min. Intravenous blood samples were taken between 7 a.m. and 9:30 a.m. (fasting for at least 8 h). Specimens were then collected and transported to a third-party laboratory in dry-ice packaging. Blood samples were tested by the laboratory, including fasting blood glucose (FBG), glycated hemoglobin (HbA1c), total cholesterol (TC), triglyceride (TG), low-density lipoprotein cholesterol (LDL-C), high-density lipoprotein cholesterol (HDL-C), etc.
Continuous Variables Converted to Categorical Variables Except for the nutrients (daily energy, protein, fat, and carbohydrate, calculated from daily food category intakes; see the standard in Table S3), all other variables were converted to categorical variables. Age was divided into two groups: the youngsters (<60 years old) and the elderly (≥60 years old). In consideration of the few participants in the underweight or obese categories (n = 247 (1.7%) and n = 634 (4.5%), respectively) defined by BMI [14], we divided BMI into two groups: standard and below (<24 kg/m²) and overweight and above (≥24 kg/m²). Educational level was divided into three levels: primary school or below, middle school or high school, and high school or above. Weekly physical activity was categorized according to the Physical Activity Guidelines for Americans (2018) [15]: <12.0 METs-h per week as insufficient physical activity, ≥12.0 METs-h per week as sufficient physical activity. Frequency of eating spicy food was divided into three levels: <1 day per week as low-rate eating, 1-5 days per week as medium-rate eating, and ≥6 days per week as high-rate eating. Smoking, alcohol, and tea were treated as binary categorical variables (e.g., current smokers or not). According to the guidelines for the prevention and treatment of type 2 diabetes mellitus in China (2020) [16], five conventional biochemical indexes were divided into two categories. Serum creatinine (SCr) was divided into two groups: 57-97 µmol/L among 20-59 year olds and 57-111 µmol/L among 60-79 year olds as normal in males, and 41-73 µmol/L among 20-59 year olds and 41-81 µmol/L among 60-79 year olds as normal in females; beyond these ranges is abnormal. Systolic blood pressure (SBP) was divided into two groups: 90-140 mmHg as normal, <90 or >140 mmHg as abnormal. Diastolic blood pressure (DBP) was divided into two groups: 60-90 mmHg as normal, <60 or >90 mmHg as abnormal. Triglyceride (TG) was divided into two groups: 0.56-1.70 mmol/L as normal, <0.56 or >1.70 mmol/L as abnormal. Total cholesterol (TC) was divided into two groups: 2.84-5.68 mmol/L as normal, <2.84 or >5.68 mmol/L as abnormal. Low-density lipoprotein cholesterol (LDL-C) was divided into two groups: 2.10-3.10 mmol/L as normal, <2.10 or >3.10 mmol/L as abnormal. High-density lipoprotein cholesterol (HDL-C) was divided into two groups: 1.14-1.76 mmol/L as normal, <1.14 or >1.76 mmol/L as abnormal. Outcome Ascertainment The outcomes in this study were collected at two follow-up visits, in 2021 and 2023. Typical diabetic symptoms with fasting plasma glucose (FPG) ≥ 126 mg/dL (7.0 mmol/L), or 2-h plasma glucose (2-h PG) ≥ 200 mg/dL (11.1 mmol/L) during an oral glucose tolerance test (OGTT), or a random plasma glucose ≥ 200 mg/dL (11.1 mmol/L), or HbA1c (glycosylated hemoglobin) ≥ 6.3% were the criteria for diagnosing diabetes [16]. Participants with a doctor's diagnosis were considered new-onset diabetics. Monitoring systems and databases formed in routine work were used to supplement or confirm the diagnosis information.
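The dichotomization rules above translate directly into code. A small sketch applying a subset of the stated cut-offs (the function and variable names are illustrative, not from the study):

def categorize_participant(sex: str, age: int, scr_umol_l: float,
                           sbp_mmhg: float, dbp_mmhg: float, bmi: float) -> dict:
    """Derive categorical covariates from the cut-offs described in the text."""
    if sex == "male":
        scr_normal = 57 <= scr_umol_l <= (97 if age < 60 else 111)
    else:
        scr_normal = 41 <= scr_umol_l <= (73 if age < 60 else 81)
    return {
        "age_group": "elderly" if age >= 60 else "youngster",
        "bmi_group": "overweight_and_above" if bmi >= 24 else "standard_and_below",
        "scr": "normal" if scr_normal else "abnormal",
        "sbp": "normal" if 90 <= sbp_mmhg <= 140 else "abnormal",
        "dbp": "normal" if 60 <= dbp_mmhg <= 90 else "abnormal",
    }

print(categorize_participant("female", 63, 70.0, 135, 85, 25.1))
# {'age_group': 'elderly', 'bmi_group': 'overweight_and_above', 'scr': 'normal',
#  'sbp': 'normal', 'dbp': 'normal'}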
Statistical Analyses We used PCA to establish three dietary patterns, and divided these patterns and the DASH pattern into quintiles. The socio-demographic baseline characteristics, lifestyle factors, and family history of diabetes were compared across the quintiles of the dietary pattern scores, described by percentage (%) for categorical variables and means ± standard deviations (SDs) for continuous variables, using Student's t-tests, Mann-Whitney U tests, or chi-square tests as appropriate. Cox regression analysis was used to calculate the hazard ratios (HRs) and 95% confidence intervals (CIs) for the risk of diabetes incidence across the quintiles of the dietary pattern scores, with the lowest quintile as the reference category. A trend test was performed by entering the median value of each quintile in the corresponding model. Model 1 adjusted for sex and age (<60 or ≥60 years old). Model 2 adjusted for region (urban or rural), educational level (primary school or below, middle school or high school, high school or above), household annual income (<20,000, 20,000-99,999, 10,000-19,999, ≥20,000 CNY/year), family history of diabetes, and BMI (standard and below, overweight and above), based on Model 1. Model 3 adjusted for weekly physical activity (<12.0 or ≥12.0 METs-h per week), smoking, drinking alcohol, drinking tea, eating spicy food (<1, 1-5, ≥6 days per week), and daily total energy intake (kcal/day), based on Model 2. Model 4 adjusted for serum creatinine (SCr) (normal in males: 57-97 µmol/L for 20-59 year olds, 57-111 µmol/L for 60-79 year olds; normal in females: 41-73 µmol/L for 20-59 year olds, 41-81 µmol/L for 60-79 year olds; beyond these ranges is abnormal), systolic blood pressure (SBP) (90-140 mmHg as normal, beyond this range abnormal), diastolic blood pressure (DBP) (60-90 mmHg as normal, beyond this range abnormal), triglyceride (TG) (0.56-1.70 mmol/L as normal, beyond this range abnormal), total cholesterol (TC) (2.84-5.68 mmol/L as normal, beyond this range abnormal), low-density lipoprotein cholesterol (LDL-C) (2.10-3.10 mmol/L as normal, beyond this range abnormal), and high-density lipoprotein cholesterol (HDL-C) (1.14-1.76 mmol/L as normal, beyond this range abnormal), based on Model 3. We also used stratified analyses to examine the relationship between the dietary patterns and the incidence of diabetes in 15 subgroups (sex, age, BMI, family history of diabetes, region, smoking status, alcohol status, frequency of spicy food per week, SCr, SBP, DBP, TC, TG, LDL-C, and HDL-C). All the data were preprocessed in Excel 2021 (edition 2404, Build 16.0.17531.20152) and imported into R version 4.2.3 for statistical analysis and plotting. The significance level was set at a p-value of <0.05 for two-sided testing. For the multiplicative interaction tests in the stratified analyses, statistical significance was defined as p < 0.006.
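As an illustration of the modelling strategy (quintile dummies with Q1 as reference, plus a trend test using quintile medians), a minimal sketch with the lifelines package is given below; the dataset, column names, and the reduced Model-1 covariate set are assumptions, not the authors' actual code (which was written in R).

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis dataset: follow-up time (years), diabetes indicator (0/1),
# a pattern quintile column, the continuous pattern score, and Model-1 covariates.
df = pd.read_csv("cmec_analysis_dataset.csv")

design = pd.get_dummies(df, columns=["dash_quintile"], drop_first=True)  # Q1 = reference
cols = ["follow_up_years", "diabetes",
        "dash_quintile_Q2", "dash_quintile_Q3", "dash_quintile_Q4", "dash_quintile_Q5",
        "female", "age_ge_60"]

cph = CoxPHFitter()
cph.fit(design[cols], duration_col="follow_up_years", event_col="diabetes")
cph.print_summary()  # exp(coef) gives the quintile-specific HRs with 95% CIs

# Trend test: replace the quintile dummies with the median pattern score of each quintile
trend = df.copy()
trend["quintile_median_score"] = trend.groupby("dash_quintile")["dash_score"].transform("median")
CoxPHFitter().fit(
    trend[["follow_up_years", "diabetes", "quintile_median_score", "female", "age_ge_60"]],
    duration_col="follow_up_years", event_col="diabetes",
).print_summary()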
Baseline Description of Each Pattern In Figure 2, three dietary patterns were extracted by the PCA method, explaining 8.76%, 8.11%, and 6.51% of the variation in the food variables. The first pattern, which we named the "dairy products-eggs pattern", was mainly characterized by a high intake of dairy products, eggs, fresh fruits, coarse grains, and soybean products and a low intake of rice. The second pattern, which we named the "meat pattern", was mainly characterized by a high intake of poultry, red meat and its processed products, and fish/seafood. The third pattern, which we named the "alcohol-wheat products pattern", was mainly characterized by a high intake of tea, alcohol, and wheat products. The slightly adjusted DASH pattern was characterized by a high intake of vegetables, fruits, legumes, whole grains, and dairy products and a low intake of red meat and its products and sodium (see Table S5 for the establishment rules). A slightly adjusted DASH pattern was formed by this standard (Table S5). The patterns' food intake distributions can be found in Figure S1. Of the 14,173 participants, 875 developed diabetes (incidence rate: 11.3/1000 person-years) over a 4.64-y follow-up on average. At baseline, the mean age of the study subjects was 48.6 years old, and 54.5% of the subjects were women. In Table 1, compared with the participants in the lowest quintile of the meat pattern, those with higher dietary pattern scores were more likely to be female, younger, current smokers, current drinkers, from rural areas, with a higher education and a higher household income; to engage in sufficient weekly physical activity; to be overweight; and to regularly eat spicy food. Participants with higher meat pattern scores were more likely to have abnormal levels of TC, LDL-C, and DBP (p < 0.01). Participants with higher scores in the dairy products-eggs pattern were more likely to be female and younger, to live in urban areas, to have higher educational attainment and a higher household income, to have a family history of diabetes, and to prefer smoking and drinking tea; these participants were thinner and drank alcohol and ate spicy food less. There were fewer abnormal levels of SBP, DBP, TC, and LDL-C among these people (p < 0.001). Elderly men who were more likely to smoke, drink tea and alcohol, and eat spicy food, who had a family history of diabetes, a lower educational level, insufficient weekly physical activity, and a lower BMI, were more likely to be observed in the alcohol-wheat products pattern. As well, people with higher alcohol-wheat products pattern scores tended to have abnormal levels of SBP, DBP, TC, TG, and LDL-C (p < 0.05). In the DASH pattern, the participants with a higher score were characteristically younger and thinner women with higher educational attainment and a higher household income in urban regions, who drank alcohol but did not smoke. They were more likely to have normal levels of SBP, DBP, TC, TG, and LDL-C (p < 0.01).
Discussion In this prospective cohort study of adults from southwestern China, we identified three main dietary patterns by PCA and assessed adherence to a slightly adjusted DASH pattern. The alcohol-wheat products pattern, characterized by a high intake of tea, alcohol, and wheat products, had a positive association with incident diabetes. Both the dairy products-eggs pattern, which was characterized by a high intake of dairy products, eggs, fresh fruits, coarse grains, and soybean products and a low intake of rice, and the meat pattern, characterized by a high intake of poultry, red meat and its processed products, and fish/seafood, were not significantly associated with diabetes risk. There was a negative association between diabetes events and the DASH pattern, rich in vegetables, fruits, legumes, whole grains, and dairy products and low in red meat and its products and sodium. For the DASH pattern, almost all studies have shown its effective role in reducing cardiovascular outcomes [17,18]. However, little evidence from prospective studies is available for diabetes. The CMEC has published an article related to this pattern in a cross-sectional study [19], which estimated that a slightly adjusted DASH pattern was a superior dietary recommendation to reduce cardiometabolic risks. We further confirmed this view for diabetes in this cohort under rigorous inclusion and exclusion criteria. A meta-analysis of six large prospective studies in 2013 established an inverse association between adherence to a DASH-like diet (rich in fruits, vegetables, and low-fat dairy products and low in SFA, total fat, cholesterol, refined grains, sweets, red meat, and salt) and risk of T2DM incidence (overall risk ratio (RR): 0.81; 95% CI: 0.72, 0.92) [20]. Similar results (RR: 0.80; 95% CI: 0.74, 0.86) were found in the most recent meta-analysis of 8 prospective studies (with DASH composed of 10 food groups: total grains or high-fiber grains, vegetables, fruits, total dairy or low-fat dairy, nuts, seeds, and legumes, and low intake of meat, fats or oil, and sweets) [21]. Recently, a national prospective cohort study in the USA showed that the DASH pattern (high intake of fruits, vegetables, nuts and legumes, low-fat dairy products, and whole grains and low intake of sodium, sweetened beverages, and red and processed meats) was statistically significantly associated with an increased risk of type 2 diabetes (Q1 vs. Q5) [22], which was similar to our findings.
A clinical trial showed that adults with overweight or obesity, hypertension, and prediabetes or type 2 diabetes had improvements in SBP, glycemic control, and weight over a 4-month period when combined with the DASH pattern, which suggests its potential in preventing diabetes [23]. The evidence from RCTs examining the effect of the DASH diet on glycemic control is limited to GDM [7]. In subgroup analysis, we found that the prevention of diabetes was most significant in women (Q5 vs. Q1: HR: 0.37; 95% CI: 0.26, 0.53), and there was an interaction with sex (p < 0.006). This may be one of the reasons why DASH diet studies are more often found in GDM. Moreover, we also observed that drinking alcohol status had a slight effect on the interaction between the DASH pattern and new-onset diabetes (p < 0.05). In the ordinary DASH pattern used in clinical trials [24,25], alcohol consumption is limited because of its effect on raising blood pressure. However, the role of alcohol consumption in studies of diabetes and the DASH pattern is still unclear. Further studies should be undertaken to analyze the relationship between alcohol consumption and the DASH pattern. The alcohol-wheat products pattern is a unique diet pattern; we did not find any similar patterns in other studies. A cross-sectional study on the southeast coast of China found that a high-alcohol diet pattern, characterized by high intakes of alcohol, rice, wheat products, and eggs and derived using K-means clustering analysis on the basis of the percent contribution to total energy intake, was not associated with type 2 diabetes (adjusted odds ratio (OR): 1.5; 95% CI: 1.0, 2.4) [26]. A male-specific diet pattern, characterized by a high intake of wheat products, coffee, juice, and fried dough foods and a low intake of fruits, poultry, fresh-water fish, and vegetables, is associated with the risk of poor glycemic control in Chinese diabetic adults (Q4 vs. Q1, OR: 2.69; 95% CI: 1.76, 4.10) [13]. However, we did not find any statistical significance in the sex subgroup. It seems that we should pay attention to the particular intakes within dietary patterns to explain the results. Clinical trials have confirmed that tea is an effective intervention in diabetes [27]. In the alcohol-wheat products pattern, the tea intake in Q5 corresponds to about one cup a day (3.77 g/day), which is within a moderate tea-drinking range. For alcohol, the relationship between alcohol and the development of type 2 diabetes follows a U-shaped curve [13,28]. Despite the lack of accepted limits for moderate drinking, large prospective epidemiological studies have shown that consumption of 5-20 g of alcohol a day is associated with a reduced risk of developing diabetes [27,[29][30][31][32]. In our study, only the alcohol intake of Q5 belongs to the moderate level (mean: 16.79 g/day). In view of the daily intakes of tea and alcohol in grams, this may be the reason why the HR in Q5 is lower than that in Q4 (Q4 vs. Q1: adjusted HR in Model 4: 1.50; 95% CI: 1.20-1.86; Q5 vs. Q1: adjusted HR in Model 4: 1.32; 95% CI: 1.04-1.66).
Moreover, as is well known, fresh fruits are beneficial for the prevention of DM [33], and wheat products, as high-glycemic-index (GI) foods, are positively correlated with insulin resistance, which is one of the important contributors to the development of diabetes [34]. In the alcohol-wheat products pattern, the average intake in Q5 was low for fresh fruits (93 g/day) and high for wheat products (115.32 g/day). As well, high intakes of salt-rich preserved vegetables and soybean products were observed in this pattern. Considering that the people scoring high on this pattern tended to be older, to smoke, and to have a lower educational level and insufficient weekly physical activity, this dietary pattern appears to be a risk factor for diabetes in the absence of adequate nutrient intake. When it comes to the dairy products-eggs pattern, significance was only found in the subgroup analyses of females and rural regions (Table S6). The studies about dairy products are contradictory and highly controversial. Moreover, few studies report the relationship between eggs and diabetes. One cohort study in Iran discovered no relationship between a healthy diet pattern derived from PCA (with higher loads of whole grains, vegetables, and dairy products) and regression or progression from pre-DM [35]. Another study in Spain found no association between an eggs-and-dairy pattern (characterized by high intake of eggs, dairy products, fats, and red meat and low intake of salty snacks and soup) and diabetes-related metabolic markers in women [36]. We assume that the moderate intakes of eggs and dairy products (49.17 g/day and 173.21 g/day), resulting in a moderate protein intake that does not cause a rise in blood glucose, may explain the unremarkable results, even though the overall combination seemed healthy. Our results on the meat pattern were consistent with similar meat patterns from some cohort studies in Asia (high intakes of fish, poultry, or red meat and its processed products and other staples as well as fresh fruit and vegetables) [37][38][39]. Nevertheless, there are still some studies that differ from ours. A 14-y follow-up multiethnic study in the United States reported that a fat-and-meat dietary pattern was significantly associated with incident diabetes [40]. It is widely believed that red meat and its processed products are positively correlated with the incidence of T2DM [4]. Thus, the meat pattern should be studied in greater detail and depth in the future.
There are several limitations to our study. Firstly, our food classifications are crude and incomplete. Some food items related to diabetes, such as nuts, are not included in the FFQ. However, we will supplement this information in future studies. Secondly, we did not assess the effects of cooking styles and other tastes on eating patterns, which may lead to bias in the results. In addition, because the PCA method depends on the available data, the ability to reproduce the results in other study populations is limited. Furthermore, we were not able to distinguish between type 1 and type 2 diabetes; given the age of our subjects at baseline, the cases observed were more likely to be type 2 diabetes. There may have been measurement errors in the estimation of daily intake values, which may have resulted in weaker associations. Last but not least, dietary investigations were not performed at every follow-up survey, so changes in dietary intake were not considered in this study. However, our study has some strengths. First of all, this is the first prospective cohort study in the CMEC to explore the relationship between dietary patterns and diabetes, which adds robust evidence to the results of the previous cross-sectional study [19]. Then, we excluded people with any diseases that could cause reverse causation, providing stronger evidence for causality compared with other cohort designs. Moreover, we not only established the a posteriori dietary patterns but also included the DASH pattern, which has seldom been examined in other studies in Asia. Finally, because Chongqing's local cuisine is characterized by its spiciness, our study took into account the frequency of spicy food as one of the important covariates, which differs from other studies on food patterns. Conclusions In conclusion, our study identified three dietary patterns by the PCA method and the established, slightly adjusted DASH pattern. Among them, the DASH pattern showed an inverse correlation with diabetes, while the alcohol-wheat products pattern was positively associated with diabetes. The dairy products-eggs pattern and the meat pattern had no relationship with new-onset diabetes. Figure 2. Factor loading matrix of dietary patterns by 18 food groups derived from a principal component analysis of 14,716 participants from the baseline survey of the CMEC study. Figure 3. Forest map for the association between patterns and diabetes stratified by age, sex, BMI, smoking, alcohol, and spicy food (A) and SCr, SBP, DBP, TC, TG, LDL-C, and HDL-C (B). Table 1. Socio-demographic and lifestyle characteristics by quintile category (Q1 and Q5) of the PCA-derived meat, dairy products-eggs, and alcohol-wheat products dietary pattern scores and DASH pattern scores and their respective simplified dietary pattern scores in the CMEC study.
Table 1 (continued). Notes: The pattern score was divided into five categories from low to high (Quintile 1-Quintile 5, Q1-Q5). † p-value is based on Student's t-tests, Mann-Whitney U tests, or chi-square tests, as appropriate, for two-sided testing, and p for trend is based on a linear regression analysis for the pattern score variable. * Pattern score, cohort time, age, and total energy intake are shown as numeric variables in Table 1, described by means (M) ± standard deviations (SDs). (a) Elders are defined as age at baseline ≥60 years old. (b) Sufficient weekly physical activity, including work and leisure time, is defined as ≥12.0 METs-h per week; METs, metabolic equivalents. (c) Regular spicy food intake is defined as eating spicy food ≥6 days per week, in consideration of the prevalence of and ritual preference for spicy food in Chongqing. (d) Overweight and above is defined as BMI ≥24 kg/m², as there are few obese people (634, 4.5%). Table 2. Hazard ratios (HR) between dietary patterns and diabetes using multi-level mixed-effects Cox models.
2024-05-29T15:22:12.427Z
2024-05-27T00:00:00.000
{ "year": 2024, "sha1": "13231a5ef727ff7e98d194fa0ab8d21454b9f9dc", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6643/16/11/1636/pdf?version=1716791275", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "aa36ae29f71a754382d538ba3207007705010113", "s2fieldsofstudy": [ "Medicine", "Agricultural and Food Sciences", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
4955453
pes2o/s2orc
v3-fos-license
Human Milk Oligosaccharides and Associations With Immune-Mediated Disease and Infection in Childhood: A Systematic Review Complex sugars found in breastmilk, human milk oligosaccharides (HMOs), may assist in early-life immune programming and prevention against infectious diseases. This study aimed to systematically review the associations between maternal levels of HMOs and development of immune-mediated or infectious diseases in the offspring. PubMed and EMBASE databases were searched (last search on 22 February 2018) according to a predetermined search strategy. Original studies published in English examining the effect of HMOs on immune-mediated and infectious disease were eligible for inclusion. Of 847 identified records, 10 articles from 6 original studies were included, with study quality ranging from low to high. Of three studies to examine allergic disease outcomes, one reported a protective effect against cow's milk allergy (CMA) by 18 months of age associated with lower lacto-N-fucopentaose (LNFP) III concentrations (OR: 6.7, 95% CI 2.0-22). Another study found higher relative abundance of fucosyloligosaccharides was associated with reduced diarrhea incidence by 2 years, due to (i) stable toxin-E. coli infection (p = 0.04) and (ii) "all causes" (p = 0.042). Higher LNFP-II concentrations were associated with (i) reduced cases of gastroenteritis and respiratory tract infections at 6 weeks (p = 0.004, p = 0.010) and 12 weeks (p = 0.038, p = 0.038) and (ii) reduced HIV transmission (OR: 0.45; 95% CI: 0.21-0.97) and mortality risk among HIV-exposed, uninfected infants (HR: 0.33; 95% CI: 0.14-0.74) by 24 months. Due to heterogeneity of the outcomes reported, pooling of results was not possible. There was limited evidence that low concentrations of LNFP-III are associated with CMA and that higher fucosyloligosaccharide levels protect infants against infectious disease. Further research is needed.
Keywords: oligosaccharides, human milk, breastfeeding, infants, allergy and immunology, respiratory tract infections, diarrhea, HIV INTRODUCTION Human milk contains a wide range of immunologically active components with the potential to protect against disease (1, 2). Research has emphasized the importance of human milk as an influential early-life exposure for the development of a healthy immune system; however, the mechanisms for this are still not clearly understood. Some studies have shown positive immunological effects and anti-infectious properties of human milk, particularly in the prevention of respiratory and gastrointestinal infections (3, 4). Other research suggests that breastfeeding influences the intestinal microbiome, which may in turn influence autoimmune and allergic disease development (4). The association between human milk and allergic disease is controversial, with numerous studies reporting inconsistent results (5). A possible explanation for the contradictory findings may lie in the diverse composition of bioactive factors present in human milk (6), and even when specific milk components are addressed, there may be significant differences in both quantity and variety between human milk from different mothers. Human milk oligosaccharides (HMOs) are a key constituent of human milk. They are a structurally and biologically diverse group of complex indigestible sugars (7, 8). To date, more than 200 different oligosaccharides have been identified, varying in size from 3 to 22 monosaccharide units (9). The most common HMOs are the neutral fucosylated and non-fucosylated oligosaccharides (10-13). The quantity and structure of these HMOs differ significantly among women and are dependent upon Secretor and Lewis blood group status (14, 15). Mutations in the fucosyltransferase 2 (FUT2) secretor gene result in human milk that is deficient in α1,2-linked fucosylated oligosaccharides (12). Human milk oligosaccharides provide no direct nutritional value to the infant, and there is only minor absorption across the intestinal wall, with approximately 1-5% detectable in serum and urine (8). It is proposed, instead, that HMOs have many different roles to play for the infant. They are preferred substrates for several species of gut bacteria and act as prebiotics, promoting the growth of beneficial intestinal flora and shaping the gut microbiome, thereby affecting immune responses (8, 16, 17). Short-chain fatty acids generated by the gut microbiome breaking down HMOs are critical for intestinal health. They further favor the growth of benign gut commensals along with providing nourishment for the epithelial cells lining the intestine (18). HMOs also directly modulate host-epithelial responses, favoring reduced binding of pathogenic microbiota to the gut epithelium. Gut microbiota composition differs between formula-fed and breastfed infants, possibly due to the absence of HMOs in infant formula milk (19). There is also evidence that HMOs act as decoy receptors, inhibiting the binding of enteric pathogens to prevent infection and subsequent illness (20).
Furthermore, HMOs provide a selective advantage for colonization by favorable bacteria, thereby inhibiting the growth of pathogenic species. Despite substantial interest in this area, to date no systematic review has been undertaken to assess the effects of HMOs on disease prevention. This systematic review aims to identify and summarize the current evidence of the associations between HMOs and immune-mediated or infectious diseases in early childhood. Establishing a clear link between HMOs and disease outcomes may lead to intervention strategies. Search Strategy PubMed and EMBASE electronic databases were systematically searched (last search date 22 February 2018) for original studies examining the effect of HMOs on childhood immune-mediated and infectious disease outcomes. The search strategy included MeSH and free-text terms for HMOs, allergic disease, immune-mediated disorders, and clinical infections (see Table E1 in Supplementary Material). All original studies published in English were included. Papers that did not report original results, or outcome data of interest, were excluded. Titles and abstracts of papers were screened by two authors (Alice M. Doherty and Xin Dai) for inclusion. Reference lists of primary articles and related reviews were checked to identify any other studies appropriate for inclusion. Studies assessed as eligible, potentially eligible, or unclear were retrieved in full text where available. Any uncertainty concerning inclusion of specific studies was resolved by discussion with a third author (Adrian J. Lowe). Outcomes of interest were the development of any immune-mediated diseases (allergic or autoimmune disorders) or clinical infections in childhood. Data Extraction Study characteristics were extracted and tabulated from each of the included studies. The data extracted included the following: author's name, date of publication, study design, location, population, exposure classification, outcome definitions, effect sizes, confounders and tests for potential effect modification, and potential sources of bias. Quality Assessment The Newcastle-Ottawa scale was used to assess the quality of individual studies (21). The quality assessment was performed independently by two authors (AD, XD) to meet PRISMA guidelines. Each study was scored using a star (*) method to report quality based on the selection of the sample, comparability, and the ascertainment of the exposure or outcome measures for case-control or cohort studies, respectively. Included studies were graded on total score: unsatisfactory = 0-3; low = 4-5; moderate = 6-7; and high = 8-9 (see Table E2 in Supplementary Material). Studies were not excluded based on quality assessment. Statistical Analysis Where two or more papers reported the association between the same HMO and outcome, we pooled results using meta-analysis. The I² statistic was used to document heterogeneity of study results, and random-effects models were used where there were widespread differences between studies (I² > 80%). RESULTS The search identified 847 articles (see Figure 1). After title and abstract screening, 48 articles were selected for full-text assessment. In total, 10 records were included (see Table E3 in Supplementary Material for reasons for exclusion), from 6 original studies.
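The pooling strategy described in the Statistical Analysis subsection (random-effects models with the I² heterogeneity statistic) was ultimately not applicable because of outcome heterogeneity, but a minimal sketch of one common approach (DerSimonian-Laird pooling of log odds ratios) is shown below; the example numbers are hypothetical, not data from the included studies.

import numpy as np

def random_effects_pool(log_or, se):
    """DerSimonian-Laird random-effects pooling of log odds ratios, with I^2."""
    log_or, se = np.asarray(log_or), np.asarray(se)
    w = 1.0 / se**2                                   # fixed-effect weights
    fixed = np.sum(w * log_or) / np.sum(w)
    q = np.sum(w * (log_or - fixed) ** 2)             # Cochran's Q
    k = len(log_or)
    i2 = 100.0 * max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    tau2 = max(0.0, (q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1.0 / (se**2 + tau2)                     # random-effects weights
    pooled = np.sum(w_star * log_or) / np.sum(w_star)
    se_pooled = np.sqrt(1.0 / np.sum(w_star))
    ci = np.exp([pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled])
    return np.exp(pooled), ci, i2

# Hypothetical log odds ratios and standard errors from two studies of the same HMO/outcome
or_pooled, (lo, hi), i2 = random_effects_pool([np.log(0.6), np.log(0.8)], [0.25, 0.30])
print(f"pooled OR = {or_pooled:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.0f}%")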
Three articles reported on allergic disease outcomes (22-24), four articles on diarrheal disease outcomes, all from a single study (11, 25-27), one article on respiratory and gastrointestinal tract infections (28), and two articles on HIV outcomes from a single study (29, 30). Study Characteristics Three prospective cohort studies assessed associations with allergic disease outcomes, all from Scandinavia, with sample sizes ranging from 20 (22) to 266 (23) mother-infant pairs (see Table 1). One study was population based (22), while the other two cohort studies sampled participants with parental allergic disease (23, 24). A prospective cohort of 93 mother-infant pairs conducted in Mexico City investigated associations between HMOs and infectious diarrhea (25). One prospective cohort study, of 73 participants, examined associations between HMOs and respiratory tract infections and gastroenteritis (28). The two publications examining associations with HIV outcomes were from one nested case-control study conducted in Lusaka, Zambia (29). Bode et al. reported HIV transmission from HIV-infected mothers to their exposed infants (29), and Kuhn et al. recorded mortality rates among HIV-exposed infants (30). Participants comprised 103 HIV-infected and 143 HIV-exposed but uninfected (HEU) children, randomly selected from an early weaning trial comprising 958 HIV-infected mother-infant pairs (31). HMO Assessment Human milk oligosaccharides were quantified by high-performance liquid chromatography in all included studies. Sjögren and colleagues collected milk samples 2-4 days postpartum and reported exposure as median concentrations (nmol/mL) of nine common neutral oligosaccharides in colostrum (see Table E4 in Supplementary Material). Outcome Assessment Allergic Disease Sjögren et al. characterized children as "allergic" if clinical symptoms of allergic disease at 18 months of age were present and "non-allergic" if no clinical symptoms of allergic disease were apparent, along with a negative skin prick test (SPT) (22). Allergic disease was defined as bronchial asthma, allergic rhinoconjunctivitis, atopic eczema, and food allergy, although it was not clear if this was based on parent report or clinical examination. Sprenger measured the associations with any physician-diagnosed allergic disease (food allergy, eczema, asthma, and allergic rhinitis) and/or IgE-associated disease (any allergic disease and sensitization as assessed by SPT) at 2 and 5 years of age (23). Seppo et al. measured outcomes of CMA by 18 months of age (24). Cases of CMA were confirmed by a positive oral food challenge at a median of 6 months of age. Diarrhea Outcomes reported were all diarrhea episodes due to stable toxin (ST)-E. coli infection and diarrhea as a result of all causes (25). Diarrheal episodes were determined by the study physician. All diarrheal episodes were assessed using a standardized scoring system (32, 33). ST-E. coli related diarrhea was tested in a laboratory according to previously published methods (34). Respiratory and Gastrointestinal Tract Infections Outcomes were cumulative occurrences of either (a) respiratory problems, consisting of upper respiratory infections (runny nose or cold), cough, or pneumonia; (b) gastrointestinal tract problems, which included vomiting, diarrhea, or colic; and (c) ear infections; by 2, 6, 12, and 24 weeks (28). Associations with ear infection outcomes were reported as not significant. HIV Bode et al. (29) reported the outcome measure as HIV transmission postpartum.
The same study population was used to measure mortality in infants exposed to HIV infection during and after breastfeeding (30). HIV infection was established by heel-stick blood samples collected first at birth, then at 1 week of age, then monthly to 6 months of age, and subsequently every 3 months to 24 months of age (29). HIV DNA was tested by polymerase chain reaction. Causes of death were ascertained via verbal autopsy and a review of medical records. Death after weaning was defined as such if breastfeeding had ceased independently of events before death (30). Study Quality The included studies ranged from low to high quality (see Table E5 in Supplementary Material). Selection of participants was adequately reported for all included papers. The ascertainment of HMO exposure was based on laboratory assays for all included studies. Allergic disease outcomes were determined via parental reports in one study (22) and medical diagnosis for two studies (23, 24) (28). HIV status was confirmed via blood samples collected at several age intervals (29). Loss to follow-up was only reported in one study (28). Four of the six original studies considered possible bias as a result of confounding or effect modification (23, 24, 28, 29). One of the papers investigating allergic disease outcomes adjusted for potential confounders (siblings, delivery mode, gender, allergic parents, and gestational age) and tested for interactions (FUT2 status and delivery mode; and FUT2 status and siblings) (23), while another study adjusted for the age of the infant, maternal atopy, duration of lactation, and Secretor status (24). Stepans et al. adjusted for breastfeeding behavior (28). Measures of association between HMOs and HIV transmission were adjusted for two potential confounders identified in the study, white blood cell count and human milk HIV RNA viral load at 1 month (29). Potential confounders and interaction terms for the association between HMOs and diarrheal diseases were reported as not significant; however, the authors did not discuss what covariates were tested (25). Sjögren et al. did not discuss confounding (22). Allergic Disease One of the three allergic disease studies found evidence of an association between HMOs and allergic disease (22, 24). Sjögren reported a weak trend toward higher total concentrations of neutral oligosaccharides in the breastmilk consumed by infants who developed allergic disease by 18 months (p = 0.12) (22) (see Table 2). Seppo observed that infants who consumed breastmilk with low lacto-N-fucopentaose (LNFP) III concentrations (<60 nM) had an increased likelihood of CMA compared with higher concentrations of LNFP-III (OR 6.7, 95% CI 2.0-22) (24). Seppo also noted that infants who received human milk with lower levels of LS-tetrasaccharide c, disialyllacto-N-tetraose (DSLNT), and 6′-sialyllactose were more likely to develop atopic dermatitis. Although Sprenger reported a significant interaction with mode of delivery (C-section or vaginal birth) (p = 0.016) (23), an adjusted regression model found no statistically significant association between oligosaccharide status and allergic disease, regardless of mode of delivery. Diarrhea It was found that infants with diarrhea due to ST-E. coli infection had a significantly lower mean fucosyloligosaccharide ratio than asymptomatically infected infants or uninfected infants (25) (see Table 3). In addition, lower fucosyloligosaccharide ratios were associated with more severe diarrheal disease due to any cause.
Infants who developed moderate to severe diarrhea of any cause were fed with human milk that had lower fucosyloligosaccharide ratios than infants with no symptoms (25).

Respiratory and Gastrointestinal Tract Infections
Higher levels of LNFP-II in colostrum were associated with reduced respiratory infections by 6 and 12 weeks, after controlling for breastfeeding behavior (see Table 4). Increasing LNFP-II concentration was also associated with reduced gastrointestinal illness in infants at 6 and 12 weeks (26). No significant results were reported at 24 weeks postpartum.

HIV
There was a non-significant trend toward a reduction in HIV transmission risk postpartum (29). An association was found between higher total HMO concentration and reduced risk of HIV transmission after adjustment for maternal CD4 cell count and human milk HIV RNA viral load at 1 month (see Table 5). Assessment of individual oligosaccharides found that non-3′-sialyllactose HMOs at concentrations above the median were associated with reduced HIV transmission (OR 0.38; 95% CI: 0.17-0.82). No other significant reductions in HIV transmission were reported for the other oligosaccharides; instead, higher concentrations of 3′-sialyllactose oligosaccharides were associated with an approximately 2-fold increased risk of transmission (adjusted OR: 2.21; 95% CI: 1.04-4.73) (29). For HEU infants, higher HMO concentrations were found to reduce mortality during, but not after, breastfeeding for both 2-linked fucosylated and non-2-linked fucosylated oligosaccharides, after controlling for maternal CD4 cell count and human milk HIV RNA viral load at 1 month (30). No significant associations between HMOs and mortality were observed for HIV-infected children.

Discussion
The six studies included in this systematic review, reported across 10 articles, provide only limited evidence that HMOs are associated with allergic and infectious diseases in early life. No studies were published on the associations between HMOs and other immune-mediated conditions. In terms of allergic disease, only one study showed that low LNFP-III concentrations were associated with an increased risk of CMA. For infectious disease, the evidence was stronger although still limited, with one study reporting that increased maternal levels of fucosyloligosaccharides were associated with reduced risk of diarrhea up to 2 years of age, as well as of respiratory and gastrointestinal tract infections at 6 and 12 weeks of age. Evidence for an association between high HMO concentration and HIV outcomes was reported in two studies. In infants with an HIV-positive mother, total HMO concentrations above the median (≥1.87 g/L) were associated with a reduced risk of HIV infection. In addition, high concentrations of HMOs during breastfeeding were associated with a lower mortality rate for HIV-exposed, uninfected (HEU) infants but not for HIV-exposed, infected infants. Despite the recent interest and discussion surrounding breastmilk oligosaccharides and their potential impact on disease outcomes, very little original research has been conducted in this field. Furthermore, there are a number of important limitations with the available evidence, which prohibit strong conclusions being drawn at this time. Where two or more studies examined associations with similar outcomes, different measures of association were used (odds ratios versus mean differences in maternal HMO concentration between affected and unaffected infants), preventing any pooling of the results in a meta-analysis.
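Because the included papers reported effects either as odds ratios or as mean differences in maternal HMO levels, the two cannot be combined directly. As a minimal illustration of how each measure is typically derived (all counts and concentrations below are hypothetical and not taken from the reviewed studies), consider the following sketch:

```python
# Illustrative only: how the two measures of association mentioned above are
# typically derived. All counts and concentrations below are hypothetical and
# are not taken from any of the reviewed studies.
import math
from statistics import mean

# Odds ratio with a Wald 95% confidence interval from a 2x2 table
# (rows: exposure below/above the median HMO concentration; columns: outcome).
a, b = 12, 8    # low-HMO group: with outcome, without outcome
c, d = 5, 25    # high-HMO group: with outcome, without outcome

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
ci_low = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f}")

# Mean difference in maternal HMO concentration (nmol/mL) between
# affected and unaffected infants.
affected = [410.0, 385.0, 402.0, 390.0]
unaffected = [455.0, 470.0, 440.0, 465.0]
print(f"Mean difference = {mean(affected) - mean(unaffected):.1f} nmol/mL")
```

The two statistics are on different scales (a ratio of odds versus an absolute concentration difference), which is why the review could not pool them in a single meta-analysis.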
Most of the included studies measured disease outcomes in very small samples (between 20 and 266 mother-infant pairs), both affecting the precision of the effect estimates and limiting the statistical power to detect important associations. There were disparate HMO exposure classifications between the studies, making it difficult to compare the results of each study. Sprenger et al. measured exposure to HMOs as the presence or absence of FUT2-dependent oligosaccharides (23). Using this dichotomy assumes that infants who consume high concentrations of FUT2-dependent oligosaccharides are just as likely to develop allergic disease as those exposed to low concentrations. Newburg et al. defined exposure to HMOs in terms of fucosylated oligosaccharide ratios (concentrations of α1,2-linked fucosyloligosaccharides compared with oligosaccharides that contain only α1,3- and α1,4-linked fucose), thus grouping a range of HMOs (25), whereas the remaining four studies measured associations with several specific common HMOs (22, 24, 28, 29). Furthermore, most of the techniques used to measure levels of HMO are unable to quantify absolute levels. Only three papers discussed using an internal standard in the HMO quantification process (23, 24, 29). With over 200 structurally unique HMOs, it is possible that other important oligosaccharides not considered in the six studies may have important biological effects. It remains possible that the included studies have not measured the important HMO for each of the assessed outcomes. However, with so many different forms of HMOs, testing associations for each HMO results in multiple comparisons, raising the risk of spurious associations. As this area is novel, such exploratory "hypothesis generating" studies are still needed, but care should be taken not to over-interpret any one finding in the absence of replication across cohorts. Potential confounding was accounted for differently in each study. Four of the six original studies adjusted for potential confounders (see Figures E1-E3 in Supplementary Material) (23, 24, 28, 29). The remaining articles failed to acknowledge possible confounding (22) or stated that confounding was not significant, with no description of how this was confirmed (25). Two of the allergic disease studies adjusted for a range of potential confounders in their analysis, including siblings, gender, allergic parents/maternal atopy, and gestational age, which largely appears appropriate (23, 24). Bode et al. noted that associations between advanced maternal HIV infection, including low maternal CD4 counts and viral load, and higher HEU mortality were confined to breastfed children; thus CD4 counts and viral load were controlled for in the regression model (29). Stepans et al. controlled for breastfeeding behavior in the analysis, defined as the proportion of days breastfed (28), but it is unlikely that the duration of breastfeeding would confound the relationship between HMO consumption and disease outcomes, as maternal levels of HMOs were measured for all women at a single fixed time window, 2 weeks after recruitment. As all of the available evidence is derived from observational studies (cohort and nested case-control), it is subject to potential unmeasured confounding. It is possible that a range of factors modify the effects of HMOs, and this has been examined in some of the included studies.
Delivery mode was reported to be a significant modifier of the association between the presence of FUT2-dependent oligosaccharides and allergic disease development, although the strata-specific effects were quite weak (23). It is possible that breastfeeding duration is an effect modifier; longer breastfeeding duration may result in a larger amount of HMOs consumed by the infant, thereby affecting outcomes. Similarly, the total volume of breastmilk consumed may also vary between infants and could modify these associations. This possibility has not been examined previously. Follow-up was complete for five of the six studies (22-25, 29). However, it is not clear whether the authors only reported results on participants with outcome data. Stepans et al. reported that only 34 participants from the original sample of 73 (46%) remained after 24 weeks, leading to a high risk of attrition bias. The generalizability of the results may be affected by population homogeneity in the regions in which participants were recruited. As the proportion of women with FUT2 gene mutations varies across ethnic populations, a lack of genetic diversity among study participants may result in associations that are unique to a specific population. For example, in the study by Newburg et al., all participants secreted fucosylated HMOs (no mutations in the FUT2 gene in the Mexican population). This meant that there was less variation in the levels of FUT2-dependent oligosaccharides, and hence higher fucosyloligosaccharide ratios were reported than would potentially be found in other ethnic populations. Therefore, these results cannot be directly translated to European populations, where there is a higher proportion of non-secretors (35, 36). An additional limitation of these studies is that, while the Secretor status of the mother is often known, based on expressed breastmilk HMOs, the status of the infant has not been measured. Secretor status of the child may also modify the associations between HMOs and clinical outcomes. However, infant Secretor status is difficult to measure. This review has a number of strengths and limitations. We prospectively registered the review, searched multiple databases, used duplicate study selection, listed reasons for excluding studies, and quality-assessed the included papers. While only published works were included in the predetermined search strategy, additional sources in the form of conversations with colleagues were used to identify possible unpublished manuscripts. Publication bias may have influenced the results of this review, although this seems unlikely given the preponderance of negative associations that have been reported to date. While our search strategy allowed for their inclusion, none of the published papers reported outcomes beyond infancy or reported autoimmune disease outcomes, which are areas for future research.

Conclusion
We identified limited evidence to support a possible role for HMOs in influencing cows' milk allergy, diarrheal diseases, respiratory and gastrointestinal tract infections, and HIV infection in early life. Despite these positive findings, the evidence base is very limited and has numerous issues, with the quality of the included studies varying from low to high. Further research into this area is needed, using larger observational studies with appropriate measures of outcomes and exposures and better control for confounding.
Future research would benefit from considering multiple HMO exposures, which could be grouped appropriately, either by similar biological effects or by patterns of associations. Improved understanding of the complex chemical structures of oligosaccharides in milk may allow for the design of intervention studies to increase exposure to specific HMOs, which may in turn reduce the burden of these conditions.
2018-04-20T13:04:50.527Z
2018-04-20T00:00:00.000
{ "year": 2018, "sha1": "864719a34c98551fb7cf714c07a8ee250107c43e", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fped.2018.00091/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "864719a34c98551fb7cf714c07a8ee250107c43e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
214701618
pes2o/s2orc
v3-fos-license
The Effects of Ethical Factors in Financial Statement Examination: Ethical Framework of the Input Process Output (IPO) Model in Auditing System Basis

This study attempts to analyze ethical factors within the framework of the IPO (input-process-output) model as a proxy of audit quality. In more detail, this study creates a model framework that analyzes the influence of ethical factors, consisting of integrity, objectivity, and independence, on audit quality with specific variables. The study analyzed responses from 220 auditors working in public accounting firms in major cities in Java, Indonesia, using linear regression techniques. The results show that the integrity variable has a positive and significant effect on output, and the objectivity variable has a significant effect on input, process, and output. Meanwhile, the independence variable has not been empirically proven to have a significant effect on audit quality. These results emphasize the importance of increasing auditor independence in the conduct of audit duties, and theoretically demonstrate the effect of abstract ethical values on audit quality in empirical testing.

Introduction
The public accounting profession is a stressful one in connection with the professional pressures and ethical attitudes that auditors must hold in carrying out their duties (Espinosa-Pike & Barrainkua, 2016; Tian & Peterson, 2016; Clayton & van Staden, 2015; Sunyoto et al., 2019). The implementation of auditing tasks is always supervised and must be in line with applicable regulations from the beginning to completion (Knechel & Salterio, 2016). In fact, until the audit report is verified by competent institutions and stakeholders, auditors must be able to behave ethically and show that they have implemented the applicable accounting standards correctly. Knapp (1985) and Johari et al. (2019) found that auditors are vulnerable to environmental pressure and intimidation by clients in a conflict or dispute about accounting issues, including the need to make adjustments to financial statements, the appropriateness of accounting principles applied by clients, or the adequacy of financial statement disclosures. Goldman and Barlev (1975) explain that a common form of intimidation is the threat of auditor replacement. Thus, it is reasonable to assume that client pressure on the auditor at the time of the audit can negatively affect the auditor's work and hence audit quality. There have been many accounting scandals in which companies hid engineered figures in their financial statements, involving top auditors (Peecer et al., 2007). This is related to the low level of audit quality caused by low professional knowledge and ethics of auditors, as well as high client pressure (Chandrarin & Subiyantoro, 2019; Sunyoto et al., 2019). In Indonesia, audit quality statistics show that of the forty violations committed by public accountants during 2007-2009, seventeen were related to audit quality (Agoes, 2012). Specifically, in 2007 there were dozens of Public Accounting Firms or Public Accountants whose licenses were frozen by the Minister of Finance. The suspension was related to violations of the Professional Standards of Public Accountants. This is related to auditors' poor understanding of Financial Accounting Standards and SPAP (Suseno, 2013). Carcello and Nagy (2004) explain that financial statement manipulation is a problem often faced by the public accounting profession.
Based on the above arguments, research on audit quality in relation to the ethics held by auditors in financial audits needs to be done, especially regarding the internal input factors of audit quality in the form of professional knowledge, and auditor ethics as external input factors of audit quality. This study attempts to analyze ethical factors within the framework of the IPO (input-process-output) model as a proxy of audit quality. In more detail, this study creates a model framework that analyzes the influence of ethical factors consisting of integrity, objectivity, and independence on audit quality with specific variables. Conceptually, the originality value offered by this study is its attempt to empirically test an abstract value, namely ethical factors, on audit quality. Basically, ethics consists of objective abstract values that are practiced and perceived subjectively by each individual, although these perceptions tend to converge on what is considered unethical. On the other hand, audit quality is an aspect that is easily assessed in financial audit procedures. This has become a widely discussed topic in previous studies; however, the formulation of specific models for testing and proving the usefulness of such relationships has not been widely discussed.

Effect of Auditor Integrity on Audit Quality
Ardelean (2013) explains that auditor ethics is a basic component of perceptions about the integrity, objectivity, and independence of auditors in providing audit opinions. Al Momani and Obeidat (2013) found that auditor integrity, objectivity, and independence had a positive effect on the auditor's ability to detect fraud. Goodwin (1999) found that the integrity of information sources is the most significant variable for determining sensitivity to audit information sources. Intakhan and Ussahawanitchakit (2009) show that audit independence has a positive effect on audit quality. O'Regan (2004), Ismail et al. (2019), and Kertarajasa et al. (2019) explain that audit quality is a picture of good performance and positive characteristics of auditors. A good auditor's performance cannot be separated from the existence of good input and a good audit process. In terms of audit quality inputs, Watt and Zimmerman (1981) define audit quality as consisting of two components: the auditor's competence to detect errors or irregularities in accounting records, and the auditor's independence to report errors or irregularities in the auditor's opinion. Based on this description, the proposed hypotheses are as follows:
H1a. Integrity has a positive and significant effect on audit input.
H1b. Integrity has a positive and significant effect on the audit process.
H1c. Integrity has a positive and significant effect on audit output.
Tangpinyoputtikhun and Thammavinyu (2010) found that CPA auditors who have high personal ethics produce high audit quality. This is supported by Intakhan and Ussahawanitchakit (2010), who found that auditors with high ethical reasoning tend to pay attention to the public interest by providing high-quality audit reports to achieve audit effectiveness and better audit performance. Intakhan and Ussahawanitchakit (2009) found that auditors with a high ethical orientation tend to act more independently and effectively to produce high audit quality.
Barton (2005) suggests that to develop and maintain the credibility, accountability, transparency, and accuracy of financial statements, the company's financial statements must be examined by competent and independent auditors. Competence is related to knowledge (technical knowledge) and experience (practical knowledge). Frankel, Johnson, and Nelson (2002) state that the provision of non-audit services can impair auditor independence by increasing incentives to give in to client pressure. In general, auditors are considered more accepting of client desires when subject to client pressure (Farmer, Rittenberg, and Trompeter, 1987; Lord, 1992; Hackenbrack and Nelson, 1996).

Effect of Auditor Objectivity on Audit Quality
In terms of the audit process, GAO (2003) defines audit quality as a measurement of the audit process in accordance with generally accepted auditing standards (GAAS) to provide reasonable assurance that the audited financial statements and disclosures have been presented in accordance with generally accepted accounting principles (GAAP) and contain no material misstatement caused by errors or fraud. Thus, audit quality in this study is defined as a combination of a good systematic inspection process (in accordance with GAAS) and good professional judgment from competent and independent auditors, to produce a high level of assurance to users of audit services (Knechel et al., 2013).
H2a. Objectivity has a positive and significant effect on audit input.
H2b. Objectivity has a positive and significant effect on the audit process.
H2c. Objectivity has a positive and significant effect on audit output.
Sudsomboon and Ussahawanitchakit (2009) found that commitment to judgment expertise and ethical awareness had no effect on audit quality. Gendron, Suddaby, and Lam (2006) find that the independence commitment of public accountants is lower than that of accountants who do not work as public accountants, and that the independence commitment of public accountants working in large/international audit firms is lower than that of public accountants working in small firms. Knapp (1991) found that the length of the auditor's relationship with the auditee could hinder the auditor's independence and accuracy in carrying out the audit task. Hamilton (2005) states that one of the factors that can hamper the ability of auditors to provide quality audits is a work relationship between the auditor and the client that lasts too long. Levitt (2000) found that non-audit services had an impact on auditor independence. The length of the employment relationship and the provision of non-audit services can lead to dependence of the auditor's fee on the client and to client pressure on the auditor, which reduces audit quality and increases the auditor's litigation risk (Agus & Ghozali, 2019). Studies reveal that competence and independence influence audit quality (DeAngelo, 1981; Deis and Groux, 1992; Wooten, 2003; Agus & Ghozali, 2019).

Effect of Auditor Independence on Audit Quality
H3a. Independence has a positive and significant effect on audit input.
H3b. Independence has a positive and significant effect on the audit process.
H3c. Independence has a positive and significant effect on audit output.

Theoretical Framework
A review of previous research shows that audit quality has been investigated through several models, including the audit quality input-process-output (IPO) model, which relates audit quality to the auditor's professional knowledge and ethics.
Tangpinyoputtikhun and Ussahawanitchakit (2008) found that tax auditors in Thailand who have professional knowledge can improve the quality of their work. Tangpinyoputtikhun and Thammavinyu (2010) found that CPA auditors who have professional knowledge tend to exploit their abilities in order to create accountability and improve audit quality. The PCAOB (2013) developed an audit quality framework covering three segments, namely audit input, audit process, and audit results. Related to the auditor profession, which has the duty and responsibility to produce high audit quality, audit quality can be seen from three points of view: (a) from the input point of view, the profession requires the auditor to have technical knowledge, specialized experience (professional knowledge), and a high moral commitment (high auditor ethics); (b) from the process point of view, the profession requires the auditor to carry out the work based on the work standards and professional ethics set by the profession, and the work can only be done by individuals with the ability and educational background in accounting and auditing; (c) from the output point of view, the profession requires the auditor to present financial statements that are free from material misstatement and provide information that is useful to the public for economic decision making.

Method
This research was conducted on auditors working at Public Accounting Firms (KAP) registered in the IAI-KAP directory in 2013, especially those located in four big cities in Java, namely Jakarta, Surabaya, Semarang, and Bandung. The sampling technique was nonprobability sampling, specifically purposive sampling of the judgment sampling type. Of the 600 questionnaires distributed, 220 could be analyzed further. The integrity variable is measured by 2 items, namely the honesty and obedience of auditors to the regulations in performing audit tasks (INT1), and the courage to express things according to their consideration and beliefs when doing audit tasks (INT2). Objectivity is measured by 3 items, namely carrying out audit tasks in accordance with the facts and neither looking for faults nor hiding mistakes (OBJ1), carrying out audit tasks to meet the needs/interests of the users of financial statements (OBJ2), and taking an impartial perspective in evaluating audit evidence and preparing audit reports (OBJ3). Independence is measured by 6 items, namely: the auditor does not accept audit and non-audit assignments simultaneously from the same client in the same year (IND1); carries out the collection and evaluation of evidence (verification) accessed from the client's accounting records without adjusting the implementation to the client's requests (IND2); decides on materiality level plans and audit risk assessments without adjusting them to client requests (IND3); rejects audit assignments for clients involved in legal cases with the auditor (IND4); rejects audit assignments for clients to which the auditor is related as a shareholder/joint owner, investor, director, manager, or employee (IND5); and rejects audit assignments for clients who have financial or kinship relationships with the auditor's family (IND6).
For the IPO aspects, operationally, input is measured by 5 items, namely: a comparable ratio of audit partners to auditor staff in the KAP can improve the quality of audit implementation (INP1); the number of working hours of partners, managers, and auditors that does not exceed normal workloads, or does not exceed the total audit work hours that should be worked, can improve the quality of audit implementation (INP2); rotation or change of audit teams for the same client in each different inspection period needs to be done (INP3); a greater percentage of audit work hours performed by outsourced (freelance) workers compared with hours worked by full-time (permanent) staff can reduce audit quality (INP4); and adequate technical competence in understanding SAK, IFRS, and SPAP can improve audit quality (INP5). The audit process is measured by 4 items: the auditor considers inherent risk and control risk in developing audit programs/procedures in accordance with professional standards (SPAP) (PRO1); applies the principles of due professional care and professional skepticism in gathering and evaluating audit evidence to support his or her opinion (PRO2); conducts audit planning carefully to obtain competent, sufficient, and timely audit evidence and documents it well (PRO3); and the auditor's work is reviewed by superiors in a tiered manner before the audit report is made (PRO4). The audit report, as a reflection of audit output, is measured by 5 items, namely: the audit report must be reliable and contain findings of significant weaknesses in the client's internal control system together with recommendations for improvement as needed (OUT1); the audit report is presented objectively in accordance with SPAP and submitted on time in accordance with the agreement (OUT2); the audit report avoids cases of restatement of the audited financial statements, lawsuits arising from the use of the financial statements, and reprimands from the PPAJP (OUT3); the formulation of the auditor's opinion is based on an evaluation of the conclusions drawn from the audit evidence obtained and is adequately documented in the auditor's working papers (OUT4); and the issuance of the auditor's report and the expression of the auditor's opinion are based on the results of audits conducted according to the Auditing Standards set forth in SPAP and the provisions of applicable law (OUT5). All variables are measured on a five-point Likert scale (1 = strongly disagree to 5 = strongly agree). Data analysis was performed using linear regression techniques through IBM's Statistical Package for the Social Sciences (SPSS) version 22. Testing also included analyzing descriptive statistics and correlations between variables, as well as the coefficient of determination.

Descriptive Statistics
Descriptive statistical test results, as shown in Table 1, reveal the values given to each item of the six variables tested. For the integrity variable, the mean value for the two items is around 4.3 with a standard deviation of around 0.6. For the objectivity variable, the average value is around 4.3 with a standard deviation between 0.61 and 0.73. For the independence variable, the average value is around 4.2 with a standard deviation between 0.65 and 0.79.
For the IPO aspects, the average value for input is 3.21-4.04 with a standard deviation of 0.80-0.92; for the process variable, the average value is 3.65-4.29 with a standard deviation of 0.73-1.02; and for the output variable, the average value is 3.75-4.12 with a standard deviation of 0.76-1.03.

Reliability and Validity
The next test is the reliability and validity test. The test results present the reliability values, reflected by Cronbach's alpha (α), for the ethical factors of integrity, objectivity, and independence as 0.778, 0.771, and 0.790, respectively, while for the IPO aspects the values for input, output, and process are 0.687, 0.650, and 0.793, respectively. Using the standard from Nunally (1960) of 0.60 for accepting variable reliability, all variables in this study are stated to be reliable. Likewise, since the validity value of each construct, reflected by the corrected item-total correlation, is higher than the r-table value, all items in this study are stated to be valid (Table 2).

Correlation
The correlation test results in Table 3 show that there are significant relationships between the variables reflecting the ethical factors (integrity, objectivity, and independence) and the IPO aspects (input, process, output). The correlations of integrity with the IPO aspects were, respectively, 0.308 (p = 0.000), 0.186 (p = 0.006), and 0.466 (p = 0.000). The correlations between the objectivity variable and the IPO aspects were 0.385 (p = 0.000), 0.274 (p = 0.000), and 0.452 (p = 0.000), respectively. Meanwhile, the correlation of independence with the input variable is 0.262 (p = 0.000), with the process variable 0.156 (p = 0.021), and with the output variable 0.340 (p = 0.000). The next test is hypothesis testing.

Hypothesis Testing
The first hypothesis, which consists of 3 parts focusing on the influence of integrity, states that integrity has a positive and significant effect on audit input, on the audit process, and on audit output. The test results show that the coefficients (β) and p-values for the influence of integrity on input, process, and output are 0.068 (p = 0.464), -0.006 (p = 0.949), and 0.291 (p = 0.001), respectively. These results indicate that the integrity variable has a positive and significant effect on output (H1c). This means that the hypothesis stating the influence of integrity on output is accepted. However, integrity has not been proven empirically to significantly influence input (H1a) and process (H1b), as reflected by significance values above 0.05. In part, this study is inconsistent with some previous findings. Goodwin (1999) found that the integrity of information sources is the most significant variable for determining sensitivity to audit information sources. Intakhan and Ussahawanitchakit (2009) show that audit independence has a positive effect on audit quality. This is most likely because this study focused on building models that test abstract ethical values against empirical audit quality.
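The computations described above (item-level Likert responses aggregated into variable scores, Cronbach's alpha for reliability, and linear regressions of an IPO aspect on the three ethical factors) can be sketched as follows. This is an illustrative reconstruction only: the responses are simulated, the column names are assumptions drawn from the item codes, and the original analysis was run in SPSS rather than Python.

```python
# Illustrative reconstruction of the analysis pipeline described above; the data
# are simulated and do not reproduce the study's actual statistics.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 220  # number of usable questionnaires reported in the study

# Hypothetical 5-point Likert responses for the two integrity items.
items = pd.DataFrame({
    "INT1": rng.integers(1, 6, n),
    "INT2": rng.integers(1, 6, n),
})

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = df.shape[1]
    item_variances = df.var(axis=0, ddof=1).sum()
    total_variance = df.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

print("Cronbach's alpha (integrity items):", round(cronbach_alpha(items), 3))

# Variable scores as item means; objectivity, independence, and output are
# simulated stand-ins for the corresponding composite scores.
data = pd.DataFrame({
    "integrity": items.mean(axis=1),
    "objectivity": rng.uniform(1, 5, n),
    "independence": rng.uniform(1, 5, n),
    "output": rng.uniform(1, 5, n),
})

# Linear regression of the audit output score on the three ethical factors,
# yielding coefficients (beta) and p-values of the kind reported above.
X = sm.add_constant(data[["integrity", "objectivity", "independence"]])
model = sm.OLS(data["output"], X).fit()
print(model.params)
print(model.pvalues)
```

The same regression would be repeated with the input and process scores as dependent variables to cover all three IPO aspects.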
In addition, with respect to the significant influence of integrity on audit quality in the IPO model, this result reflects that the integrity measurement items, namely the honesty and obedience of auditors to the rules in conducting audit tasks, and the courage to express what, according to their consideration and belief, needs to be said when performing audit tasks, can affect the quality of the report as the audit output. The second hypothesis states that there is an influence of objectivity on the input, process, and output aspects. The test results show that the coefficients for the effect of objectivity on the IPO aspects are, respectively, 0.335 for input (p = 0.001), 0.301 for process (p = 0.003), and 0.238 for output (p = 0.010). These results indicate that objectivity has a positive and significant effect on the IPO aspects in auditing organizations. Thus, the hypotheses stating the positive and significant influence of objectivity on input (H2a), process (H2b), and output (H2c) are accepted. This indicates that the more objective the auditor is in performing his or her duties, the greater the influence on improving audit quality. In detail, this also reflects that the auditor is able to carry out audit tasks in accordance with the facts without looking for faults or hiding mistakes, carry out audit tasks to meet the needs/interests of the users of financial statements, and take an impartial perspective in evaluating audit evidence and preparing audit reports. These various measures, in turn, can affect the improvement of audit quality. The results are in line with Barton's (2005) argument that to develop and maintain the credibility, accountability, transparency, and accuracy of financial statements, the company's financial statements must be examined by competent and independent auditors. Practically, this shows that the objectivity measurement items in this study (carrying out audit tasks in accordance with the facts without seeking or hiding mistakes, carrying out audit tasks to meet the needs/interests of users of financial statements, and taking an impartial perspective in evaluating audit evidence and preparing audit reports) are empirically able to improve audit quality at every stage of input, process, and output in the presentation of financial statements (Fakhimuddin, 2018). The third hypothesis, focusing on the effect of auditor independence, states that there is a positive and significant effect of independence on audit quality as reflected by the input, process, and output (IPO) aspects. The test results show that the coefficients and significance values for the influence of independence are 0.003 for input (p = 0.969), -0.036 for process (p = 0.687), and 0.010 for output (p = 0.904). This means that the third hypothesis, which states a positive and significant effect of independence on audit quality, is not accepted. These results are in line with previous findings that the auditor may be unable to remain independent, a situation which can arise from a close relationship with the client and a long working relationship. For example, Knapp (1991) found that the length of the auditor's relationship with the auditee could hinder the independence and accuracy of the auditor in carrying out audit tasks. Hamilton (2005) states that one of the factors that can hamper the ability of auditors to provide quality audits is a work relationship between the auditor and the client that lasts too long.
Levitt (2000) found that non-audit services had an impact on auditor independence. The length of the employment relationship and the provision of non-audit services can lead to dependence of the auditor's fee on the client and to client pressure on the auditor, which results in a decrease in audit quality.

Conclusion
This study aims to analyze the relationship between ethical factors and audit quality in an IPO model. Specifically, the ethical factors are reflected by the integrity, objectivity, and independence of auditors, while audit quality is proxied by inputs, processes, and outputs. The results show that among the three ethical factors, only one factor, namely objectivity, is proven to be empirically influential on audit quality across inputs, processes, and outputs. Meanwhile, the integrity factor only has a positive and significant effect on output, and the independence factor has not been proven to have a significant effect on the aspects of audit quality within the framework of the IPO model. This research is motivated by the need for research on audit quality in relation to the ethics held by the auditor in financial audits, especially regarding the internal input factors of audit quality in the form of professional knowledge, and auditor ethics as external input factors of audit quality. This study offers originality in directly testing the effect of ethics on audit quality within an IPO (input-process-output) framework. Basically, ethics consists of objective abstract values that are carried out and perceived subjectively by each individual, although these perceptions tend to converge on what is considered unethical. On the other hand, audit quality is an aspect that is easily assessed in financial audit procedures. This has been a topic widely discussed in previous studies; however, the formulation of specific models for testing and proving the usefulness of such relationships has not been widely discussed. Furthermore, the findings of this study present a model framework that analyzes the influence of ethical factors consisting of integrity, objectivity, and independence on audit quality with specific variables. Conceptually, the originality value offered by this study is its attempt to empirically test abstract values, namely ethical factors, against audit quality. As an implication, this research emphasizes the importance of increasing professional knowledge and the related auditor ethics. This capability specifically refers to the auditor's ability to find and report fraud. The ability to find fraud relates to technical ability and the practical knowledge gained from auditor experience, while the ability to report is related to integrity, objectivity, and independence as parts of auditor ethics. Thus, the auditor's professional knowledge and ethics determine audit performance, namely audit quality. The limitation of this study lies in the fact that, overall, the ethical factors were not all proven to significantly affect audit quality. In addition, the generalizability reflected by the adjusted R-square for all tests shows a low value. Future studies are expected to further examine the relationship between auditing ethics and audit quality in a more comprehensive IPO model. In addition, future studies are expected to establish in detail the usefulness of the IPO model in empirically demonstrating the importance of ethics in financial statement examination through auditing.
This is because ethics comprises objective yet abstract values that are practiced and perceived subjectively by each individual, although these perceptions tend to converge on what is considered unethical. On the other hand, audit quality is an aspect that is easily assessed in financial audit procedures. Thus, testing the relationship between ethics and audit quality will remain relevant in subsequent investigations.
2020-03-19T10:30:44.247Z
2020-03-16T00:00:00.000
{ "year": 2020, "sha1": "6f541c7552b9981d745910471e41bcf07feeb1d4", "oa_license": "CCBY", "oa_url": "http://www.sciedupress.com/journal/index.php/ijfr/article/download/16233/10799", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "c63520b32a935f4a4b86d13b8fbfbb818bb4e221", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Business" ] }
236834227
pes2o/s2orc
v3-fos-license
Construction Subcontracting Policy Framework for Developing Local Contractors Capacities in Zambia

The Zambian construction industry, like that of many developing countries, has over the past years experienced an imbalance in the distribution of works between local and foreign contractors. In a bid to bridge the gap, the Government of the Republic of Zambia in 2012 introduced a policy on subcontracting which provided for the mandatory subcontracting of 20% of all major contracts to local contractors. There have, however, been outcries from subcontractors that the policy has not been beneficial. The study sought to investigate subcontracting practices in order to develop a framework for building the capacity of local contractors within the construction industry in Zambia. The objective of the study was to explore the regulatory requirements on subcontracting in Zambia and establish the inadequacies of the 20% subcontracting policy. The study adopted a mixed-method approach in which both semi-structured interviews (with main contractors, subcontractors, consultants, and project owners) and a survey questionnaire were used for primary data collection. The questionnaire was distributed to 70 respondents, and a response rate of 71% was attained. The investigation was conducted on 40 projects implemented in Zambia between 2012 and 2015. The study established four major deficiencies of the policy: subcontractors do not participate early in the procurement process and are introduced after the contract is awarded; there are no clear guidelines on the implementation of the policy; subcontractors do not take part in determining the works; and it is difficult to grow the capacity of local contractors using the 20% subcontracting policy because contractors engaged as main contractors on projects do not show interest in developing and building local contractors' capacity due to a lack of incentives. A framework was developed that can be used to meet the objectives of the study and of the subcontracting policy and to reduce the current inadequacies. The study recommended the use of the proposed framework by the government to reduce the current gaps.

Introduction
National economies develop with a thriving construction industry, as it is a very significant industrial sector. Deloitte (2018) is cited as highlighting that, at the global level, the industry is enormous, valued at around USD 17 trillion in 2017 and expected to grow to USD 69.4 trillion by 2035. Additionally, the contribution of the construction industry at the global level in relation to gross domestic product (GDP) is estimated at 15% of GDP in 2020 (PricewaterhouseCoopers, 2013) and is expected to increase by 3.9% per annum to 2030 (PricewaterhouseCoopers, 2017). In South Africa, the construction industry is significant and contributed 3.9%. The distribution of works has, however, disadvantaged local contractors, according to the report by the Ministry of Works and Supply (2018). To respond to this challenge, the Zambian Government adopted a requirement on the subcontracting of all public works in 2012 (NCC, 2014), which saw the increase in the threshold from 10% to 20% of the contract sum on major contracts in the country. The aim of the 20% subcontracting threshold was to empower local contractors in order to build their capacity and also to create jobs (National Council for Construction and Zambia Institute for Policy Analysis and Research, 2017). According to Kulemeka et al.
(2002), the existing variances disable local contractors' capacities in relation to registration according to institutional requirements. In recent years, Zambia has seen major construction works that are capital intensive and hence limited to contractors in the higher grades of the contractors' classification system in Zambia, as highlighted by the National Council for Construction (2018) and the National Road Fund Agency (NRFA) (2015). According to the NCC (2018), local contractors registered in Grades 1 and 2 make up only 34% of the total contractors registered, implying that foreign contractors take up the remaining 66%. This excludes most local contractors from participating in the procurement of major construction works, since a large number of them are registered in the lower categories. Nonetheless, despite the introduction of the policy, subcontractors, who are the intended beneficiaries, have complained that the policy has not been implemented well and is not beneficial to them (Daily Nation, 2018; Zambia National Broadcasting Corporation, 2015). There has also been little documentation of subcontracting practices in Zambia. This study, therefore, focused on developing a mandatory subcontracting practice framework that would help in building the capacity of local contractors in the Zambian construction industry. It was thus important to assess the existing subcontracting regulatory framework in Zambia, establish the inadequacies of the mandatory 20% policy in developing local contractors' capacities, and then recommend improvements that would culminate in the development of a subcontracting framework for developing the capacity of local contractors. The results will benefit the Zambian government in making policy improvements to the 20% subcontracting policy. It is envisaged that the study findings will add to the construction industry body of knowledge in relation to the subcontracting of local contractors. Hoban and Francis (2010) define subcontractors as specialists hired by the main contractor to perform specific tasks on a project as part of the overall contract. It has been generally accepted that subcontractors play a significant role in the execution of construction work (Akanni & Osmadi, 2015; Abbasianjahromi et al., 2013; Hoban & Francis, 2010). The general contractor's performance is strongly dependent on subcontractors (Albino & Garavelli, 1998). Mbachu (2008) reinforced this notion and stated that the ability of the general contractor and consultant to deliver the project within time, quality, and cost depends largely on the performance of subcontractors. Additionally, the contribution of subcontractors to construction works can be more than 50%, while in some sectors it can be as much as 90% of the total project value (Kumaraswamy & Mathews, 2000). According to Arditi and Chotibhongs (2005), subcontracting has proved to be efficient and economical in the use of available resources. They argued that qualified subcontractors are usually able to perform their work specialty more quickly and at a lower cost than the general contractor. It is also noted that subcontracting can improve the quality of work and reduce project time and costs (Ng et al., 2008). Kulemeka et al. (2015) posit that the factors preventing good performance by local contractors are economic in nature and conclude that local contractors will remain unsustainable and their performance unsatisfactory if governments do not intervene.
They further added that, in order to address the challenges faced by Malawian local contractors, government review of policies for the development of small-scale contractor programmes ensures that local contractors contribute to the growth of the economy.

Literature Review
While there are ethical issues to be considered in the construction industry, Transparency International (2005) notes that the industry is classified as the most fraudulent industry worldwide. Studies carried out in various countries such as the United States of America (USA) (FMI/CMAA, 2004; Jackson, 2013), Australia (Vee & Skitmore, 2003), South Africa (Pearl et al., 2005), and Hong Kong (Fan & Fox, 2005) provide evidence that the construction industry is plagued with ethical issues due to its substantial capital investments (Adnan et al., 2012). Additionally, Adnan et al. (2012) stated that unethical practices can take place at every phase of a construction project: during planning and design, pre-qualification and tender, project execution, and operation and maintenance. Such practices have been seen to result in projects which, when completed, are considered unnecessary, unsuitable, overly complex, overpriced, or delayed, as postulated by Hamzah et al. (2010), Akintan and Morledge (2013), and Dainty et al. (2001). Within the Zambian construction industry, unethical issues arise from SMEs' tendency to sell their subcontracted portions back to main contractors (Muya & Mukumbwa, 2013). Literature has shown that there is a general propensity to transmit huge project risks to small-scale contracting firms whose capacity is inadequate to manage those risks (CIDB, 2013; Marzouk et al., 2013; Laryea, 2010). This makes subcontractors uncertain about the genuineness of main contractors in the association. On the other hand, main contractors are uncomfortable with subcontractors, whom they consider to understaff or employ unskilled workers, thus affecting the rate at which the works on site are implemented and thereby leading to conflicts. The subcontractor's lack of capacity is often used as an excuse for harsh conditions and terms, leading to failure to meet the set project objectives (Construction Excellence, 2004). Various studies to improve subcontracting practice, from the point of registration to selection and monitoring, have been done (Ng et al., 2008). According to Lew et al. (2012), most researchers focus on the constituents that influence subcontracting and on the development of new techniques that can be used for subcontractor selection or management. Kulemeka et al. (2015) note that it is critical for governments to continuously review policies on contractor development programmes to ensure they contribute to their success. Yoke-Lian et al. (2012) indicated that effective subcontractor selection and monitoring would minimise problems during construction. Additionally, Laryea (2010) emphasised that governments have much to do to enhance the capacity of subcontractors to be involved in large projects, through the creation of access to capital and improved structures.

2.1 Subcontracting policy, practices and challenges
Many countries that strive to industrialise and bring about economic development encourage the promotion of SMEs, which are mostly subcontracted, with governments developing policies that encourage subcontracting (Hoban & Francis, 2010).
The participation of local construction firms as subcontractors to foreign firms is an important element in the concept of skills and technology transfer as well as in building the capacities of local contractors (Abu Bakar & Tufail, 2012). Choudry et al. (2012) pointed out that, as a deliberate policy, governments should establish regulatory bodies to monitor the implementation of subcontracting policy. However, CIDB (2013) postulates that successful skills and technology transfer happens when the main contractor and subcontractors are in long-term strategic relationships, as most subcontractors are reluctant to share confidential information with other companies, especially financial information. Wells (2000) postulates that the construction industry in Africa is characterised by extensive subcontracting, temporary and insecure employment, and poor working conditions. The CIDB (2013) study on subcontracting in the construction industry in South Africa indicated that legislative and policy interventions around subcontracting should be aimed at improving the environment within which subcontracting takes place. In South Africa, various public sector projects encourage the development of local economies through adherence to set policies and regulatory requirements, as can be seen in the targets set on the socio-economic front for training and development of skills, employment of locals, and black economic empowerment (CIDB, 2013). However, Mwanaumo et al. (2014) argued that contractors feel that such requirements on projects have worked well, especially in creating employment, but are difficult to sustain in a difficult economic climate where projects are hard to come by. Cheng et al. (2011), in their research on evaluating subcontractor performance, proposed twelve (12) significant factors to be included in the subcontractor selection policy and concluded that a trained input-output mapping relationship and subcontractor final scores should be used as key policy factors in building subcontractor capacity. However, Laryea (2010), in the Ghanaian study on the evolution of indigenous contractors, revealed that, with the dominance of foreign contractors, local contractors lack the capacity to carry out major projects, and hence recommended that the government develop policies that would encourage local participation, including subcontracting local firms in huge contracts to develop skills. It has been argued that there is no standard common practice in the formulation of subcontracting policy, and each country comes up with its own framework based on local factors, prime contractors, clients, and the other related policies that support subcontracting (Choudry et al., 2012). They noted that the factors that could positively encourage local contractor capacity building include technical and professional training through knowledge and skills transfer in financial, managerial, technical, and technological areas; efficient communication; and clear roles and responsibilities for enforcement. This was also affirmed by CIDB (2013) and Martin (2010), who added that these factors should guide subcontracting policy formulation and should aim at improving the environment in which subcontracting takes place and the forms of contract, and at improving, at the management level, the organisational aspects of subcontracting firms. Thwala and Mvubu (2007), however, suggested that identification of local capacity is imperative for planning purposes in future projects.
They also added that the use of an integrated construction unit method of procurement helps grow local contractors in developing economies. Kulemeka et al. (2015) and CIDB (2013) emphasised that training for local contractors be incorporated at the planning and design stage. Muya and Mukumbwa (2013) proposed that a policy should include integrity improvements, with the supply of equipment handled by deducting costs at the beginning of certification, and with consultants approving payments to be executed by local contractors, who are expected to be available for the particular period for which they are engaged. A similar notion was also attested by Abbasianjahromi et al. (2013). Nonetheless, though subcontracting is widely used in the developed world, it has been criticised for bringing about challenges for the firm that is subcontracted, and these challenges are more common in developing countries than in developed countries, according to Mlenga (2002). The Zambian 20% subcontracting policy provides for the empowerment and capacity building of local contractors, employment creation for Zambian people, and the mandatory allocation of 20% of the contract sum to local contractors on all public projects awarded to foreign contractors. However, Zambia has been experiencing challenges in implementing the policy requirement (Kaliba, 2015). Kaliba (2015) pointed out that the 20% subcontracting policy is not legally supported, as it has not been passed through parliament for ratification and has no existing guideline for implementation, while main contractors are uninterested in helping local contractors, and there is evidence of poor planning to help local contractors build capacity in managing subcontracted works. Choudry et al. (2012) noted that knowledge sharing by main contractors is very poor, as they prefer to continue enjoying their monopoly. Several challenges faced by subcontracting firms have been highlighted by the CIDB (2013), ranging from lack of payment security, weak management practices, poor attitudes, skills shortages, and lack of working capital, to failure to cope with the competition arising from low barriers to entry. These challenges inhibit subcontractors from growing their companies and moving to a higher grade, and thus from executing quality work. However, in countries where similar or related policies have been adopted, improvements to the subcontracting policies have been suggested, including reviewing the policy if it is not working (Abbasianjahromi et al., 2013). This was also affirmed by Kaliba (2015), though he noted that even if the policy is reviewed, the client needs to have a long-term plan for assisting local subcontractors in capacity building to meet deliverable milestones.

Research Methodology
The study sought to examine subcontracting practices in order to develop a capacity-building framework for Zambian local contractors within the construction sector. The study adopted an exploratory mixed-method approach in which semi-structured interviews and a survey questionnaire were used. Only 26 out of the targeted 30 stakeholders participated in the interviews, which were used to obtain an in-depth understanding (Creswell, 2014) of subcontracting practices in Zambia. The target group included management of subcontractors, main contractors, clients, and consultants. Data were captured through audio recording and note taking and were analysed using thematic analysis.
Purposive sampling was used because it purposely targets a group of people perceived to be reliable and useful to inform the field survey questionnaire (Creswell, 2014), which was pretested for validity. A sample size of 70 was adopted for convenience, of which a total of 50 questionnaires were successfully completed, giving a response rate of 71.4%, which is acceptable (Creswell, 2014). The distribution is presented in Table 1.

Results
The following are the results from the survey questionnaire.

Category and Grade of Companies' Registration
This section presents information on the characteristics of the respondents who were contractors. The results represent contractors in the Building (B), Civil engineering (C), and Roads (R) categories. These are recipients of many public contracts related to the construction of schools, hospitals, health posts, roads, and bridges. The highest percentage of respondents, at 27%, came from category 4R, followed by category 1R at 22%; category 6R at 14%; categories 1B, 1C, and 6B at 9% each; and, lastly, categories 5R and 5B at 5% each. The distribution of contractors according to categories and grades obtained from the NCC is presented in Table 2.

Contractual Arrangements
The proportion of respondents who had been involved in the traditional contractual arrangement was 71%, followed by those who had been involved in design and build at 16%. The third most common arrangement was the Integrated Construction Unit (ICU), which accounted for 10%, while 3% had experience with the management contracting method. Table 3 presents a summary of the findings. On the preference of contractual method for the subcontracting policy, it was established that 48% of the respondents favoured traditional methods of procurement, while 45% were of the view that the management contracting method would be better for implementing the policy. However, 5% and 2% preferred the integrated construction unit and the design and build method, respectively.

Analysis of Implementation, Constraints and Improvements to the Policy
The respondents were asked to rate statements concerning the mandatory subcontracting policy on a Likert scale of 1 to 5. A total of 46 statements obtained from the preliminary interviews and the literature review were adopted for this study. Nine statements relating to inadequacies of the mandatory subcontracting policy were established and included: "the 20% subcontracting policy is not legally supported as it did not pass through parliament for ratification" and "it is difficult to grow the capacity of local contractors using the 20% subcontracting policy as main contractors are not interested in building the capacities of local contractors due to lack of incentives". Other statements include "the lack of a strategic plan on subcontracting makes it difficult to build the capacity of local contractors", "no participation of subcontractors in the determination of work", and "main contractors want to retain maximum benefits and are thus reluctant to subcontract". Additionally, main contractors are not willing to impart skills to subcontractors, in order to continue enjoying their monopoly; the lack of local contractor capacity constrains main contractors in building capacity for local contractors; and there are no clear guidelines on the implementation of the policy. Lastly, subcontractors do not participate in the procurement process and are only introduced after contracts are awarded.
These statements are similar to those in the literature reviewed, such as CIDB (2013), Kulemeka et al. (2015), Kaliba (2015), Mbachu (2008), Abbasianjahromi et al. (2013) and Thwala and Mvubu (2007). Inadequacies in the 20% mandatory subcontracting policy The statements submitted by respondents were analysed with respect to inadequacies of the 20% subcontracting policy. The results show that, out of the nine statements, six had a mean score greater than 3.5. The descriptive statistics are presented in Table 4, indicating that there were dissimilarities in the way different construction practitioners in Zambia perceived the existing inadequacies of the mandatory subcontracting policy. The statements from Table 4 were further analysed in order to identify those which were either important or very important. The cut-off point for the mean score was set at 3.5. Out of the nine statements, six were found to have a mean score greater than 3.5. After a standard t-test, only four statements were found to be statistically significant (p < 0.05). They include: subcontractors do not participate in the procurement process and are only introduced after the contract is awarded; and there is no participation of subcontractors in the determination of work. The other two are: there are no clear guidelines on the implementation of the policy; and it is difficult to grow the capacity of local contractors using the 20% subcontracting policy because main contractors are not interested in building the capacity of local contractors due to lack of incentives. Improvements to the 20% mandatory subcontracting policy The results from this part of the questionnaire were analysed with respect to respondents' perceptions of possible improvements to the implementation of the subcontracting policy. The initial stages of the analysis used descriptive statistics and the results are presented in Table 5. Interview results The purpose of the interviews was to obtain an in-depth understanding of how the various stakeholders in Zambia view the 20% subcontracting policy. The interviewees agreed that "work allocation has to be done by the engineer/consultant at design stage instead of foreign or main contractors in order to remove bias". The subcontractors indicated that they needed to be involved in work allocation. On establishing the common methods of engaging contractors, the interviewees stated that "it would be difficult to achieve the objective of empowering and creating jobs for the local contractors if main contractors were left alone to engage subcontractors". The preference for nomination of subcontractors by clients was justified on the grounds that "it would enhance fairness and reduce the cases of main contractors buying off subcontracts whilst pretending to have subcontracted". On awareness of the subcontracting policy, the results showed that the respondents generally understood the main features of the 20% subcontracting policy. Over 90% of the interviewees stated that the main features of the subcontracting policy included: (i) mandatory subcontracting of 20% of the contract sum to local contractors on public projects, provided the contract sum was above Thirty Million Kwacha and the contract was awarded to a foreign contractor; (ii) building capacities and empowering local contractors; and (iii) fostering employment creation for the Zambian people. On assessing whether the policy addresses the interests of both main and subcontractors, about 76% indicated that the current policy does not support the interests of contractors and their subcontractors.
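The statistical screening described above (a mean-score cut-off of 3.5 followed by a one-sample t-test at p < 0.05) can be sketched as follows. This is an illustrative reconstruction rather than the authors' actual analysis script, and the null value of 3.0 (the scale midpoint) is an assumption, since the paper does not state the value against which the t-test was run.

import numpy as np
from scipy import stats

def screen_statements(responses, cutoff=3.5, alpha=0.05, null_value=3.0):
    # responses: dict mapping statement text -> list of ratings on the 1-5 Likert scale.
    # Keep statements whose mean exceeds the cutoff and whose mean also differs
    # significantly from the assumed null value (one-sample t-test).
    kept = {}
    for statement, scores in responses.items():
        scores = np.asarray(scores, dtype=float)
        mean_score = scores.mean()
        t_stat, p_value = stats.ttest_1samp(scores, popmean=null_value)
        if mean_score > cutoff and p_value < alpha:
            kept[statement] = (round(mean_score, 2), round(float(p_value), 4))
    return kept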
However, when asked about the functionality of the laws in the construction sector, the majority of the respondents (64%) were affirmative, while 36% were not sure. To establish the challenges in implementing the subcontracting policy, the data were recoded and analysed qualitatively. The main reasons, in order of importance, attributed to the policy's failure to meet the interests of the main and subcontractors included: (i) lack of interest by main foreign contractors in building the capacity of local contractors; (ii) main contractors view subcontractors as potential competitors; (iii) main contractors are not willing to subcontract 20% of the contract sum; (iv) main contractors allocate low-value works to subcontractors so that they maximise profits; (v) lack of experience, personnel, equipment and poor financial resources among local subcontractors; (vi) insufficient capacity in project management among subcontractors inhibits the possibility of subcontracting 20% of huge or high-value projects. Discussion The study identified four major inadequacies of the 20% mandatory subcontracting policy. The first relates to the non-participation of subcontractors in the procurement process, as they are introduced onto the project after the award of the contract to the main contractor. This inadequacy was also similar to the insufficiency highlighted by Hoban and Francis (2010) in their study on improving the relationship between contractors and subcontractors. The second inadequacy relates to the non-participation of subcontractors in the determination of work. This was also highlighted as a major concern in the procurement of subcontractors and considered a major challenge in the traditional procurement method (Mwanaumo, et al., 2014; Akintan & Morledge, 2013), culminating in inappropriate durations being allocated to critical project activities. However, the study found that adequate and broad sharing of information about the works was not done properly, thereby increasing the possibility of programme failure and leading to project delays (Hoban & Francis, 2010; CIDB, 2013). To avert this inadequacy, it would be necessary to engage subcontractors during the design stage, a sentiment which was shared by Ng et al. (2008). The awareness level of technical professional subcontracting was found to be high among clients, consultants and contractors, and a similar result was affirmed by Akanni and Osmadi (2015), who found that such subcontracting was widely used. The third inadequacy established by the study was that the lack of clear guidelines on the implementation of the policy is one of the major deficiencies of the mandatory subcontracting policy as applied in Zambia. CIDB (2013) underscored the importance of legislation and policy interventions around subcontracting firms in a bid to improve the environment within which subcontracting takes place. In the absence of clear guidelines, policy implementation would be a challenge and subject to individual interpretation (CIDB, 2013). The policy should be accompanied by clear strategies for building local contractors' capacities. The strategy must be clear on how many local contractors have to be upgraded according to the NCC classification (NCC, 2018). This relates to the notion of Kulemeka et al. (2015) that providing an environment that empowers small subcontracting firms includes the elimination of barriers to market entry, growth and sustainability.
The fourth inadequacy established was that the 20% subcontracting policy makes it difficult for local contractors to build capacity because main contractors, lacking incentives, show no interest in assisting with capacity building. This finding relates to the adversarial relationship which exists between main contractors and subcontractors. The findings agree with Kaliba (2015) and Choudry et al. (2012), whose assertions were based on main contractors being unconcerned with developing the local contractors subcontracted under them. Main contractors and subcontractors operate in line with conflict theory, which emphasises the presence of conflicting forces in society, social structures, groups and individuals generally (Abbasianjahromi, et al., 2013; Akintan & Morledge, 2013). The theory perceives human society as a gathering of interest groups and individuals that compete with each other in relation to motives and expectations. This was evident in the study, which established that the Zambian construction industry's main contractors are not willing to subcontract their works. Subcontractors can therefore be engaged through nomination by clients. Yoke-Lian et al. (2012) and Laryea (2010) note that difficulties arise when striving to empower and create jobs for local contractors whenever main contractors engage subcontractors on their own. The study disclosed that preference for the nomination of subcontractors by clients enhances fairness and reduces the cases of main contractors buying off subcontracts whilst pretending to have subcontracted. The study findings agree with those in which subcontracts were alleged to be bought off by main contractors. Hence, it is logical to conclude that, if main contractors were left alone to implement the policy in its current state, very few subcontractors would be engaged. Improvements and modifications to the policy would enable reviews of the mandatory subcontracting policy that include all other sectors of the Zambian construction industry, in order to empower and create jobs for local contractors. This is in line with Kumaraswamy and Mathews (2000), who avowed that the contribution of subcontractors in other sectors of construction is more than 50%, and as much as 90%, of the total project value in a construction process. Other improvements include: identifying local contractors with potential for growth under a deliberate programme; using the Integrated Construction Unit method of procuring works done by local contractors so that they can grow; training of identified local contractors by consultants based on the works identified; and consultants approving payments made by local contractors. These relate to the findings of Thwala and Mvubu (2007). The study also gathered that training of local contractors should be included at project design; that works should be made accessible to the identified local contractors for not less than three years; and that skills development and employment of the identified local contractors should continue in other projects subcontracted by main contractors; these are in line with CIDB (2013). The study established that empowering and developing the capacities of small-scale contractors involves strategic planning on the part of government. The findings of this study conform with findings which established that comprehensive and detailed planning processes with set quantitative and qualitative targets guide implementing institutions.
However, Thwala and Phaladi (2009) stated in their study that significant lessons can be drawn from initiatives that have been undertaken before, such as advocating the interests of emerging contractors and ensuring that policies and procedures in the construction industry create an environment conducive to the development and promotion of emerging contractors; increasing the participation of emerging contractors in construction activities; and substantially increasing emerging construction enterprises' share of work opportunities within the public sector. Additionally, several proposals were made to develop a subcontracting policy framework as a starting point and a means to develop local contractors' capacities so that they can participate in the implementation of major contracts in the country. This was the basis of, and one of the main objectives of, the study. Development of the subcontracting policy framework The study established that the majority of construction companies in Zambia are small-scale in nature. Thwala and Phaladi (2009) mentioned in their study that no single success factor is sufficient for the success of small-scale contractors unless different factors are considered together to ensure that capacity is developed in the local contractors; a model was therefore developed. The subcontracting framework was developed to meet the objectives of the 20% mandatory subcontracting policy, as illustrated in Figure 1. The key stakeholders involved are: the client (public, private or quasi-government), the consultant (engineers, architects, and quantity surveyors), the equipment suppliers or key material suppliers, commercial banks and financial institutions, the main contractors and the subcontractors. The framework works in two stages. Firstly, the client makes a preliminary assessment of whether or not the proposed project can be used for a capacity building programme. Secondly, if a project is useful for capacity building, the client proceeds with the procurement of a design and supervision consultant whose responsibilities include: designing the project; preparing project specifications as well as work breakdown packages for the subcontractors; and project supervision. The consultant can then allocate a prime cost sum to the works identified for subcontracting and then propose the equipment and key project materials required for the subcontracting works. Key points regarding the proposed framework include: the client would provide advance payment to subcontractors to kick-start the subcontracting works; a meeting would be set up for all the stakeholders involved to explain the roles and responsibilities of all the parties to the contract; during the project implementation process, the cost of the procured equipment/key project materials would be deducted proportionately through the interim payment certificates (IPCs); the authorisation of the use of funds would not be left to the subcontractors alone; and the client would guarantee contracts for the earmarked subcontractors for the project duration to ensure that the equipment and/or key project materials procured during this time would be paid for by the subcontractors from the proceeds of the project. The client is the employer of the project team members. The client has the responsibility of empowering and building the capacities of its citizens. The client will have the overall responsibility for, and control of, the project. The client will have to assess initially whether or not the project can be used for capacity development of subcontractors.
Stages and roles of the stakeholders in the framework The consultant is responsible for designing and supervising the works. The consultant would be responsible for allocating works for subcontracting at the design stage. Works to be subcontracted would be reviewed together with the client and other stakeholders upon completion of the designs by the consultant. At this stage, comments on work allocation are added. The consultant's overall responsibility is the delivery of the project. The capacity development consultant is responsible for supervising and providing training to the subcontractors. The training package includes technical and financial management, while subcontractors will be encouraged to maintain qualified personnel. The capacity development consultant would advise and report to the client on all matters relating to subcontracting and capacity development of subcontractors. This stage shall be used to review the designs and works allocated to subcontractors. The stakeholders involved in the review process include the capacity development consultant, the design consultant, the client and any other stakeholders relevant to the capacity development programme. This stage would eliminate the allocation of low-value works to subcontractors. The client will issue the expression of interest for local contractors. The client will then have a list of local contractors according to their specialisation. Capacity development criteria would be developed in order to shortlist the local contractors. The subcontractors for the capacity development programme would then be nominated from the list of approved local contractors. The equipment/key materials suppliers will provide materials, equipment and back-up spares to the subcontractors through the projects. The suppliers of equipment will provide on-site service and training to subcontractors' personnel for sustainability purposes. Key materials like cement, steel, fuel, bitumen and aggregates, to mention but a few, would provide a reasonable boost to the implementation of the project by the subcontractors, who are usually financially weak compared to major foreign contractors. The role of the main and subcontractors would be to deliver the project to the client according to the specifications. The subcontractors will have a duty to learn and develop their technical and financial management skills from the main contractors and consultants. The scope of works for subcontractors will be well defined in the contract documents. The contracts will also provide the criteria on how the performance and quality required for the works will be measured, the methods for performance measurement, and acceptance. The terms and conditions for subcontracting will be included in the contract documents, especially payment terms, retention, advance payment bond, defect liability period and liquidated damages. The monitoring of the subcontractors and the project would be carried out for purposes of checking progress and evaluating the capacity building programme. The evaluation would also be carried out to assess which subcontractors would graduate from the programme. The process would also be used to reassess those subcontractors failing to perform. Performing subcontractors would be recommended for upgrade according to NCC grades and categories. At the end of the project, it is expected that some subcontractors would have been capacitated and upgraded. The upgraded local contractors would then be assisted to tender for high-value jobs, especially those who will be in Grades 1 and 2.
Conclusions and Recommendations The paper examined the mandatory 20% subcontracting policy in the Zambian construction sector. The policy was formulated to help bridge the gap between firms of foreign origin and indigenous contractors. Through this study, it was established that the current policy has four major deficiencies in so far as meeting the objectives of empowering and building capacities of local contractors is concerned. The mandatory subcontracting policy operates within the traditional procurement realm, where the main contractors normally engage subcontractors to carry out part of their contractual work based on the skills and capacity which the subcontractors supposedly possess. This conflicts with the objective of the policy, which emphasises the transfer of skills and the building of capacities of local contractors. Thus, even though the policy is in place, its implementation could be difficult. The study developed a subcontracting policy framework that would help build local contractor capacities in Zambia so that they are able to participate in construction projects. It would therefore be imperative that the procurement and contracting strategies be modified if the objectives of the policy are to be met. The Government of Zambia should thus review the method of engagement and the work allocation system, and formulate clear guidelines on the implementation of the policy. The procurement and contracting strategies should also take into consideration the ever-present adversarial tendencies that exist between main contractors and subcontractors.
2021-08-04T00:04:57.485Z
2020-04-27T00:00:00.000
{ "year": 2020, "sha1": "3b39d1ccaca1220edf6e02e0b57b4dd2232641e0", "oa_license": "CCBYNCSA", "oa_url": "https://journals.uct.ac.za/index.php/jcbm/article/download/644/661", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "380d3cdb4f8b61efe4bcacb8f10a230a251fc3e2", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [] }
9959584
pes2o/s2orc
v3-fos-license
Compact, Wearable Antennas for Battery-less Systems Exploiting Fabrics and Magneto-dielectric Materials In this paper, we describe some promising solutions to the modern need for wearable, energy-aware, miniaturized, wireless systems, whose typical envisaged application is a body area network (BAN). To reach this goal, novel materials are adopted, such as fabrics, in place of standard substrates and metallizations, which require a systematic procedure for their electromagnetic characterization. Indeed, the design of such subsystems represents a big issue, since approximate approaches could result in strong deviations from the actual system performance. To face this problem, we demonstrate our design procedure, which is based on the concurrent use of electromagnetic software tools and nonlinear circuit-level techniques, able to simultaneously predict the actual system behavior of an antenna system, consisting of the radiating and of the nonlinear blocks, at the component level. This approach is demonstrated for the design of a fully-wearable tri-band rectifying antenna (rectenna) and of a button-shaped, electrically-small antenna deploying a novel magneto-dielectric substrate. Simulations are supported by measurements, both in terms of antenna port parameters and far-field results. Introduction Nowadays, there is an increasing demand for the miniaturization of all modern transceiver components, mainly due to the need for combining their computational capabilities and portability: one of the main design constraints becomes their integration in small volumes (on the order of some cm³ or even less). A big push to distributed and pervasive wireless systems applications is also given by the so-called body area networks (BANs), where small and ultra-low-power sensors create a wireless network with a coverage limited to the space surrounding the human body. These technological trends highlight the necessity to fulfill different and conflicting needs at the same time: in spite of the small volume of the possibly wearable sensor, its energy consumption should be low, and in modern low-environmental-impact engineering applications, its energy autonomy is envisaged. In this scenario, a strategic role is played by the capability to accurately design innovative wireless devices to be deployed in wearable [1] and, prospectively, implantable BANs. Due to the typically small power budget involved, and to the presence of non-conventional wireless links and innovative materials, the optimization of this family of radiating sub-systems is a delicate issue: a rigorous electromagnetic (EM) description of the whole scenario is mandatory in order to achieve reliable results. The unavoidable presence of nonlinear devices forces a circuit-level, layout-wise nonlinear/EM design approach [2].
In this paper, we first focus attention on the need to accurately characterize, from the EM point of view, the innovative materials adopted in the design process [3][4][5]. We then describe our computer-aided design (CAD) platform based on the concurrent exploitation of nonlinear and EM analyses, which represents an indispensable tool for the design of wearable radio frequency (RF) energy harvesters [6]. This multi-domain CAD approach is applied to the accurate design of a highly efficient, completely wearable, tri-band rectifying antenna (rectenna), consisting of a compact multilayer structure resonating at the GSM 900, GSM 1800 and WiFi frequencies [7,8]. Since the most space-consuming component in modern transceivers is the radiating element, to ensure the best radiation efficiency, we finally describe the promising design of an electrically small UHF antenna built on an innovative magneto-dielectric (MD) material: the EM properties of the adopted hexaferrite substrate can be efficiently exploited for miniaturization purposes by planar patch topologies, as demonstrated by the proposed button-shaped solution [9]. In both of the described projects, prototype realization and extensive measurement campaigns are presented to demonstrate the accuracy of the design procedure. Wearable Material Characterization The wearable feature of the radiating systems has been pursued by resorting to truly wearable materials or by miniaturizing the antenna itself through the exploitation of a dense material, from both the dielectric and the magnetic constitutive characteristics. In both cases, an accurate electromagnetic characterization of the material properties is needed to carry out a successful design of the radiating systems under examination. There is a wide plethora of textile substrates, some of them EM-characterized for wearable applications [10,11], even if not always extensively from the frequency-band point of view. Significant effort has also been made to study the effects of adverse conditions in wearable antenna applications [11][12][13][14]. For our first design, we select a common pile fabric to be used as the substrate of a planar antenna, adopting a metallic fabric for the conductive parts of the system. This combination of materials allows a fully-flexible system to be completely integrated, or even hidden, in garments. As an alternative solution, we have studied a new magneto-dielectric material allowing efficient antenna realization in the UHF band with dimensions comparable to garment buttons. Information on material properties is totally missing in these cases. To define the relative permittivity ε_r and dielectric loss tangent tanδ_ε of the fabrics, analytical methods based on resonant structures can be adopted [3,4], as described in the following. Measurements of the same resonant structures can be used for checking the analytical results. A more complex situation has to be faced for the complete characterization of a magneto-dielectric substrate, since two further unknowns need to be defined, namely the relative permeability μ_r and the magnetic loss tangent tanδ_μ. In this case, a reverse model engineering approach is chosen and the material characteristics are derived from a set of measured scattering parameters of a two-port planar resonant structure. Let us consider firstly the simpler case of the non-conductive 4 mm-thick pile fabric (labelled Pile 1), used as the dielectric support for the wearable antenna described in Section 3.
The procedure we adopt for the EM characterization of the material consists of the following steps: (I) we start from an estimation of the relative permittivity and the design of a T-resonator based on this estimation; (II) using full-wave solvers [15], a large database containing the scattering parameters of the resonator is built by varying the EM properties of the substrate (ε_r and tanδ_ε); (III) then, the same resonator is realized on a pile substrate (see Figure 1a) and the transmission coefficient between Port 1 and Port 2 (S_21) is measured up to 3 GHz with an Agilent N1996A network analyzer; the prototype EM characteristics are derived from the S-parameter measurements, both using [3] and by comparison with the EM database. In the present case, the identical resulting relative permittivity and loss tangent values provided by the two approaches are 1.23 and 0.0019, respectively. This is demonstrated in Figure 1b, where an excellent correspondence between the measured transmission coefficient S_21 and the corresponding best-fitted simulated result (obtained with ε_r = 1.23 and tanδ_ε = 0.0019) is clearly visible. Since various sorts of pile fabrics, considerably different from the sight and touch points of view, were provided by the same vendor, we also realize a second identical T-resonator on a second pile sample (labelled Pile 2), whose transmission coefficient is superimposed on the same figure: fortunately, almost identical EM properties pertain to this sample, too, since excellent correspondence with the EM-simulated behavior is again achieved. As regards the conductive fabrics, there are two alternatives: (I) conductive threads, created from single or multiple strands of conductive and nonconductive fibers; (II) electro-textiles, mostly created by incorporating conductive threads into fabrics by means of weaving and knitting. We resort to the second solution, in particular to the Global EMC shielding fabric as the conductive material. This choice is due to several reasons: the low surface resistivity (0.02 Ω/sq) of this material, its minimal fraying with respect to other fabrics, which allows precise and accurate cuts/slots, and the robustness of its thermo-adhesion to the pile substrate. The situation is more complex in the presence of a material with both permittivity and permeability greater than one. The unknowns to be estimated are four in this case (ε_r = ε' − jε'', tanδ_ε = ε''/ε', μ_r = μ' − jμ'', tanδ_μ = μ''/μ'), and a theory analogous to that of the quasi-transverse EM (Q-TEM) microstrip mode exploited in [3,4] is not present in the literature, to the authors' knowledge, and is currently under development. Due to the lack of knowledge of the variability range of the complex characteristics, the EM-based numerical approach previously adopted would have been too time-consuming in this case. For this reason, we resort to direct measurements of material samples by means of an Agilent E4991A RF Impedance/Material Analyzer in the band 1 MHz-3 GHz [9]. Table 1 reports the results of the measured parameter extraction and the corresponding best fitting with the Lorentz dispersive model of the EM tool [15] for the hexaferrite sample at our disposal: at the frequency of 868 MHz, where the design of the miniaturized patch antenna described in Section 4 is carried out, the adopted material exhibits a value of ε' ≈ 12 and a value of μ' ≈ 2, with a dielectric loss tangent ε''/ε' ≈ 0.01 and a magnetic loss tangent μ''/μ' ≈ 0.38. This results in a refraction index of approximately five.
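A minimal sketch of the database best-fit step (step III above) is given below: the measured |S_21| curve is compared with pre-simulated curves for candidate (ε_r, tanδ_ε) pairs, and the pair with the smallest RMS mismatch is retained. The variable names and the database format are assumptions introduced for illustration; the actual database was generated with the full-wave solver of [15].

import numpy as np

def best_fit_material(s21_meas_db, sim_database):
    # sim_database: dict mapping (eps_r, tan_d) -> simulated |S21| in dB,
    # sampled on the same frequency grid as the measurement.
    best_pair, best_err = None, np.inf
    for (eps_r, tan_d), s21_sim_db in sim_database.items():
        err = np.sqrt(np.mean((np.asarray(s21_meas_db) - np.asarray(s21_sim_db)) ** 2))
        if err < best_err:
            best_pair, best_err = (eps_r, tan_d), err
    return best_pair, best_err

For the pile sample discussed here, the retained pair would correspond to the values quoted in the text, ε_r = 1.23 and tanδ_ε = 0.0019.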
Design of a Wearable Multi-Layer Multi-Resonant Rectenna Recent technology trends in ultra-low-power microcontrollers and sensors pave the way to applications requiring small amounts of power (from a few μW to a few hundred μW). For this purpose, ubiquitously available radio frequency (RF) sources, operating at different bands, with unknown directions of incidence and polarizations, can be exploited by rectifying antenna (rectenna) systems to provide a solution to this problem [16,17]. However, the available amount of RF energy is low, even in highly humanized environments [18], and typically only a few μW of power can be provided by highly efficient rectennas. We thus imagine that these systems could be more suitable for "RF upon request" applications [19], still exploiting the sources typically present in all humanized environments, but in correspondence with a precise demand by the user (e.g., a nearby mobile phone call). From this point of view, a fully-wearable rectenna for technology garments can find its proper application [7,20]. The design of such a system is a delicate issue, since there are a number of challenging aspects in RF energy harvesting to be considered, such as the spatial/temporal variability of RF sources and the frequency of operation: thus, a useful rectenna must have a highly efficient antenna, operating at different bands, with unknown direction of incidence of the incoming field, and must efficiently rectify the received RF power. Where wearable devices are concerned, additional problems have to be faced: the effects of the body and of topology variation (e.g., layer bending and/or shift) on the system performance must be accounted for. Here, we describe the accurate design of a highly efficient, completely wearable tri-band rectenna, consisting of a compact multilayer structure resonating at the GSM 900, GSM 1800 and WiFi frequencies. Design Procedure The different design domains, first introduced in [6], that should be adopted to accurately define and dimension the building blocks of a harvesting system are schematically depicted in Figure 2.
Note that in this formulation, the RF harvesting system is expected to operate whenever RF energy is available and to store it for immediate or future usage in an efficient way, through a power management unit. Thus, the operating rates of the entire system span from the microwave spectrum, where the RF energy is located, down to the baseband, where the DC/DC converter operates. The key difficulty for the design of this class of systems is thus the ability to combine the results of the nonlinear rectenna regime, more suitable for the harmonic-balance analysis based on complex phasors, with the time-domain results needed for the correct dimensioning of the DC/DC converter. The correct combination of these domains is of fundamental importance in order to correctly account for the impedances of the different sub-blocks. At RF, the subsystem consisting of the receiving antenna, loaded by the rectifier, and of the antenna-rectifier inter-stage network needs to be designed in a steady-state nonlinear regime by means of electromagnetic theory and software tools typical of an RF design environment, such as Harmonic Balance (HB). At this stage, the network function to be optimized is the RF-to-DC conversion efficiency, given by the ratio η_RF-DC = P_RECT/P_AV (Equation (1)), where P_AV is the RF power available at the antenna location, while P_RECT is the rectified power at the rectenna output port, delivered to a fixed optimum load. At baseband, the transient design of a DC-DC converter, keeping the rectifier dynamically close to the optimum load condition, is carried out by transient analysis: the efficiency to be maximized in this case is a DC-to-DC efficiency (Equation (2) of [6]), defined as the ratio between a fraction (F < 1, ~90%) of the maximum energy stored in the output storage capacitor (E_HARV) and the energy delivered by the rectifier during the capacitor charging time T_C. For the RF nonlinear design, we refer to the situation depicted in Figure 3a, where the wearable rectenna is immersed in a multi-source environment. For any given RF source at an angular frequency ω_RF, we resort to EM theory [21,22] to effectively estimate the actual RF power available to the rectifier circuit (P_AV in Equation (1)). Under the assumption of a "plane-wave approximation" for the incoming RF field (E_i of Figure 3b), which normally holds for conventional communication systems, the Norton equivalent current source can be easily cast as in Equation (3) of [6], where E_H is the far-field vector at the same RF frequency radiated by the harvesting antenna (described by the EM-based admittance Y_H), when operating in the transmitting mode, powered by a voltage source of known amplitude U and internal resistance R_0. Here, r is the spatial vector defining the RF source direction of arrival in the receiver-referred spherical reference frame. It is worth noting that, in the case of harvesting upon request, the link between the RF source (e.g., a mobile phone) and the receiving harvester antenna can be "unconventional": this means that the "plane-wave approximation" may not be valid any more, due to violation of the far-field condition and/or the exploitation of a link direction different from the maximum one. In this case, the equivalent current generator can instead be obtained in a rigorous way by resorting to Love's equivalence principle [23] (Equation (4)), where the electric and magnetic fields of both the RF source and the harvester antenna are numerically evaluated on a plane Σ placed in between the two antennas. Formula (4) provides correct results irrespective of the antenna orientations and transmitter-to-harvester
distance and eventually takes into account the nearby objects affecting the link (such as the human body in Figure 3b) [6]. Note that by means of Equation (3) or Equation (4), polarization mismatch between the rectenna and the incident field is automatically and rigorously taken into account. By resorting to the circuit-level description of Figure 3c, a straightforward nonlinear HB optimization can be easily performed, focusing simultaneously on the frequency bands of interest. Moreover, the rectenna nonlinear regime due to the simultaneous incidence of multiple ambient RF sources is directly provided by a multi-tone HB analysis (as in the case to which Figure 3c refers). In the present design, we focus on the two GSM bands and the WiFi frequency band (900 MHz, 1,800 MHz and 2,450 MHz, respectively). Our architecture choice consists of a single multi-resonant antenna with a single rectifying section: this way, the unwanted EM couplings always present in multi-antenna solutions are avoided, and diode losses are minimized. The final engineered solution of the present rectenna is shown in Figure 4, where the adopted multi-layer layout is also evident [7]. Besides the pile substrate for the antenna, another grounded pile layer is adopted for better isolation from the human body. The only non-wearable portion is the yellow portion of Figure 4, consisting of the 0.1 mm-thick flexible Kapton substrate (ε_r = 3.4, tanδ_ε = 0.002) under the ground plane. Its area is reduced to a minimum in order to guarantee a more comfortable wearable solution: it only contains the antenna feeding network (matching network, phase shifter and power divider) and the rectifier circuit. The use of a single tri-band antenna, instead of a broadband one, represents the best choice in the case of low available power budgets, since resonant behavior at each frequency is synonymous with high antenna efficiency, a fundamental requisite for the maximization of P_RF. In this case, we adopt an annular ring planar antenna for the exploitation of higher-order modes, such as the TM12, having superior radiation properties [24]. In Figure 5a, the theoretical current distribution for the three selected modes (TM11, TM21 and TM12) is reported for a generic annular patch. The main difficulty is to exactly match the three desired frequencies to the three selected modes: this is impossible by only acting on the inner and outer radii dimensions; for this reason, we etch eight symmetric slots in the conductive fabric of the patch and use their dimensions as tuning parameters, thus achieving the following desired correspondence of the mode resonance frequencies: TM11 at 900 MHz, TM21 at 1,800 MHz, and TM12 at 2,450 MHz [8].
Figure 5c shows the final slotted annular ring topology, with inner and outer radii of 21 and 71 mm, respectively, obtained after a sequence of parametric full-wave analyses for different slot area values, while in Figure 5b the modelled surface currents at the three resonances are reported. In spite of the slotted surface, the current distribution is quite similar to the theoretical one. Another delicate issue is the shape and position of the coupling slot in the ground plane used for antenna excitation (the dashed shape in Figure 5c). Two coupling slots are needed for the circular polarization purposes of the two-port harvesting antenna, and the best tradeoff between good port matching and low port coupling is obtained with a "dog bone" shape [7]. The multi-band linear feeding network design is first carried out at the circuit level, then layout-wise by means of a fine-tuning EM analysis. In both cases, the actual load offered by the antenna is taken into account: the resulting three-port network on a flexible Kapton layer (see Figure 6b) consists of a broadband 90° phase shifter (for circular polarization purposes) and a power divider. The matching network topology (an impedance step, in this case) is obtained as the final result of the multi-band HB nonlinear optimization (see Section 3.2). The final layout of the multi-layered prototype is shown in Figure 6a. Due to the placement of the antenna on a jacket (Figure 3b), there is enough space for a large antenna ground plane: this improves the antenna directivity and reduces the human body effects. Results The performance of the two-port antenna in terms of scattering parameter behavior and radiation patterns at all of the frequency bands of interest is given in Figures 7a,b and 8, respectively. A satisfactory agreement between measurements and simulations is obtained in both cases. A notable discrepancy occurs in terms of the reflection coefficient at 900 MHz: the sharpness of the corresponding resonance makes it the most delicate operating bandwidth, highly sensitive to mechanical tolerances in the ground aperture shape. The simulation of the two-port antenna with in-quadrature port excitation and a back ground plane area of 25 × 25 cm² provides efficiencies of 61%, 54% and 85% in ascending frequency order and realized gains equal to 4.7, 4.9 and 9.1 dB, respectively. An axial ratio of less than 3 dB is obtained for the TM11 and TM12 modes over a wide elevation range; the TM21 mode is unable to provide circular polarization due to the exact superposition of the excited orthogonal modes (see Figure 5a). Figure 7a also reports the effects of an unpredictable, but probable, misalignment between the antenna and the Kapton layers. This could happen during the manual attaching procedure, despite suitable markers realized on the layers: a ±3 mm shift is the over-estimated range of uncertainty. The modeled scattering parameters shown in the figure guarantee a sufficient robustness of the rectenna design with respect to unwanted asymmetries: the maximum resulting shift is about 4% for the lower frequency, which is the most susceptible band, as measurements have demonstrated.
Two further simulations are carried out addressing bending and human body effects. Firstly, the antenna and the feeding layers are bent around a vacuum cylinder with a 150-mm diameter, corresponding to a typical adult chest. No significant variations with respect to the flat configuration are observed, provided that no misalignment occurs. Secondly, the cylinder is filled with a human-body-like material (ε_r = 53.3 and σ = 1.52 S/m); the presence of the 25 × 25 cm² back shielding fabric guarantees almost unchanged behavior in this case, too. Finally, a multi-band nonlinear optimization of the whole rectenna is carried out by the HB technique: design specifications are simultaneously given in terms of RF-to-DC conversion efficiency, at the three fundamental frequencies, by combining the frequency-dependent EM description of the antenna/phase-shifter with the nonlinear rectifying circuit. The latter consists of a single-stage, full-wave peak-to-peak RF-DC converter, as shown in Figure 3c: this simple rectifier topology, employing low-threshold-voltage diodes (Skyworks SMS7630), minimizes diode losses and represents the best choice for ultra-low-power applications [6,25]. At this stage, the design parameters are provided by the matching network layout: in this case, it simply consists of a microstrip impedance step. The power levels pertaining to the equivalent current generator of Equation (3) (or Equation (4)) span the low-power range typical of scavenging scenarios (from a few to a few hundred μW), with ω_RF equal to 900, 1,800 and 2,450 MHz for the three-band application under examination. Figure 9a finally provides the rectenna performance in terms of RF-to-DC conversion efficiency for the three operating frequencies, as a function of the power transmitted by a resonant dipole placed 30 cm away and in the maximum link direction. For the sake of clarity, the same figure also shows the corresponding RF available power abscissa axes: they clearly highlight the strong channel dependence on the operating frequency. Due to the adopted link distance, the field radiated by the dipole cannot be considered a uniform plane wave at the harvester location: for this reason, we resort to Equation (4) in the evaluation of Equation (5). The same figure also reports a good comparison with available measurements carried out in a real office scenario for typical low incident power levels. In more complex scenarios (i.e., non-line-of-sight links), the inclusion of the channel description in the CAD tool is mandatory in order to obtain realistic predictions [26]. The presented procedure allows us to simultaneously account for the presence of various RF sources by means of a nonlinear multi-tone HB analysis (as envisaged in Figure 3), where the different tones are represented as independent rectenna excitations: intermodulation products up to the third order are considered in the HB simulation. In Figure 9b, the rectenna output DC voltages resulting from the superposition of different sources (solid line) and from single sources are compared, again as a function of the transmitted power. The transmitted power values of the figure are associated with one excitation in the single-tone analysis and are equally distributed across the different tones in the multi-tone case. It is worth noting that, at low power levels, the deployment of the intermodulation products generated by the nonlinearities improves the rectenna conversion capabilities. At higher levels, this effect is negligible.
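As a rough illustration of how the curves of Figure 9 relate to each other, the sketch below converts a rectified DC voltage measured across a fixed resistive load into the RF-to-DC conversion efficiency of Equation (1). The load resistance used here is a placeholder, since the optimum load of the designed rectifier is not restated in this excerpt.

def rf_to_dc_efficiency(v_dc, p_available_w, r_load_ohm=5e3):
    # eta = P_RECT / P_AV, with P_RECT = V_DC^2 / R_load.
    # r_load_ohm is an illustrative value, not the optimized load of the paper.
    p_rect = v_dc ** 2 / r_load_ohm
    return p_rect / p_available_w

# Example: 0.1 V across the assumed 5 kOhm load with 10 uW of available RF power
# gives P_RECT = 2 uW and hence an efficiency of 0.2 (20%).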
Wearable Miniaturized Magneto-Dielectric Patch Antenna The use of wearable or implantable radiating subsystems for body area network applications has generated growing interest in the miniaturization of the most space-consuming component: the antenna. Many patch antenna miniaturization techniques have been proposed in the literature [27,28], but the corresponding reduction factor is unable to fulfill wearable/implantable applications. The main problem of this research field, which is still an open issue, is the unavoidable reduction of the radiation properties when exploiting electrically small antennas. Chip antennas on electrically dense ceramic materials (high ε_r) verify this assumption. Moreover, patch antennas on high-permittivity materials badly exploit the reduction factor provided by the reduced "guided wavelength" value (λ_g = λ_0/√ε_r). This is due to the fact that antennas whose radiation behavior relies on equivalent magnetic currents (as in the case of patch antennas) do not take advantage of an increased value of the permittivity. Conversely, an increased value of the permeability can be positively exploited by patch antennas [29]. Starting from these assumptions, we adopt an M-type barium-strontium hexaferrite Ba0.75Sr0.25Fe12O19 [30] for miniaturizing a patch antenna operating at 868 MHz [9]. The most significant advantage in using this kind of magnetic material as an antenna substrate is the high value of its ferro-magnetic resonance (FMR) frequency (above 10 GHz for the adopted sample): we can thus assume that our operating frequency band is sufficiently far from the FMR and that the material properties are far from varying strongly and rapidly. This has been verified by the measurements reported in Table 1. Antenna Design The choice of the antenna operating frequency is due to the wide deployment of the 868 MHz frequency for UHF-RFID applications in Europe. The sample of magneto-dielectric (MD) material we use has a disk shape without sharp edges, more suitable for wearable applications. Besides the consistent dimension reduction due to the high value of the material refractive index, we also make use of a standard antenna miniaturization technique: a shorting plate [31], i.e., an electric conductor wall directly connecting the antenna upper metallization to the ground plane, which allows us to halve the antenna length, thus reducing it to λ_g/4. Since the free-space wavelength is about 345 mm at 868 MHz, and the MD material refractive index is about five, the antenna's final length is ≈18 mm: this corresponds to a λ_0/20 antenna. The disk diameter is thus 33 mm, in order to allow enough space between the radiating edge of the patch (the non-short-circuited one) and the end of the disk. This way, an electrically small button-shaped antenna is obtained. The use of the shorting mechanism suggests adopting the so-called planar inverted-F antenna (PIFA) architecture [31], employing a coaxial cable as the feeding solution. Two problems arise from the previous technological choices: (I) the need to create a hole for the inner conductor of the coaxial cable inside the fragile MD material; and (II) the high permittivity of the MD material (ε′ ≈ 12) can significantly degrade the antenna radiating properties. An MD substrate with a thickness of 5 mm is thus selected as a good tradeoff between robustness and the desired antenna performance.
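The dimensioning quoted above can be checked with a few lines of arithmetic. The sketch below simply reproduces the guided-wavelength and quarter-wave estimates from the material parameters of Table 1; the exact resonant length also depends on fringing fields and on the feed position, which only the full-wave analysis accounts for.

import math

c0 = 299_792_458.0        # speed of light, m/s
f0 = 868e6                # operating frequency, Hz
eps_r, mu_r = 12.0, 2.0   # measured hexaferrite values at 868 MHz (Table 1)

lambda_0 = c0 / f0                # free-space wavelength, ~345 mm
n = math.sqrt(eps_r * mu_r)       # refractive index, ~4.9
lambda_g = lambda_0 / n           # guided wavelength, ~70 mm
pifa_length = lambda_g / 4        # shorted (PIFA) patch length, ~18 mm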
As regards the shorting wall, the ideal choice would have been a planar plate immersed in the MD material. This solution turns out to be impractical due to the fragile nature of the MD material: we thus adopt a curved shorting wall realized on the external face of the disk (see Figure 10a). The metallization for the antenna, the shorting plate and the ground plane is a 4 μm-thick silver film. The final antenna topology is the one reported in Figure 10a, where the position of the 50 Ω coaxial cable providing the best matching is also shown [9]. Figure 10b reports the simulated surface current at the operating frequency of 868 MHz: the expected behavior obeying the image theorem is obtained, corresponding to an almost zero current at the lower (radiating) edge and a maximum current at the upper (shorted) edge. Results The input port matching condition and the far-field performance of the realized prototype are compared with the corresponding full-wave simulation results in Figures 11 and 12, respectively. The comparison of the reflection coefficient behaviors at the cable port in Figure 11 demonstrates the effectiveness of our design and of the MD material parameter characterization: the correspondence between measured and modelled results is excellent over a wide frequency band. In particular, the antenna operating bandwidth is significantly wider than that of standard patch antennas: a relative value of almost 29% is obtained by assuming −10 dB as the acceptable reflection limit. This positive behavior is mainly due to the high magnetic losses of the MD material (tanδ_μ ≈ 0.38 at 868 MHz, from Table 1). These high losses represent the main drawback of the realized antenna: the simulated efficiency is equal to only 1.5%. However, this low value is typical of electrically small antennas. An improvement of the material synthesis procedure from the losses point of view will be a strategic issue in our future research. Figure 11b underlines the advantage of using our MD material instead of an only-dielectric (OD) one equivalent from the refractive index point of view. Here, the simulated reflection coefficient behavior of an identical patch built on a high-permittivity dielectric is given, for different feeding point positions. The high-permittivity OD antenna cannot be matched to the 50 Ω feeding cable. As regards the radiation performance, we exploit the foreseen wearable application of our button-shaped antenna by extending its ground plane. The first worn antenna measurement is carried out with a 10 × 10 cm² Global EMC-shielding fabric (the same used in the design of Section 3) behind the antenna prototype. This way, the broadside behavior of the antenna is preserved, as reported in Figure 12 for both the E- and H-planes of the antenna. Again, an excellent agreement between measurements and EM results is achieved, and this is true both for the simulation with and without background human tissue. The same figure also shows the interesting simulated result for the worn antenna without the back-mounted shield: the skin itself acts as an extended ground plane, thus maintaining the desired broadside radiation behavior; the (computed) directivity is only 1.5 dBi lower with respect to the case with the extended fabric ground plane, and the front-to-back ratio is reduced to 8 dB, but still compatible with wearable applications.
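The quoted ~29% fractional bandwidth can be extracted from a reflection-coefficient curve such as that of Figure 11a with the kind of post-processing sketched below; the −10 dB threshold is the one stated in the text, while the frequency and S11 arrays are placeholders to be filled with measured or simulated data.

import numpy as np

def fractional_bandwidth(freq_hz, s11_db, threshold_db=-10.0):
    # Return (f_low, f_high, fractional bandwidth) for the band where |S11|
    # stays below the threshold; assumes a single contiguous matched band.
    freq_hz = np.asarray(freq_hz)
    s11_db = np.asarray(s11_db)
    in_band = np.where(s11_db <= threshold_db)[0]
    if in_band.size == 0:
        return None
    f_low, f_high = freq_hz[in_band[0]], freq_hz[in_band[-1]]
    f_center = 0.5 * (f_low + f_high)
    return f_low, f_high, (f_high - f_low) / f_center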
Conclusions We have presented some promising solutions for truly wearable and energy-autonomous systems using non-conventional materials. First, to obtain successful design results, the EM characteristics of these materials need to be established over the operating frequency band. A robust procedure, making use of EM theory, full-wave numerical analysis and measurements, has been demonstrated, both for dielectric fabrics and for conductive fabrics used in place of traditional planar metallizations. A more complex procedure was needed to characterize the dielectric and magnetic losses in the case of magneto-dielectric materials. Then, a rigorous nonlinear circuit technique, including EM simulations, has been described in order to accurately account for the actual layout in the design of compact wearable RF transceivers. Accuracy in the optimization process is essential, especially when extremely low power budgets are involved. The integrated CAD approach described here combines all of these requisites, and its effectiveness is demonstrated in the design and measurement of two original radiating subsystems: a fully-wearable tri-band rectenna harvesting RF energy from humanized environments (covering the GSM and WiFi frequency bands) and a λ_0/20 UHF patch antenna on a new MD material. The conductive and non-conductive fabrics deployed in the rectenna project reveal excellent performance, comparable with traditional RF circuit materials. The synthesized MD substrate for the PIFA operating at 868 MHz justifies the promising exploitation of materials with relative permeability greater than unity for miniaturization purposes: future improvements in the MD material synthesis process, mainly from the magnetic losses viewpoint, pave the way for a new generation of electrically small and efficient antennas. Figure 1. (a) Photo of the wearable T-resonator prototype. (b) Measured and simulated transmission coefficient behavior vs. frequency. Figure 2. Block diagram of the multi-domain CAD tool for rectenna design. Figure 3. (a) Wearable rectenna in the presence of multiple incident RF sources. (b) Electromagnetic (EM) representation of the sources as EM fields interacting with the harvesting antenna (to be used in the evaluation of Equations (3) and (4)). (c) Circuit-level equivalent description of the tri-band wearable rectenna. Figure 5. (a) Theoretical and (b) simulated electric current patterns for the selected modes of the ring antenna. (c) Final layout of the tri-band annular ring antenna. Figure 6. (a) Photo of the multi-layered rectenna prototype. (b) Layout of the three-port feeding network; Port 1 is in DC, while Port 2 and Port 3 are the RF antenna feeding ports needed for ensuring circular polarization. Figure 7. (a) Measured and simulated reflection coefficient of the two-port ring antenna and simulated reflection coefficient dependence on pile-Kapton substrate misalignment. (b) Measured and simulated transmission coefficient of the two-port ring antenna. Figure 8. Measured and simulated radiation patterns of the two-port ring antenna at the three operating frequencies. Figure 9. (a) Conversion efficiencies of the whole wearable rectenna at the three operating frequencies. (b) Rectified output voltage at the three operating frequencies and in the contemporaneous presence of the three tones.
Figure 10. (a) Photo of the "button-shaped" patch antenna on a magneto-dielectric material. (b) Surface current distribution on the patch metallization at 868 MHz. Figure 11. (a) Measured and simulated reflection coefficient at the magneto-dielectric (MD) antenna port. (b) Reflection coefficient of an only-dielectric (OD) antenna with an identical refractive index, for varying coaxial cable positions. Figure 12. (a) E-plane and (b) H-plane normalized radiation patterns of the proposed MD antenna at 868 MHz, on a conductive fabric and on a body. Table 1. Measured and modelled values of the complex permittivity and permeability of the hexaferrite sample in the frequency band of interest.
2015-09-18T23:22:04.000Z
2014-08-18T00:00:00.000
{ "year": 2014, "sha1": "c98c9ccbe7fe1671da40d37ca3d00faa3de4ef2e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-9292/3/3/474/pdf?version=1408355165", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "c98c9ccbe7fe1671da40d37ca3d00faa3de4ef2e", "s2fieldsofstudy": [ "Engineering", "Materials Science", "Physics" ], "extfieldsofstudy": [ "Engineering" ] }
59359967
pes2o/s2orc
v3-fos-license
Sixty Years of Speech: A Study of Language Change in Adulthood Research on language change has been complicated and hindered by the problem of obtaining quality data. In many cases, the large volume of time required to collect recorded speech at different intervals, as necessary in lifespan studies, is prohibitive. Researchers further risk having participants drop out, leading to a limited pool of data. One way to avoid this is to use recordings available in the public domain that have been recorded for other purposes. The BBC broadcaster Sir David Attenborough is one of the few people who have had occasion to be recorded regularly over a great span of their lives. In this study, a selection of clips from wildlife documentaries that he has narrated furnishes the data for a glimpse into the possibilities of language change in adulthood. Received Pronunciation, the accent that Attenborough commands, is in the spotlight in this study. Two features of speech, namely, the presence and degree of t-glottalisation and the TRAP/STRUT vowel distinction, are examined in Attenborough's speech against a background of known changes in the general usage of Received Pronunciation. The aim of the study is thus to see if language change occurs within the speech of an adult individual, particularly one whose speech is almost iconic. His narration from the 1960s is compared with narrations from the 1980s and 2000s in a dataset spanning nearly 60 years with the aim of discerning any trajectories of change. Some patterns in his formant values for several vowels across the three year groups are also discussed to provide an idea of what sort of changes can occur in the course of nearly 60 years. The study ultimately finds limited change in the level of t-glottaling and only a slight movement of his TRAP/STRUT vowels towards each other between the narrations of the 1960s and the 1980s, with no perceptible change thereafter. The changes in community use of Received Pronunciation seem to affect him little. In terms of the overall vowel space, the trend seems to be towards a centering of most of the vowels, particularly the front vowels. Some plausible explanations for the limited amount of change are discussed in the article, which include Attenborough being seen as a steward of the accent as well as its utility to him in his position as a renowned broadcaster. The article also brings up the need for more research into the interface of gerontology and sociolinguistics, as the quite pronounced centering of the vowels may suggest natural age-related pronunciation effects. Sixty Years of Speech: A Study of Language Change in Adulthood Bei Qing Cham 1 Introduction Background Language change is a well-known linguistic fact, but the processes that lead to it and affect the degree and rate of change are still not fully understood. It is a complicated issue shaped by many factors, not all of which are easily identifiable. Wagner (2012:371) writes that "[h]uman languages arise through a combination of universal shared capacities (Chomsky 1957) and the social interactions of individuals and communities". The shared capacities sometimes amount to tendencies in the direction and shape of language change, which linguists have determined by looking at historical data of language change. While these linguistic factors tell us why language might change, they cannot themselves explain its processes, nor how these changes occur in real time, for which social factors may be more responsible.
Some social parameters such as social class, speaker age, and gender are recognized to have some bearing on change: they are generalizable to an extent, and may predict the course of change. Social class is perhaps one of the most well-known factors affecting language use. William Labov (2006 [1966]), in his famous study conducted in department stores in New York, isolated a connection between social class and usage of a more prestigious versus less prestigious variety. While not directly related to language change, which is our focus, the study established a connection between social class and language use as well as notions of register that continue to be key in any study of sociolinguistics.

In terms of age, it is generally accepted that change in the individual's speech is much more common and even expected in youth than in adulthood, where it is seen to stagnate or even fossilize. We now know that this is not definite, through research such as Raumolin-Brunberg's (2005) study of letters written by historical persons of various ages, in which most of the subjects exhibited some increased use of the variant under study over their lifetimes.

In many studies a gender effect has been found, although the effect seems to relate to how gender is socially constructed and understood within specific community settings. Deborah Cameron has tracked some developments in the understanding of gender effects on language change in linguistic research, coming to the conclusion that "[i]n recent years, researchers working on various aspects of gender and language change have been challenged to engage with the argument that gender, too, is a form of social and symbolic practice" (Cameron 2003:196). If gender, an aspect of social and personal life that the average person may expect to be apparent and stable, affects language use at the same time as being continually constructed by it, we are drawn to question what else might have more complex relationships with language change than might be apparent on the surface.

Alongside the more well-known socio-physical factors of social class, age, and gender are other probable factors affecting language change. Change in an individual's environment, such as movement to a new geographical location, moving from one stage of life to another, involvement in new communities with different dialects, or a change in social status are also likely to affect the acquisition or presentation of different variants from the ones the speaker learns to use in childhood. These are noticeably less simple to grade, with no clear binary boundaries, and may, as in the case of geographical movement and coming into contact with new communities, be linked, although not necessarily so. Lifespan studies, utilizing at least two data points from speakers at two different stages in their lives, provide perhaps the best possible data to gain an understanding of language change across the lifespan, allowing researchers to track individual changes in usage alongside those of the community. However, studies of this kind are rare, since the resources necessary for such research are understandably prohibitive.
Such lifespan studies are becoming more plausible with advances in audio technology and the building up of corpora of speech over long periods. Few speakers, however, have had reason to be recorded regularly over enough time for a comprehensive study of their speech changes to be conducted. One exception is the remarkable and unusual "7 and Up" series of footage recorded by Michael Apted, which has records of 14 individuals recorded at 7-year intervals across a good span of their lives. However, the data are also inevitably incomplete in some aspects: not all of the subjects have been tracked throughout their lives, and their backgrounds and life events are not recorded in extensive detail, making it difficult to relate the development in their speech to particular sets of factors. Sankoff's (2004) study of two such individuals, Neil and Nicholas, brought her to the conclusion that there were observable changes in their usage across time, but she attributes this to them both having "remarkable trajectories of individual linguistic enterprise" (Sankoff 2004:136). She thus regards them as being exceptional in the amount of language change they exhibit. She goes on to cite other research such as Brink and Lund's (1975, 1979) studies of Danish speakers, and Sankoff et al.'s (2002) investigation of Montreal French speakers, which conclude that change is unusual and that most speakers remain stable throughout their lives (Sankoff 2004).

Differing from the more periodic nature of Sankoff's (2004) study are two that make use of data collected more regularly over a long period of time in very specific situations. A study by Harrington et al. (2000) of the Queen's Christmas broadcasts, and another by Shapp et al. (2014) of Judge Ruth Bader Ginsburg of the New York City Supreme Court, make use of audio originally collected for other purposes, for broadcast or for court records, in each case analyzing the speech of a single eminent individual. Both studies relate their investigations to available information on the particular characteristics, position, and circumstances of the person in their specific situation. While such close investigation of an individual means the results cannot be easily and accurately generalized, it provides us with a much more in-depth understanding of language change on the individual level. To such ends, this study also examines the speech of another notable individual in the course of his 60-year-long career as a broadcaster, with the aim of being able to add to, and be compared against, the results of these studies.

Received Pronunciation

The Received Pronunciation (RP) accent examined in this study has, perhaps owing to its high status and visibility in the media, been the subject of many studies of language change. For a long time, it has been taken to be the "correct" or standard pronunciation in Britain and its colonies, being the variety spoken by the Queen and many other notable public figures as well as the accent of choice of the British Broadcasting Corporation (BBC) for much of its history. As a living accent in daily use it may not be immune to change from its established standards (in a prescriptive sense). Of interest in this regard is Harrington et al.'s (2000) study of the Queen's Christmas broadcasts, in which her vowel quality over three decades of broadcasts is analyzed in order to discern if this most iconic of RP speakers has been influenced by the speech of her subjects.
Aims

The present study therefore aims to contribute data toward the study of language change in adulthood and to gain insight into the processes and motivations of change from the particular circumstances of a single speaker. This is done through the transcription and analysis of certain phonological and phonetic properties of the speech of the British broadcaster David Attenborough. He has been chosen because a wealth of narrative speech data is available from his public broadcasting work over a very long period. Furthermore, as a representative of the BBC, whose career is or has been associated with his pronunciation, he is an interesting case study against the background of change that has been observed in the prestigious RP accent. His particular circumstances also make him a good subject for comparison with Harrington et al.'s (2000) study, since both of these subjects have a relationship to RP that can be considered an exaggeration of what is spoken among the general public.

Speaker and Data Information

The speaker in this study, David Attenborough, is an iconic English broadcaster, naturalist, and speaker of RP. He was born in West London in 1926, but grew up on a university campus in Leicester. His career as a broadcaster began at the BBC in 1954 with his narration of the series Zoo Quest, and has since spanned 60 years.

The data for this project has been taken from nature-related documentaries narrated by Attenborough, all of which were produced by or in association with the BBC. Nine different documentary episodes were selected from eight different series with the aim of representing three time periods in Attenborough's career, namely the mid-1950s to mid-1960s, the late 1970s to 1990, and 2000 to the present, henceforth known as the 1960s, 1980s, and 2000s, respectively. From each of these documentaries, an excerpt averaging 7 minutes and 44 seconds was taken for a total of nine short clips.

Each clip was transcribed using ELAN (Brugman and Russel 2004) and then put through FAVE-align (Rosenfelder et al. 2011) in order to align each sound segment with the transcribed text. Vowel formant values were then obtained using FAVE-extract (Rosenfelder et al. 2011), while consonantal variables were coded impressionistically using the handCoder script (Fruehwald 2011) on Praat (Boersma and Weenink 2014). The details of the clips are provided in Table 1.
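To make the downstream steps concrete, the following is a rough sketch, not the author's actual scripts, of how formant averages per year group might be computed from a FAVE-extract-style output table; the file name and the column names (clip_year, vowel, F1, F2) are assumptions introduced here purely for illustration.

```python
# Hypothetical post-extraction step: average F1/F2 per vowel and year group
# from a FAVE-extract-style table. File and column names are assumed.
import pandas as pd

formants = pd.read_csv("fave_extract_output.csv")

def year_group(year):
    """Map a clip's production year onto the three period labels used in the study."""
    if year < 1970:
        return "1960s"
    elif year < 1995:
        return "1980s"
    return "2000s"

formants["year_group"] = formants["clip_year"].apply(year_group)

# Mean and standard deviation of F1/F2 for every vowel in every period.
summary = formants.groupby(["year_group", "vowel"])[["F1", "F2"]].agg(["mean", "std"])

# TRAP (AE) and STRUT (AH) are the vowels compared in the sections that follow.
print(summary.loc[(slice(None), ["AE", "AH"]), :])
```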
Features under Study

This paper focuses on two specific features of pronunciation, namely t-glottaling and the TRAP/STRUT vowel distinction, which are known to have shifted across time in RP. Whether the shifts can be observed in a single prominent speaker within the same time period is the question explored in this study. T-glottaling refers to the realization of /t/ in certain environments as non-standard /Ɂ/ where the standard pronunciation is /t/. The incidence of the form has been found to have increased in RP over the course of the 20th century, with glottalized /t/ having been documented as a variant since the mid-20th century in environments where /t/ preceded an obstruent or sonorant in the following syllable or word. In the late 20th century, t-glottaling was found to also occur in RP in absolute final position preceding a pause, and in word-final position even where the following segment is not a vowel. Although t-glottaling is perhaps most famous in Cockney, where it often occurs in intervocalic word-medial position, this is not a known feature of RP (Wells 1997). In this study, /t/ has been coded with respect to two variants, non-/Ɂ/ and /Ɂ/, in order to determine the pattern of glottalization in Attenborough's usage.

The TRAP/STRUT vowel distinction is one that has been previously found to be getting smaller in RP (Wells 1982). The vowels in question correspond to /æ/ and /ʌ/ in the International Phonetic Alphabet, and AE and AH in ARPAbet, which was the required transcription code of the software employed in this study. The two will be analyzed separately before being compared and addressed together using normalized data obtained through FAVE-extract in order to eliminate differences resulting from anything other than actual shifts in formant values. It is widely agreed that /æ/ has become more open in the 20th century (Gimson 1966, Wells 1982), while changes in /ʌ/ are less definite (Bauer 1985).

This paper will also briefly look at non-normalized vowel formant values across the year groups in order to get a sense of the range of data obtainable from a single speaker's recordings spanning long periods, and to discuss the possibility of identifying age-related physiological changes and how these may impact on the overall perceived changes in a speaker's phonological features.
T-Glottaling

Taking as possible environments all word-medial and word-final instances of /t/, we find that throughout the data the incidence of t-glottaling is low (Table 2). The amount of t-glottaling ranges between 6% and 11%, so although it is never highly prevalent in Attenborough's speech, it seems that /Ɂ/ is a variable available to him. However, it is unclear from Table 2 if the differences in the level of t-glottaling observed translate to a significant change in his usage of the non-standard variant. Furthermore, there are many other independent variables outside of the time period that could have affected the rate of t-glottaling. A multivariate analysis was therefore conducted using Rbrul (Johnson 2014) in R (R Core Team 2014) to examine the effects of time period (year group), the location of /t/ in the word, the word class, word, preceding segment, preceding segment type (vowel or consonant), following segment, and following segment type (vowel, consonant, or pause); a simplified sketch of this kind of analysis is given at the end of this section.

The results of the multivariate analysis (as shown in Figure 1) suggest that the word, word class, position in word, time period, preceding segment, and following segment all have an effect on determining the probability that /Ɂ/ is used by Attenborough. First, the time period shows a significant influence: /Ɂ/ is more likely in the 1960s than the 1980s, although by the 2000s it is less apparent which way the trend is going. It is notable that the actual preceding and following segments do not seem to be a strong factor, possibly due to their large numbers. Factor weights for word-final position, modifier words (numbers, adverbs, and adjectives), and being followed by a consonant are high, suggesting that these are the environments in which most of the glottaling occurred.

To eliminate the effect that classification into year groups could have on the accuracy of the data analysis, the factor of year was also run through Rbrul. No significant relationships were found between the specific year and the features under analysis. In contrast, significant differences have been found between the year groups, meaning that year-on-year differences in Attenborough's usage were insignificant, but clear differences could be distinguished in his speech from different periods in his life.
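As a rough, non-authoritative analogue of the Rbrul analysis described above, a fixed-effects logistic regression could be sketched as follows in Python; Rbrul itself runs in R and can additionally treat the individual word as a random effect, which this simplified sketch omits, and all column names here are assumptions introduced for illustration.

```python
# Hypothetical fixed-effects analogue of the variable-rule analysis:
# logistic regression of glottal realization on the coded predictors.
import pandas as pd
import statsmodels.formula.api as smf

tokens = pd.read_csv("t_tokens.csv")                        # one row per /t/ token
tokens["glottal"] = (tokens["variant"] == "glottal").astype(int)

model = smf.logit(
    "glottal ~ C(year_group) + C(position_in_word) + C(word_class)"
    " + C(preceding_type) + C(following_type)",
    data=tokens,
).fit()
print(model.summary())                                      # factor effects on P(glottal)
```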
TRAP/STRUT Vowels

The number of instances of the TRAP vowel that were found in Attenborough's recordings were much greater in the 1960s data (n=146) than in the 1980s (n=63) or the 2000s (n=94), but there were enough instances in each case for a useful analysis. The average values and standard deviations of the formants of TRAP tokens plotted in Figure 2 show that the /æ/ of the 1960s is more front and slightly higher than the values of the 1980s and 2000s, so much so that it appears the later averages may be entirely outside the range seen in the 1960s (see Table 3). A t-test of the data corroborates this, with significant results (p<0.05) obtained for both formants between the 1960s and the 1980s. No significant difference was found between the formant values of the 1980s and the 2000s, and a comparison of the 1960s with the 2000s found that only the F2 value significantly differed (p<0.05) (see Table 4). The results indicate that while there has been a clear centering of this front vowel between the 1960s and the 1980s, the change appears to have been complete by the 1980s and does not extend further into the 2000s. The direction of its vertical movement is less clear, with what appears to have been a lowering between the 1960s and the 1980s, which was then reversed somewhat in the 2000s.

The STRUT vowel presented more difficulty, owing to the way that the vowels had been coded in earlier stages of the study. Since the ARPAbet utilized by FAVE-align did not have a separate representation for the phoneme /ə/, all instances of this vowel were collapsed together with the STRUT vowel AH, which normally represents /ʌ/. This means that among the data for the STRUT vowel is a considerable number of /ə/, or AH0, which Figure 3 shows us to be differently patterned. Only stressed instances of the STRUT vowel are considered in the following analysis. The number of instances of stressed /ʌ/, i.e., of AH1 and AH2, were more evenly distributed across the 1960s (n=97), 1980s (n=64), and 2000s (n=99) than TRAP. Unlike TRAP, the STRUT vowel presents very little change, and the formant averages for all three periods are within 50 Hz of each other, as Figure 4 illustrates. A t-test of the data values gives us the same result: none of the values among any of the years are significantly different from each other (p>0.05), with the exception of F1 between the 1980s and the 2000s (p<0.05) (see Tables 5 and 6), where there appears to have been a small but significant raising of /ʌ/.

The TRAP and STRUT vowels have been plotted in Figure 5 to establish their relative positions in space to each other. They are quite clearly distinct. It is therefore difficult to tell from this initial view if their relationship to each other has changed. In order to quantify the TRAP/STRUT distinction, I have used a simple mathematical measure treating the F1 and F2 formants as a plane such that a straight-line distance between two vowels in the same time point can be taken. This is represented in Figure 5 by the line linking the points of each year group.

The distance between the TRAP and STRUT vowels is 488 units in the 1960s, 338 units in the 1980s, and 325 units in the 2000s. This measure is also used in Fabricius (2007), where she accompanies it with a description of the angle of TRAP to STRUT. This second measure is not attempted here, where I seek only to establish if change is occurring, and if so, the direction it is occurring in.

Figure 6: Planar distance between TRAP and STRUT averages, across year groups.
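For reference, the straight-line measure described above is simply the Euclidean distance between the mean formant points of the two vowels in the F1-F2 plane (the article does not write the formula out, so this is a formalization of its verbal description rather than a quotation):

$$ d = \sqrt{\left(\bar{F1}_{\mathrm{TRAP}} - \bar{F1}_{\mathrm{STRUT}}\right)^{2} + \left(\bar{F2}_{\mathrm{TRAP}} - \bar{F2}_{\mathrm{STRUT}}\right)^{2}} $$

Applied to the year-group means, this is the quantity reported as 488, 338, and 325 units for the 1960s, 1980s, and 2000s, respectively.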
These planar distance values have been plotted in Figure 6. There appears to be a downward trend in the distance between the TRAP and STRUT vowels in that they appear to have got closer to each other, particularly between the 1960s and the 1980s. However, this only represents what could possibly be a pattern, and further testing would be required to determine if the decrease was of significance. What is certain is that Attenborough's TRAP and STRUT vowels have not coalesced and remain phonologically distinct.

Non-normalized Formant Comparisons for all Vowels

In this section, I give a short overview of the non-normalized vowel formant values as presented in Figure 7. The ellipses in the figure encircle the same vowel, or in the case of AH, ER, and OW, a group of vowels, throughout the time period. While not intended to provide any scientific measure, this allows us to visualize the range of movement of the vowels in formant space. Many vowels seem to have shifted considerably. While a comprehensive examination is not possible here, some general patterns are worth pointing out. First, the high- and mid-front to central vowels (IY, IH, EY, and EH) undergo much smaller changes than all the low and back vowels. The pattern within this group is such that the formant values of the 1960s fall between those of the 1980s and 2000s, where those of the 2000s are more central than the more fronted 1980s vowels. There seems to be no clear pattern linking the low and back vowel shifts.

Second, there seems to be a cluster of three vowels that are undifferentiated in the accent: ER, AH, and OW. Of the three, OW, which is a diphthong, forms a trend line from the outlying 1960s point to within the cluster, backing and lowering considerably over time to end up mixed with ER and AH.

Third, the vowel formant values of the 2000s are generally clustered nearer to the center than the values from other periods; this is true of all vowels except AW, UH, and OY.

Discussion

In line with the trend in RP, Attenborough's /æ/ has become more open, although only slightly and not conclusively, since the difference between his 1960s and 2000s production is not significant. This is in contrast to the findings of Harrington et al. (2000), whose study of the Queen's Christmas broadcasts found her /æ/ opening over a period of 30 years, following trends in wider RP usage. However, there is also little evidence here that /ʌ/ has fronted or indeed changed at all, although it seems that there has been a comparable decrease in the distinction between the two vowels.

Where RP has seen more t-glottaling we find only a generally low usage of /Ɂ/ in the data, likely because the register for narration is a formal one and the non-standard variant would therefore be unsuitable for use in broadcasting, where social expectations are likely to prefer the prescriptive standard. This does not, however, discount the fact that Attenborough could well have a less formal register in which more usage of the non-standard /Ɂ/ might occur, or indeed even be preferred. The study can therefore only claim that his use of the RP accent is relatively stable in terms of t-glottaling as compared to that of the wider community. In the cases where t-glottaling is evident in the data, there is no deviation from the usual RP preference for word-final glottaling over intervocalic glottaling.
None of these observations are unexpected, given Attenborough's position almost as a steward of the traditional accent features. Interestingly, a comparison with Harrington et al.'s (2000) study finds Attenborough to be arguably more conservative than the Queen, at least in terms of the vowel qualities of the TRAP and STRUT vowels. Although in both studies the style of the broadcast has remained fairly constant and both subjects are distinguished speakers of RP with far-reaching influence, Attenborough's speech is directly linked to his livelihood, while the Queen's seat, rather than her speech, is what imbues her with authority. A BBC broadcaster could plausibly be seen to be of lesser ilk if his accent is observed to falter and become altered by more widespread trends, whereas the Queen is highly unlikely to seem less royal for small changes in pronunciation that match the direction of changes in the general populace. In this way, perhaps, Attenborough has the greater impetus to maintain the accent in the face of change. Also, while the Queen's broadcasts are primarily for British citizens, Attenborough's documentaries have enjoyed a wide viewership worldwide, and it is possible that he may find the geographically neutral and widely recognized character of traditional RP to be a continued advantage outside of the UK.

In contrast, Shapp et al.'s (2014) study of Ginsburg finds her to be increasing her usage of a "more stigmatized" raising of the THOUGHT vowel at the same time as this seems to be in decreased use in the general population, unlike the Queen and Attenborough, who make minimal or no change toward the usage behaviors of their surrounding community. The authors suggest that Ginsburg's change patterns are indicative of her change in social position: as a judge in power she has no need to converge toward the norm. This is quite unlike the social circumstances of the Queen and Attenborough, who both produce their recorded material for public consumption. They are thus held more accountable for their speech, while linguistic capital is not as salient for a woman in Ginsburg's position. This comparison helps us recognize the multiplicity of forces that may be affecting the speech of the individual, and which cause them to change or to resist change.
With regard to the general vowel centering in the 2000s, I suspect, on an anecdotal basis, that this could be due to the processes of aging: Attenborough's enunciation in narration is distinctly less sharp in these documentaries, even to the naked ear. It is possible that aging has resulted in lessened dexterity of the tongue through physical and neurological degradation such that vowels are more often forcibly produced nearer the center of the mouth and the tongue moves less. Age-related speech differences are known to exist: Xue and Hao's (2003) investigation of the vocal tract in young adults and the elderly finds a significant increase in vocal tract volume and a consistent lowering of formant frequencies for many vowels in the elderly compared to young adults. Other studies have also found differences between the speech of younger and older adults; for example, Hoit and Hixon's (1987) study of age and speech breathing finds differences in lung capacity-related measures such as the number of syllables produced per breath, which no doubt affects the production and acoustic properties of speech. While it is not a new area of study, this interface between gerontology, biology, and linguistics will no doubt benefit from more study that enables us to better tease apart physiological changes from possibly sociolinguistic ones.

Seductive though the notion of uncovering age-linked vowel changes is, however, we cannot discount the possibility of confounds such as advances in recording technology and video quality affecting the non-normalized data, although the sound quality should logically get clearer rather than muddier.

Conclusion

Although this study has found limited change in an individual's speech corresponding to change in the direction of the community, there is no doubt that it can be found within the lifespan of a single individual, although its true causes may be difficult to discern. David Attenborough's individual patterns of language change could be due to a combination of changes in his social surroundings, psychology/attitudes, or physiology of age, as well as other changes in his environment that are not immediately available to the purview of the general public. Among other things, his marriage, the birth of his children, and his changing relationship with the BBC could register as major events, which might consequently have had some degree of effect on his language. The main idea is that we cannot actually discard any multitude of factors as implausible on the whole. What means nothing to one person could be life-changing to another and could therefore manifest itself in the speech patterns of an individual.

Separating the effects of each possible factor remains a goal that proves elusive, and only more studies of different speakers in different situations can help to tease apart the complicated linguistic, social, and physiological factors that together bring about the language change that we are able to observe.

As with Sankoff's (2004) study of Neil and Nicholas, what we have here is another speaker whose life has been quite different from the everyman's. Very few people have had occasion to regularly broadcast over a period of more than 60 years, and the exceptional Sir David Attenborough is one of them. It is hoped that through this very preliminary examination of only a fraction of his contributions to the genre of wildlife documentary, we are also able to glimpse a snippet of the barely explored, beckoning field of language change across the lifespan.
Figure 2: Formant averages for TRAP, categorized by year group.
Figure 4: Formant averages for STRUT ("AH"), categorized by year group.
Figure 5: Distances between TRAP and STRUT averages, categorized by year group.
Figure 7: All vowel averages by time period.
Table 1: Details of documentary excerpts.
Table 2: Variants of /t/ by time period.
Table 3: Averages and standard deviations for TRAP vowel formants, categorized by year group.
Table 4: Student's t-test of TRAP vowel formants, categorized by year group.
Table 5: Averages and standard deviations for STRUT vowel formants, categorized by year group.
Table 6: Student's t-test of STRUT vowel formants, categorized by year group.
STEM Approach: The Development of Optical Instruments Module to Foster Scientific Literacy Skill

STEM is an approach that is seen as helpful in online learning because it can facilitate students to learn 21st-century skills. This study aimed to develop a STEM-based module to improve scientific literacy skills in distance learning through e-learning. Because this research developed a STEM-based module, it is classified as developmental research. The research procedure follows the stages of the R&D method developed by Borg and Gall, including preliminary study, product planning, development, validation (by experts), revision, and field testing. Modules that were declared valid by the experts were field-tested using a one-group pretest-posttest design in an experimental class. The results show that the STEM-based module for the distance learning model, aimed at increasing students' scientific literacy skills, was judged valid with an average score of 4.28 from two experts. In addition, the application of the STEM-integrated teaching materials through e-learning has a moderate effect on students' scientific literacy skills, since the N-gain score was 0.6.

INTRODUCTION

The 21st century is an era in which technological development is very rapid and has caused significant impacts, especially in the education sector. Rahayu (2017) therefore stated that 21st-century skills are necessary to ensure that students are able to overcome the century's challenges; these skills consist of digital-age literacy, creative thinking, effective communication, and high productivity. Digital-age literacy, in line with NCREL, comprises basic, scientific, economic, technological, visual, information, and multicultural literacy (van Laar et al., 2017). Scientific literacy is one of the crucial competencies for increasing knowledge and problem-solving skills (Nurtanto et al., 2018). Scientific literacy is directly connected with understanding the scientific concepts and processes required for personal decision-making, participation in civic and cultural affairs, and economic productivity (Rusilowati et al., 2016). Scientific literacy needs to be instilled in the current generation because it can direct them to have strong scientific thinking and attitudes and to communicate effectively with the general public (Dewi et al., 2019). In PISA (Programme for International Student Assessment), scientific literacy encompasses three competencies: recognizing scientific issues (problems), explaining phenomena scientifically, and using scientific evidence.

The Indonesian government has carried out various policies in order to improve students' scientific literacy skills, including the change of the curriculum from the KTSP to the 2013 curriculum and the application of Higher Order Thinking Skills (HOTS) question types in the National Examination. However, the results are not very encouraging: according to the PISA findings in 2018, Indonesian students' scientific literacy scores are still low. The average Indonesian score in scientific literacy is about 396, still below the benchmark score of 489. In the science assessment, Indonesia ranks ninth from the bottom compared to the other OECD countries (OECD, 2019).
This prompted the government to issue a new policy eliminating the National Examination and replacing it with the Minimum Competency Assessment, or AKM (Asesmen Kompetensi Minimum), starting in 2021. As stated by the Minister of Education and Culture, Nadiem Makarim, quoted from kompas.com in 2019, the AKM is an assessment that measures the minimum skills students need in order to learn, and is a simplification of the highly complex National Examination. The material consists of only three parts, namely literacy, numeracy, and the strengthening of character education (Muliani et al., 2021). The literacy and numeracy material in the AKM questions refers to PISA. Given that the AKM questions measure students' scientific literacy, teachers should train scientific literacy skills in science learning so that students are ready to face the AKM. However, the world is currently being hit by the Covid-19 (Coronavirus Disease 2019) pandemic, and because of it many sectors in Indonesia have been affected, the educational sector being no exception. During this pandemic, the Minister of Education and Culture issued circular number 36962/MPK.A/HK/2020, which obliged all learning to be carried out online. However, some students consider online learning boring (Oliveira et al., 2018), and it often focuses only on students' conceptual understanding (Picciano, 2017); teachers rarely train students' scientific literacy skills.

The topic of optical instruments is one of the materials covered in high school physics learning, under basic competency 3.11 (examining how optical instruments work by applying the properties of light reflection and refraction through mirrors and lenses) and 4.11 (creating works that use the principle of reflection or refraction in mirrors and lenses). The concepts of optical instruments are closely related to everyday life. Still, various educational studies show the difficulty and obscurity of the topic of light and optical instruments (Galili & Hazan, 2000). The concepts are complex and difficult, leading to student misconceptions (Ling, 2017). In addition, based on the basic competencies in the 2013 curriculum, students are expected to understand the concepts and to work on making a simple optical device. However, most teachers only focus on the cognitive realm (Anindya & Wusqo, 2020), so the basic competence in skills is rarely achieved.

To address the problems of online learning that does not train scientific literacy skills and of students' lack of skill in making an optical instrument product, something is needed that can make students active in learning and train scientific literacy skills, namely a STEM-based module. The application of STEM-integrated teaching materials can increase scientific literacy competencies (Widayoko et al., 2018). STEM learning is seen as an approach capable of providing significant changes in the 21st century (Widya et al., 2019). Scientific literacy skills, which are part of 21st-century skills, can be achieved by using STEM-based science learning (Afriana et al., 2016; Ismail et al., 2016; Wahyu et al., 2020; Lestari et al., 2021). Hacıoğlu et al. (2016) stated that to increase the interest of the young Indonesian generation in technology, engineering, science, and mathematics, learning in schools must be carried out according to educational patterns that include the STEM aspects, namely science, technology, engineering, and mathematics.
In the development of the 21st century, appropriate learning is learning with the STEM approach, in which technology, engineering, science, and mathematics are integrated into the learning that is carried out (Widarti et al., 2020). Meanwhile, Kelley and Knowles (2016) define STEM as an approach that combines two or more of its fields, namely technology, engineering, science, and mathematics. Through this approach, the learning process addresses not only science but also its practice in real life. In order to help students learn about optical instruments more easily with the STEM approach, a module is needed. The module itself is a teaching material that trains students' independence in learning (Risdianto et al., 2020). Several previous researchers have stated that a module integrated with the STEM approach supports the achievement of scientific literacy skills (Prasetyo et al., 2021). One of the themes of a STEM-integrated module that has been studied is the smartphone microscope, assessed by measuring students' understanding of concepts (Dewati et al., 2019). In this study, the module was developed on the theme of a smartphone projector made from cardboard to train students' scientific literacy. Therefore, the purpose of the study is to describe the STEM-based module and to explain the effect of the STEM-based module implemented in e-learning on students' scientific literacy skills.

Research Design and Procedures

This is developmental research, as it develops a STEM-based module with the R and D (Research and Development) method of Borg and Gall (Buchori & Setyawati, 2015). There were two main objectives of the development research, namely developing the product and testing the product's effectiveness in order to achieve the goals. This study's research and development procedures include the preliminary study, design, development, validation, product revision, field testing, and the final product. The preliminary study was used by the researchers to analyze needs, review the literature, and identify the factors that cause problems, establishing the need to develop the STEM-based module. Next, the researchers determined the design of the STEM-based module, including its layout, components, materials, and drawings, which were discussed with the other research members. Product development is the stage of making the STEM-integrated module as well as the scientific literacy tests. Two experts then validated the modules and scientific literacy tests that were compiled. The final product was then subjected to field trials through e-learning involving 30 students of class XI-D IPA in the 2020/2021 academic year to determine the effect of the STEM-based module on students' scientific literacy. The research design used when trying out the module in the classroom is described as follows.

Experiment class: O1 X O2 (O1 = pretest, X = treatment, O2 = posttest)

In this study, the students' scientific literacy skills were measured at the field testing stage two times, namely before and after the implementation of the STEM-based module.

Instruments

This study used a scientific literacy test as the instrument, prepared based on three scientific literacy indicators (scientific reasoning, scientific inquiry, and problem solving) and on the basic competencies of the optical instrument material. The grid of the scientific literacy skills instrument is shown in Table 1. Before the scientific literacy test instrument was used in this research, it was validated by two experts.
Data Analysis

The criteria written in Table 2 were used to analyze the validation results of the module and the scientific literacy test (the instruments). Besides providing the grades, the experts also offered comments for module improvements.

RESULTS AND DISCUSSION

The following sections present the results and discussion of this study, consisting of the feasibility of the developed STEM-based module and the clarification of the module's effectiveness for scientific literacy skills.

The Feasibility of the Module

The research was conducted at one of the well-known private high schools in Sidoarjo, in the even semester of the 2020/2021 school year. The analysis stage was carried out by analyzing the basic competencies of the physics subject to be integrated with the STEM approach. The basic competencies selected were 3.11 (analyzing how optical instruments work using the properties of light reflection and refraction through lenses and mirrors) and 4.11 (applying the principles of reflection and refraction in lenses and mirrors to create works). Basic competencies 3.11 and 4.11 were broken down into six indicators, which were then analyzed with reference to the disciplines of technology, engineering, science, and mathematics in the learning to be designed, as can be seen in Table 3.

The developed module contains material on optical instruments, including the eye, cameras, loupes, projectors, microscopes, and telescopes. Each discussion of an optical instrument starts with the components of the instrument, the function of each component, the working mechanism of the instrument, and the calculation of the magnification of the image it produces. The developed module also contains a student worksheet on making a simple optical instrument, namely a smartphone projector from cardboard. Each section of the student worksheet integrates the STEM approach, that is, the disciplines of technology, engineering, science, and mathematics. The STEM attributes in the module are shown in Table 4. The scientific literacy skills of students who receive STEM learning will naturally increase because they become more literate in STEM aspects (Khaeroningtyas et al., 2016).

After the STEM-based module on optical instruments had been developed, an expert validation test was conducted. The aspects assessed in the expert validation step were the content of the material, the language, and the appearance. The results of the validation are displayed in Table 5. Based on the validation results in Table 5, the three aspects, namely substantial content, language, and appearance, received scores of 4.35, 4.00, and 4.50, respectively. The average validation score is 4.28, so it can be stated that the validated STEM-based module is categorized as valid, or suitable to be implemented in online learning.

The aspects assessed in the feasibility component of the material content are material coverage, material accuracy, up-to-dateness, stimulation of curiosity, and the development of scientific literacy skills. These aspects concern how the module presents concepts scientifically, has novelty, and trains scientific literacy skills. According to Puspitasari et al. (2019), teaching materials should have the following characteristics: (1) presenting factual, conceptual, and procedural learning materials; (2) integrated; (3) student-centered; and (4) oriented to scientific literacy.
Students' scientific literacy is significantly improved in each domain through the implementation of quality science textbooks in teaching and learning activities (Sinaga et al., 2017). The second component of feasibility was language, in terms of the accuracy of language use and conformity with the General Guidelines for Indonesian Spelling. Based on the score obtained on the language aspect, it is known that the STEM-based module uses correct Indonesian language rules. According to Panjaitan et al. (2021), accurate language in a textbook ensures that no double interpretation occurs and that the information can be learned and understood by the readers. Sinaga et al. (2017) also stated that textbooks with high readability facilitate students in processing and understanding the information or content of the textbooks easily. When students can easily understand the information in the module, it helps them learn scientific literacy skills.

The last component of feasibility was the appearance of the module. In this component, there are three indicators, namely the cover design, the systematic arrangement of the module, and the clarity of the display of images and tables. The first indicator is the presentation of the cover design. The cover design of this module combines colors, images, shapes, and letter sizes that match and describe the contents of the module. Muswita et al. (2020) stated that a harmonious combination of colors, letters, and cover images can provide an overview of the content so that it can attract readers' interest. The second indicator is the systematic arrangement of the module. This module was arranged sequentially from the cover, table of contents, and pictures, to the information about optical instruments, the STEM-integrated student worksheets, and the references. The sequential arrangement of the module components aims to make it easier for readers to build up their thinking while reading the module. This is in line with Rizawayani et al. (2017): when reading, the reader's capacity to take in information or knowledge is built by a systematic presentation. The last indicator is the clarity of the display of images and tables. The display of pictures and tables has to be understandable so that learning is carried out effectively (Paramita et al., 2018). The validated modules were further corrected based on the experts' suggestions, which are displayed in Table 6, including, among others: (3) clarifying the image formation on the loupe; (4) enlarging the projector image size; and (5) changing the phrase "under a microscope" to "using a microscope".

Students' Scientific Literacy

An N-gain score analysis was performed to determine the effect of the STEM-based module on students' scientific literacy. Before the N-gain test, the data obtained were tested for normality. According to the Kolmogorov-Smirnov test, the significance value was 0.218, greater than 0.05; in other words, the data were normally distributed. Therefore, this research used the N-gain to measure the improvement of students' scientific literacy skills. The pre-test and post-test scores of students on scientific literacy skills are presented in Table 6. The results of the scientific literacy test illustrate the extent to which students master the optical instrument material.
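For readers unfamiliar with the measure, the normalized gain here presumably follows the standard Hake definition (the article does not spell the formula out), computed from the pre-test and post-test scores relative to the maximum possible score:

$$ \langle g \rangle = \frac{S_{\mathrm{post}} - S_{\mathrm{pre}}}{S_{\mathrm{max}} - S_{\mathrm{pre}}} $$

Under the usual interpretation, $\langle g \rangle < 0.3$ is a low gain, $0.3 \le \langle g \rangle < 0.7$ is moderate, and $\langle g \rangle \ge 0.7$ is high, which is consistent with the reported score of 0.6 being described as a moderate effect.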
Regarding the pre-test and post-test results revealed in Table 6, it can be seen that the STEM-based module has a moderate effect in helping students master scientific literacy skills. This study measured several scientific literacy indicators, namely scientific reasoning, scientific investigation, and problem-solving. From the students' pretest scores in Table 5, it can be seen that the scores of the three classes were low. In other words, before the implementation of learning with the STEM approach, students' scientific literacy skills were in the low category. Students were less able to scientifically explain a phenomenon related to optical devices, to solve problems about the implementation of optical devices in everyday life, and to identify the problems and variables to be studied in an experiment. In contrast to the pre-test results, the mean post-test scientific literacy scores in the experimental class and in replication classes 1 and 2 show that the students' scientific literacy skills are decent. Students were able to determine the exact solution of a given optical instrument problem, provide reasons in accordance with scientific concepts, and determine the research problems and variables in an experiment to find the magnification of optical instruments. This shows that the STEM-based module implemented in e-learning can help students master the concepts and skills of scientific literacy in the optical instrument material.

Figure 1. Designing the projector from cardboard.

The STEM-based module, implemented in e-learning, plays a crucial part in training students' scientific literacy skills. Apart from providing scientific concepts about optical instruments, the module also facilitates students in practicing being engineers. Students are asked to design a simple projector made from cardboard and based on a smartphone (Figure 1). Based on the designs that have been made, students are then asked to make the product and test whether the projector can function properly. In other words, the projector is said to be working if it is capable of producing a clear and enlarged image. Students are trained in problem-solving abilities at this stage, because when the projectors that are made are not able to produce a clear image, students will try to find alternative solutions so that the projectors work well. After the projector is able to produce a clear and enlarged image, students are required to calculate the magnification produced by the projector and determine the focal length of the convex lens used; the standard relations involved are sketched at the end of this subsection. Listiana et al. (2019) explained that the STEM learning approach can facilitate students in developing their scientific literacy skills because knowledge of the engineering design procedure is given to the students. Afriana et al. (2016) also revealed that in PjBL-STEM learning, students were directed to design and make the project, use materials and tools, arrange solutions, and calculate the results mathematically, which influenced students' scientific literacy skills. In STEM education, students become scientifically literate and able to carry out problem-solving procedures when they have high problem-solving skills, which in turn affects their scientific literacy.
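As a point of reference, the magnification and focal-length calculations mentioned above presumably rest on the standard thin-lens relations (the worksheet itself is not reproduced in the article, so this is an assumption about the underlying physics rather than a quotation from the module): for object distance $s_o$, image distance $s_i$, and focal length $f$ of the convex lens,

$$ \frac{1}{f} = \frac{1}{s_o} + \frac{1}{s_i}, \qquad |M| = \frac{h_i}{h_o} = \frac{s_i}{s_o}, $$

so a student who measures the phone-to-lens and lens-to-screen distances of the cardboard projector, or the image and object heights, can obtain the magnification and then solve for the focal length of the lens.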
Meanwhile, Idawati et al. (2019) explained that learning through an inquiry-based authentic learning approach in a STEM program yields higher literacy skills for students who have low problem-solving abilities than learning using the conventional concept.

CONCLUSION

From the discussion of the research results, it is concluded that the validity and feasibility of the STEM-based module on optical instrument material used in distance learning (e-learning) were declared by the experts. Moreover, the implementation of the STEM-based module on optical instrument material through e-learning has a moderate effect on students' scientific literacy skills.

RECOMMENDATION

In further research, STEM-based science teaching materials need to be developed for other materials. The STEM approach is considered very suitable for both online and offline learning. Moreover, one of the main obstacles to implementing STEM in distance learning is that some students do not work on the projects due to a lack of motivation. Therefore, to overcome this weakness of distance learning (e-learning), motivation from teachers is also necessary to ensure that students participate actively in learning.
The role of allopregnanolone in depressive-like behaviors: Focus on neurotrophic proteins

Allopregnanolone (3α,5α-tetrahydroprogesterone; pharmaceutical formulation: brexanolone) is a neurosteroid that has recently been approved for the treatment of postpartum depression, promising to fill part of a long-lasting gap in the effectiveness of pharmacotherapies for depressive disorders. In this review, we explore the experimental research that characterized the antidepressant-like effects of allopregnanolone, with a particular focus on the neurotrophic adaptations induced by this neurosteroid in preclinical studies. We demonstrate that there is a consistent decrease in allopregnanolone levels in limbic brain areas in rodents submitted to stress-induced models of depression, such as social isolation and chronic unpredictable stress. Further, both the drug-induced upregulation of allopregnanolone and its direct administration reduce depressive-like behaviors in models such as the forced swim test. The main drugs of interest that upregulate allopregnanolone levels are selective serotonin reuptake inhibitors (SSRIs), which present this neurosteroidogenic property even at lower, non-SSRI doses. Finally, we explore how these antidepressant-like behaviors are related to neurogenesis, particularly in the hippocampus. The protagonist in this mechanism is likely the brain-derived neurotrophic factor (BDNF), which is decreased in animal models of depression and may be restored by the normalization of allopregnanolone levels. The role of an interaction between GABA and the neurotrophic mechanisms needs to be further investigated.

Introduction

Depression (also referred to as 'major depression' or 'major depressive disorder') is a highly prevalent mental illness that is estimated to affect nearly four percent of the world population and is the leading cause of disability worldwide (Rehm and Shield, 2019). It is clinically characterized mainly by its core symptoms of depressed mood and anhedonia, but a wide array of accompanying secondary symptoms render depression a rather heterogeneous disorder regarding its phenotype (Bentley et al., 2014). This heterogeneity is also reflected in its etiology, which is likely responsible for the disappointingly low success rate (around 50%) of widely prescribed antidepressants that act by increasing brain monoamine levels when these drugs are compared against placebo in clinical trials (Vries et al., 2018). Thus, it becomes evident that additional neurotransmitter systems are deeply involved in depression, and that molecular targets other than monoamine modulators must be pursued in order to achieve a more complete efficacy in the pharmacological treatment of depression.

Evidence of the relationship of the GABAergic system with mood disorders dates as far back as 1980, when treatment with valproic acid was reported to show positive effects for bipolar disorder (Emrich et al., 1980). In fact, GABA levels are diminished in the brain of depressed patients (Sanacora et al., 2004), and the stimulation of GABAergic transmission has been proposed as a novel strategy for the treatment of depression, particularly through stimulation of the GABA type A receptor (GABA A R) (Lüscher and Möhler, 2019). The GABA A R is one of the main receptors for GABA and consists of a ligand-gated ion channel that hyperpolarizes the postsynaptic neuron when activated (Chua and Chebib, 2017).
Post-mortem studies have demonstrated epigenetic alterations in the expression and resulting composition of GABA A Rs in suicidal individuals (Poulter et al., 2008; Yin et al., 2016), while in vivo imaging experiments have revealed functional dysfunctions of GABA A Rs in the brain of depressed individuals (Klumpers et al., 2010). Neurosteroids, endogenous molecules synthesized in the central nervous system from cholesterol, act as positive allosteric modulators of GABA A Rs (Baulieu et al., 2001), placing this group of substances in a prominent position regarding the development of novel pharmacotherapies for depression. Extensive research has been conducted in this field for the last 20 years and has recently culminated with the approval of brexanolone, an intravenous formulation of allopregnanolone, as a new strategy for the treatment of severe postpartum depression by the United States Food and Drug Administration (Meltzer-Brody et al., 2018; Scott, 2019).

The neurosteroid allopregnanolone (3α,5α-tetrahydroprogesterone, often abbreviated as 3α,5α-THP) presents a particularly high potency in positively modulating both synaptic and extrasynaptic GABA A Rs (Carver and Reddy, 2013). Like other neurosteroids, its synthesis from cholesterol begins in the mitochondria with the cleavage of the cholesterol side chain, which gives origin to the neurosteroid precursor pregnenolone. In the cytoplasm, the action of 3β-hydroxysteroid dehydrogenase (3β-HSD) converts pregnenolone to the widely distributed steroid hormone progesterone, which can then be metabolized to allopregnanolone by the successive action of two enzymes: 5α-reductase and 3α-HSD (Mellon et al., 2001). Importantly, the synthesis of allopregnanolone is downregulated in depressed individuals, as evidenced by its diminished levels in the cerebrospinal fluid (CSF) (Uzunova et al., 1998) and plasma (Schüle et al., 2006).

A significant portion of the research regarding the antidepressant effects of allopregnanolone has been conducted in experimental animals. More importantly, these preclinical studies allowed the exploration of specific mechanisms of action by which allopregnanolone might exert its antidepressant effects. In addition to detailing its interaction with GABA A Rs and the subunits to which it binds with higher affinity, many studies provide valuable insights into the mechanisms by which neurogenesis is related to depressive manifestations and to the antidepressant effects of allopregnanolone and other antidepressants, with the brain-derived neurotrophic factor (BDNF) as the main agent (Nin et al., 2011). These studies in animals took advantage of the possibility of measuring or infusing allopregnanolone in key regions of the limbic system and generated an extensively rich literature on the physiopathological and therapeutic role played by allopregnanolone in depressive-like behaviors across several experimental models of depression.

Taking this rationale into account, this review presents and discusses studies that explore the role of allopregnanolone in depressive-like behaviors in rodents. We examined reports of antidepressant-like effects of exogenous allopregnanolone or of its regulation in several animal models of depression. Furthermore, we explore the evidence that links the depression-modulating properties of allopregnanolone with neurogenesis, particularly as mediated by the neurotrophic protein BDNF.
Brain allopregnanolone levels in animal models of depression Several animal models of psychiatric disorders have used rodents to study the role of allopregnanolone in emerging depressive-like behaviors. A common strategy to reach this goal has been to induce a depression-like state in laboratory animals and quantify the levels of allopregnanolone in brain regions of interest (i.e., that integrate the neurocircuit known to be involved in the regulation of mood), comparing them to non-intervened controls. These models are based on what is known of the etiological aspects of depression, namely internal susceptibility (genetic construct) and external agents (environmental stressors). Though some models have been generated based on the genetic/heritable aspect of depression, most are based on the induction of a depression-like state through the application of stressors. The successful induction of this depression-like state is frequently confirmed by applying behavioral tests that measure ethological manifestations analogous to depressive symptoms. In this section, we review (Table 1) and discuss the most common models used to these ends and what they reveal about the role of allopregnanolone in the neurobiology of depression.
Abbreviations and legend for Table 1: increases (↑); decreases (↓); does not change (≌); followed by (→); tendency (tend.); whole brain (WB); cerebral cortex (CTX); hippocampus (HPC); dentate gyrus (DG); prefrontal cortex (PFC); frontal cortex (FC); amygdala (AMY); basolateral amygdala (BLA); striatum (STR); cerebellum (CBL); olfactory bulb (OB); olfactory bulbectomy (OBX); ovariectomy (OVX); orchiectomy (ORX); testosterone propionate (TP); diabetes mellitus type 1 (DM1); chronic unpredictable stress (CUS); high ultrasonic vocalization line (H-UVL); low ultrasonic vocalization line (L-UVL); social isolation (SI); group housed (GH); foot-shock stress (FSS); single prolonged stress (SPS); time-dependent sensitization (TDS); handling (HDL); maintenance (maint.); contextual fear conditioning test (CFCT); elevated plus maze (EPM); sucrose preference test (SPT); novelty-suppressed feeding test (NSFT); forced swim test (FST); open field test (OFT); resident-intruder test (RIT); defensive burying test (DBT); social interaction test (SIT); staircase test (SCT); not applicable (N/A); not informed (N.I.); *except if noted otherwise. Unless otherwise specified, changes in allopregnanolone levels refer to all interventions used (model or drugs), doses, or brain areas analyzed in each study.
Forced swim test Most of the behavioral data that will be presented in the following sections come from the forced swim test (FST), an animal model widely used to detect antidepressant-like activity across different classes of both potential and well-established antidepressant agents. The FST is based on the quantification of the time spent immobile by the rodent while being forced to swim (in rats, 24 h after a previous, longer exposure), which is interpreted as a depressive-like behavior (Detke and Lucki, 1995;Porsolt et al., 1977). It has excellent predictive validity and reproducibility, as well as a significant translation between the clinical potency and the potency of antidepressants detected in the test (Slattery and Cryan, 2012). 
Although this model fails in aspects of face validity (e.g.: the detection of acute antidepressant effects of monoamine modulators seldom translates to what is observed in the clinical setting) (Nestler and Hyman, 2010), longer immobile behaviors are seen in animal models with concomitant depression-like states such as diabetes (Gomez and Barros, 2000), and a heritable genetic component has been proposed to influence depressive-like manifestations (Almeida et al., 2018). Interestingly, the initial stress induced by the forced-swim session is accompanied by an acute increase in brain allopregnanolone that lasts from 10 min to 2 h after a 10-min exposure, as measured in whole or frontal cortex of male rats (Purdy et al., 1991;Vallée et al., 2000). Further experiments corroborated this finding by reporting an increase in the 5α-reductase enzyme in the prefrontal cortex of male rats (Sánchez et al., 2008). Brain allopregnanolone is known to surge around 30 min after exposure to acute stressors including CO2 inhalation (Barbaccia et al., 1996), fixation stress (Higashi et al., 2005) and foot-shock (Pisu et al., 2013;Serra et al., 2002), which is likely what drives the observations in the FST. Though the aforementioned findings refer to male rats only, some studies indicate that there seems to be a sex influence, but they point to contradictory conclusions. Pisu et al. (2016) were able to replicate the foot-shock findings in male but not female rats, while Sze et al. (2018) found the opposite, namely a lack of effect in males and an increase in brain allopregnanolone in females after a 2-min FST. Finally, the stress-induced increase in allopregnanolone levels was not replicated in the brains of male mice after exposure to the FST, with its levels actually decreased in selected limbic brain areas of these animals (Maldonado-Devincci et al., 2014). Taken together, these results demonstrate that allopregnanolone levels rise in the brain of male rats after exposure to the forced swim test, though this is significantly less certain for female rats or mice. After the initial surge, allopregnanolone levels tend to return to those of unstressed controls, at least in those studies with longer endpoints of up to 2 h (Barbaccia et al., 1996;Purdy et al., 1991), though any further modulations remain unknown. Thus, the FST is a reliable tool to study the antidepressant-like effects of allopregnanolone in the preclinical setting (as reviewed in Section 1.8). However, it is essential to point out that the FST is not a model of depression per se, but rather a model to quantify behaviors that could be considered analogous to symptoms of depression in humans, particularly in the context of assessing the effectiveness of antidepressant drug therapies (Nestler and Hyman, 2010). Therefore, even though its application to this end has been successfully established, it is not immediately possible to determine mechanisms of action by using this model, due to some limitations regarding construct validity. Furthermore, several anxiolytic drugs have long been shown to reduce immobility in the FST (Flugy et al., 1992;Gomez and Barros, 2000) and, since allopregnanolone also presents anxiolytic-like effects (as reviewed in Schüle et al., 2011), it is difficult to distinguish between anxiolytic- and antidepressant-like effects by observing a decrease in immobility. 
One other important point to consider is that the findings reported with the FST are mostly obtained in naïve rodents (that is, the animals were not submitted to a long-term protocol aiming to induce a lasting state analogous to depression), which ultimately does not translate to the target population in humans (i.e., clinically depressed patients). Despite the fact that the detection of antidepressant-like effects in non-depressed-like rodents does not necessarily contradict clinical observations (Serretti et al., 2010), it is reasonable to postulate that the induction of a depression-like state would grant a higher translational value to studies that investigate the role of allopregnanolone on depressive-like behaviors. Social isolation One prominent model used to induce a depression-like state is the social isolation paradigm, which is a protocol typically used to model post-traumatic stress disorder (PTSD). In this model, the long-term deprivation of social interaction acts as a powerful stressor that results in a robust and consistently reproducible PTSD-like state in rodents, mainly characterized by the emergence of anxiety-like and aggressive behaviors (Guidotti et al., 2001). In fact, several studies have shown a social isolation-induced increase in aggression against a same-sex intruder in male mice (Pibiri et al., 2006;Pinna et al., 2003, 2005). As recently reviewed by Pinna (2019), the social isolation model may also be reflected in some typical depressive-like behaviors due to the overlap between PTSD and depressive disorders. Though a review by Bogdanova et al. (2013) reported mixed evidence on the effects of social isolation in the FST, it failed to mention contemporary studies that showed a clear immobility-inducing effect of social isolation in rats (Djordjevic et al., 2012;Evans et al., 2012), which certainly tips the scales in favor of the use of social isolation within a depression-like paradigm. Additionally, social isolation has been shown to induce other depressive-like behaviors in rats, namely a decreased preference for sucrose and an increased ejaculation latency (Wallace et al., 2009). There are significantly fewer studies investigating these parameters in female rodents, but the extant literature is complete enough to demonstrate that their response to social isolation is rather distinct from males. A study by Pinna et al. (2005) specifically compared the effects of social isolation in both sexes, showing that this model failed to increase aggression and to reduce allopregnanolone in the olfactory bulb of females. Interestingly, concomitant daily treatment with testosterone propionate in females resulted in increased aggression and reduced olfactory bulb allopregnanolone levels similar to what was observed in males. These findings in females were later replicated and the same modulatory effect was observed in the frontal cortex (Pibiri et al., 2006). In rats, a more recent study showed that, even though social isolation reduced cerebrocortical allopregnanolone in both sexes, the decrease was greater in males than in females (Pisu et al., 2016). These differences are likely related to the distinct dynamic hormonal profile of females, which is intimately involved with progesterone metabolism and behavioral manifestations related to mood and emotions (reviewed by Frye, 2009). 
The downregulation of allopregnanolone in brain areas involved with the corticolimbic system after social isolation strongly suggests that this neurosteroid plays an important role in mood disorders and in the emergence of associated depression-like behaviors. Moreover, it indicates that the fluctuations of other hormones, and thus of other neurosteroids in the brain, may exert complementary mood regulation in rodents. Chronic unpredictable stress Another model that stands out because of its long history as a classical animal model of depression is the chronic unpredictable stress paradigm (CUS; also called "chronic mild stress", "chronic variable stress" and other variations). Originally proposed by Paul Willner in 1987, this model consists of the application of a series of variable, unpredictable stressors for a long period of time (5-9 weeks) that results in a depression-like state characterized mainly by anhedonia-like behaviors (Willner, 2017a). There are reports of decreased preference for sucrose (Qiu et al., 2017;Zhang et al., 2017) and of increased immobility in the FST in male rats submitted to CUS. Anxiety-like behaviors measured in tests such as the elevated plus-maze and novelty-suppressed feeding were also present in animals submitted to CUS (Qiu et al., 2017;Xu et al., 2018b;Zhang et al., 2017). Notably, all of these behavioral findings were associated with allopregnanolone downregulation in the hippocampus (Qiu et al., 2017;Xu et al., 2018b;Zhang et al., 2017), prefrontal cortex (Xu et al., 2018b;Zhang et al., 2017) and amygdala. These reductions are further explained by the downregulation of hippocampal and amygdalar mRNA expression of neurosteroidogenic enzymes of importance to the biosynthesis of brain allopregnanolone, such as 3α-HSD, 3β-HSD and 5α-reductase. Though some uncertainty regarding the reproducibility of the CUS model has been frequently raised over the years, a significant portion of this apparent problem might derive from factors such as excessively short exposures (two weeks or less) (Willner, 2017b). In fact, the aforementioned studies used three- to four-week protocols of unpredictable stressors to observe depressive-like behaviors associated with brain allopregnanolone downregulation. The above findings provide additional neurobiological mechanisms for this model, namely allopregnanolone downregulation in key limbic regions associated with behavioral changes induced by several stressors, and highlight the need to further investigate the role of allopregnanolone and other neurosteroids in this paradigm. Other rodent models Reports of changes in brain neurosteroid levels in other animal models of depression date as far back as 2003, when Uzunova and colleagues provided the initial evidence that rat olfactory bulbectomy modulates brain allopregnanolone levels depending on the brain region and the time after the intervention (Uzunova et al., 2003). Later, a model in which rats were selectively bred for high or low infantile ultrasonic vocalizations after maternal separation showed a line-dependent modulation of brain allopregnanolone that was associated with depression-like behaviors in males and proestrus females (Zimmerberg et al., 2005). 
Other studies using procedures such as the single prolonged stress (2 h-restraint + 20 min-forced swim + loss of consciousness by ether vapor) (Lee et al., 2018;Su et al., 2019;Xu et al., 2018a), time-dependent sensitization (single sequence of 15 inescapable foot-shocks) (Zhang et al., 2014, 2018) and even streptozotocin-induced type 1 diabetes combined with a high fat diet (Qiu et al., 2016), have consistently demonstrated depression-like states in male rats and decreased allopregnanolone levels in the prefrontal cortex and hippocampus. All of the studies aggregated in this section demonstrated that there is a consistent decrease in brain allopregnanolone levels in corticolimbic regions associated with a plethora of long-term stress-based animal models of psychiatric disorders that induce a depression-like state in rodents. Taking all these behavioral results associated with allopregnanolone fluctuations into account, the next stages in experimental research have been to investigate potential therapies/interventions targeted to increase allopregnanolone in the brain regions of interest in order to elicit antidepressant-like effects. Antidepressant-like action of allopregnanolone In the next sections of our review, we gather the evidence that demonstrates the capability of allopregnanolone to decrease depressive-like behaviors in rodents when submitted to preclinical screening tests used to assess antidepressant activity, such as the FST. We begin by exploring the increase in brain allopregnanolone elicited by certain classical antidepressants believed to exert their effects through a different mechanism: the upregulation of brain neurosteroidogenesis. Stimulation of allopregnanolone biosynthesis by antidepressants The selective serotonin reuptake inhibitor (SSRI) fluoxetine has widespread use in medicine and is one of the antidepressant drugs that are most commonly used as a positive control during preclinical experiments. This is because it consistently decreases immobility time by increasing swimming in the FST (as originally described by Detke and Lucki, 1995), which is a parameter related to serotonergic mechanisms. Interestingly, the first evidence of a drug-induced modulation of brain allopregnanolone in the context of depression was provided by Uzunov and colleagues in the following year, and also had fluoxetine as the main comparable substance (Uzunov et al., 1996). In this report, a single injection of fluoxetine or paroxetine increased allopregnanolone levels in corticolimbic brain regions of rats for at least 2 h after treatment. These observations sparked interest regarding the capability of other SSRIs to act as selective brain steroidogenic stimulants (SBSSs), a possibility that was further supported by subsequent in vitro experiments that demonstrated the increased activity of neurosteroidogenic enzymes associated with fluoxetine, paroxetine and sertraline (Griffin and Mellon, 1999). Furthermore, in vivo increases in cerebrocortical allopregnanolone levels induced by numerous SSRIs were later reported after a three-week treatment with paroxetine in male mice (Nechmad et al., 2003) and with fluoxetine, desipramine, sertraline and venlafaxine in olfactory bulbectomized male rats (Uzunova et al., 2004). 
It is worth mentioning that other classes of drugs have also been reported to increase brain allopregnanolone levels, namely the atypical antipsychotics clozapine and olanzapine (Marx et al., 2003, 2006) and the benzodiazepines midazolam (Qiu et al., 2015) and estazolam (Xu et al., 2018b). On the other hand, a single injection of the tricyclic antidepressant imipramine at behaviorally active doses has consistently been shown not to change brain allopregnanolone levels after 30 min (Pinna et al., 2003, 2004b;Uzunov et al., 1996), probably due to its reported lack of effect on neurosteroidogenic enzymes (Griffin and Mellon, 1999). This is relevant because it is one of the lines of evidence indicating that not all antidepressants act by this mechanism and that allopregnanolone upregulation is not simply a side effect of monoamine reuptake inhibition, but rather a direct effect of SSRIs like fluoxetine. The precise mechanism by which fluoxetine increases brain allopregnanolone levels has not yet been fully determined, but early evidence pointed to the direct activation of the 3α-HSD enzyme, which converts 5α-dihydroprogesterone into allopregnanolone (Griffin and Mellon, 1999). This finding could not be replicated, however (Trauger et al., 2002), and more recent evidence indicates that, at least in female rats, the inhibition of a steroid microsomal dehydrogenase may constitute a more robust mechanism by avoiding allopregnanolone oxidation to 5α-dihydroprogesterone (Fry et al., 2014). Importantly, fluoxetine has been shown to increase brain neurosteroid content at doses remarkably lower than those needed to inhibit serotonin reuptake, as determined by ex vivo uptake measurements on brain slices or in vivo serotonin detection by microdialysis (Devall et al., 2015;Pinna et al., 2003, 2004b). Thus, these findings granted further support for a direct action of fluoxetine on steroidogenesis, as outlined above. Additionally, they raised the question of whether the administration of fluoxetine at such low doses is capable of eliciting a similar antidepressant-like action by upregulating brain allopregnanolone while being devoid of any significant serotonergic action. Early studies failed to detect an antidepressant-like effect in the mouse FST with low-range doses (1 and 5 mg/kg), but a later work achieved this goal in rats using doses as low as 1 and 2 mg/kg (Molina-Hernández et al., 2005). This indicates that a possible antidepressant-like effect of non-SSRI doses of fluoxetine hovers just over the detection threshold in the FST, and different species or subtle modifications in the protocol might affect its results. Given the difficulty generated by such a low and narrow dose window, most of the studies that investigated the antidepressant-like action of allopregnanolone opted to directly administer the neurosteroid instead of upregulating its levels by using drugs that act indirectly and might exert the same target effect through confounding mechanisms. Exogenous allopregnanolone administration The seminal studies on the antidepressant-like effect of exogenous allopregnanolone were conducted by Khisti and colleagues in 2000, in which immobility in the FST was reduced 30-60 min after a single intraperitoneal administration in male mice, at doses that ranged between 0.5 and 2 mg/kg. The same effect was also detected in ovariectomized female mice in a dose range of 0.5-5 mg/kg (Rodríguez-Landa et al., 2007). 
To the best of our knowledge, only one study reported a lack of behavioral effects after systemic allopregnanolone injection in male rats, in spite of its neurochemical effects (Naert et al., 2007). However, it is important to point out that even though this study evaluated a comprehensive timeframe after treatment, the only dose tested (0.05 mg/kg) is ten times lower than the lowest dose ever reported to elicit antidepressant-like effects after acute systemic treatment. Chronic treatment regimens with subcutaneous allopregnanolone, over several weeks, have also shown antidepressant-like activity in rats, whether in ovariectomized females (Molina-Hernández et al., 2005) or in males (Evans et al., 2012). It must be mentioned that allopregnanolone easily crosses the blood-brain barrier (Hellgren et al., 2014), and systemic administration of the neurosteroid allows the assessment of its whole-brain effect. However, a common and more cost-effective approach has been to infuse microdoses of the neurosteroid directly into the cerebral ventricles, from where it diffuses across the brain with similar distribution and antidepressant-like effects to the systemic injection (Almeida et al., 2018;Shirayama et al., 2011). Other studies directed these microinjections to specific areas of the limbic system, with the ultimate goal of assembling a more detailed map of the probable specific brain sites of action where allopregnanolone evokes its antidepressant-like effect. These studies targeted areas where allopregnanolone levels were downregulated in animal models of depression (see Sections 1.3, 1.4 and 1.5) and used the same dose range as in the intracerebroventricular protocols. The brain area that presents the most replicated results is the hippocampus (Nin et al., 2008;Rodríguez-Landa et al., 2009;Shirayama et al., 2011), with the amygdala (Shirayama et al., 2011) and the nucleus accumbens (Molina-Hernández et al., 2005;Nin et al., 2012) contributing as important regions for the antidepressant-like effects in the FST. However, in contrast with the well-reported allopregnanolone downregulation in the prefrontal cortex associated with animal models of depression (see Sections 1.3, 1.4 and 1.5), the few studies that have infused allopregnanolone in this brain area did not detect important behavioral changes in the FST (Almeida et al., 2019;Shirayama et al., 2011). All of these results (compiled in Table 2) support the hypothesis that allopregnanolone, in a similar manner as some neurotransmitters involved with depression, exerts its antidepressant role through pathways associated with the limbic system, probably acting on all the main brain sites responsible for the balance that promotes depressive/antidepressive regulation. A few studies have reported behavioral changes following exogenous allopregnanolone administration to rodents submitted to animal models of depression. In mice, a single intraperitoneal injection of allopregnanolone reduced the social isolation-induced aggressive behavior in males and females (Pibiri et al., 2006). Additionally, in socially isolated rats, a subcutaneous insertion of allopregnanolone-containing pellets normalized immobility time in the FST, both when treatment was started at the onset of isolation or six weeks into the protocol (Evans et al., 2012). 
More recently, the intracerebroventricular infusion of allopregnanolone has been reported to reduce depressive-like behaviors in a line of rats that were selectively bred to present high immobility in the FST, while having no effect on the line bred to present low immobility in the FST (Almeida et al., 2018). Apart from these studies, researchers have generally preferred to treat depressed-like animals with neurosteroidogenesis-modulating drugs, verifying their behavioral effects, and then quantifying allopregnanolone in brain regions of interest. Pharmacological allopregnanolone upregulation: behavioral effects in animal models of depression SSRIs, especially fluoxetine and norfluoxetine, have been the most frequently used drugs to attempt a reversion of brain allopregnanolone downregulation in animal models of depression. A particularly relevant subset of these studies has tested the effect of a single administration of these drugs at non-SSRI doses (Table 3). In socially isolated male mice, for instance, low doses of fluoxetine have been shown to normalize allopregnanolone levels in the frontal cortex without eliciting the same effect in group-housed animals (Matsumoto et al., 1999). Subsequent studies have detailed the stereospecific neurosteroidogenic action of fluoxetine and its active metabolite norfluoxetine in this model, demonstrating that the S-isomers of both drugs, but especially norfluoxetine, showed much higher potency in reducing aggressive behavior (Pinna et al., 2004b) and increasing allopregnanolone levels in the frontal cortex and olfactory bulb (Pinna et al., 2003, 2004b). S-norfluoxetine at similar non-SSRI doses exerted the same effect in female mice treated with testosterone propionate, and in male mice when infused directly into the basolateral amygdala (Pibiri et al., 2006). The presence of a depression-like state seems to be important for this neurosteroidogenic effect to take place, since low doses of fluoxetine have been reported not to increase whole-brain allopregnanolone in naïve male rats (Fry et al., 2014). Regarding the classic SSRI doses (Table S1), long-term oral treatment with fluoxetine has also successfully restored brain allopregnanolone levels in models of social isolation (Evans et al., 2012), chronic unpredictable stress, single prolonged stress (Lee et al., 2018), and streptozotocin-induced diabetes combined with a high fat diet (Qiu et al., 2016), in regions such as the prefrontal cortex and hippocampus. This normalization was accompanied by antidepressant- (Evans et al., 2012;Qiu et al., 2016) and anxiolytic-like (Lee et al., 2018) effects, but the doses applied in these studies were high enough to also inhibit serotonin reuptake. Chronic treatment (10-30 days) with sertraline (15 mg/kg) has also normalized (Su et al., 2019;Xu et al., 2018a, 2018b;Zhang et al., 2018) or at least shown a tendency to increase (Zhang et al., 2014) brain allopregnanolone levels in stress-induced animal models of depression while exerting antidepressant-like or anxiolytic-like effects (Su et al., 2019;Xu et al., 2018a, 2018b;Zhang et al., 2018). These reports come mainly in the context of positive controls from studies that demonstrated that some biologically active compounds purified from natural extracts are able to increase brain allopregnanolone (Lee et al., 2018;Qiu et al., 2017;Su et al., 2019;Xu et al., 2018b), suggesting that the group of SBSSs may be larger than currently estimated (Table S2). 
However, it is important to point out that these latter findings lack independent replication and their neurosteroidogenic mechanism of action remains even more elusive than that of SSRIs. A specific molecular target unrelated to SSRIs that has received increasing attention in the past years is the 18 kDa translocator protein (TSPO), a five transmembrane domain protein that is mainly located in the mitochondria of glial cells (Costa et al., 2012). This protein is believed to play a major role in the transport of cholesterol to the inner mitochondrial membrane (Schüle et al., 2011), though this has been disputed given that TSPO-knockout mice present normal steroidogenesis, at least in non-cerebral tissue (Tu et al., 2014). What is known is that once cholesterol enters the mitochondria, it can then be converted to the neurosteroid precursor pregnenolone by P450scc, making its transport a putative rate-limiting step for brain neurosteroidogenesis (Schüle et al., 2011). In fact, lentivirus-induced overexpression of this protein in the dentate gyrus has been shown to increase hippocampal allopregnanolone concentrations in mice and to normalize its levels in the dentate gyrus of mice exposed to a foot-shock-induced PTSD model (Zhang et al., 2018). These modulations were accompanied by reductions in depressive-like behaviors and anxiolytic-like effects (Zhang et al., 2018), making TSPO a promising therapeutic target for psychiatric disorders, especially depression. Because TSPO expression is also increased in damaged neural tissue and is involved with inflammatory responses associated with neurodegenerative disorders (Dupont et al., 2017;McNeela et al., 2018), the specificity of this target must be further investigated and its role in other disorders must be considered as well. In pharmacological approaches, long-term treatment (18 days-6 weeks) with the TSPO agonist YL-IPA08 at a dose of 0.3 mg/kg/day has been shown to normalize brain allopregnanolone levels while abolishing depressive- and anxiety-like behaviors (Zhang et al., 2014) induced by stress-based models of depression. Similar protocols using another TSPO ligand, AC-5216, have also reversed depressive- (Qiu et al., 2016;Zhang et al., 2017) and anxiety-like behaviors by increasing allopregnanolone in the prefrontal cortex and hippocampus. TSPO is also a molecular target for benzodiazepines (hence its previous denomination of "peripheral-type benzodiazepine receptor"), explaining why midazolam, for instance, decreases immobility in the FST while increasing brain allopregnanolone levels (Qiu et al., 2015). Taken together, all these data suggest that fluoxetine and norfluoxetine, both in doses that act only via neurosteroidogenic pathways (that is, as SBSS agents) and at the classical posology that elicits serotonin reuptake inhibition, exert antiaggressive, antidepressant-, and anxiolytic-like effects. The induction of aggressive-like behavior is a specific indicator of stress and is highly associated with the PTSD-like behaviors induced by social isolation, which can also evoke depressive- and anxiety-like states. Also, despite the relatively small number of articles addressing the antidepressant effect of TSPO manipulation, the results seem to be highly replicable and robust (synthesized in Table 3), indicating that pharmacological induction of the synthesis of this neurosteroid shows satisfactory results regarding its antidepressant-like effects in rodents. 
Allopregnanolone and neurotrophic pathways The mechanism of action by which allopregnanolone elicits its antidepressant effects has classically been attributed mainly to the GABAergic system. Like several other similar neuroactive steroids, allopregnanolone acts as a positive allosteric modulator on the main inhibitory receptor of the nervous system, the GABA A R (for a more detailed review on this topic, see the paper by Zorumski et al., 2019). Other neurotransmitter systems have also been implicated, though with remarkably less evidence, in that behavioral response, either directly or indirectly. Serotonin, as the main neurotransmitter involved with mood regulation, has been shown to modulate the antidepressant effect of allopregnanolone, as have dopamine (D'Aquila et al., 2010;Frye et al., 2004) and some neurosteroidogenic enzymes (Espallergues et al., 2012;Pinna et al., 2005).
Table 3. Behavioral effects of the pharmacological normalization of brain allopregnanolone in animal models of depression.
Going beyond the neurotransmitter systems, animal models of depression induced by chronic stress, such as those explored in Sections 1.3, 1.4 and 1.5, have been shown to modify a distinct aspect of neurobiology, namely brain neurogenesis. This is evidenced by the reduction of both the production and survival of adult-born hippocampal granule cell neurons in animals submitted to these models. Because the effectiveness of antidepressants is largely dependent on these cells, neurogenesis has been considered an important mechanism by which depressive behaviors are modulated (reviewed by Samuels and Hen, 2011). In fact, hippocampal allopregnanolone downregulation is associated with impaired neurogenesis in the same brain region (Evans et al., 2012). In the same direction, robust evidence has been presented to support a role for neurotrophic agents in the antidepressant effect of allopregnanolone and SBSSs. Here, we gather the data showing that treatment with allopregnanolone or SBSSs alters neurotrophic protein levels in the limbic system of depressed-like rodents by interacting directly with or in parallel to GABAergic modulation, discussing the participation of neurogenesis in depressive-like states. BDNF: a key neurotrophic protein involved in the behavioral effects of allopregnanolone Complex interactions between genetics, hormones, neurotransmitters, and environmental factors are involved in depression. BDNF is a crucial mediator of neuronal plasticity, which regulates synaptic composition, neuronal maturation, neurotransmitter release, survival and excitability in the adult nervous system (Huang and Reichardt, 2001). The pro and mature isoforms of BDNF can be synthesized by and released from neurons, being widely distributed in the limbic system. They bind to the tropomyosin receptor kinase B (TrkB), which has a higher affinity for the mature isoform of the neurotrophin (Nagahara and Tuszynski, 2011). Recently, BDNF-TrkB signaling has been pointed out as a likely mediator between antidepressant agents and the improvement of depressive symptoms (Björkholm and Monteggia, 2016). Stress and depression have been widely documented to reduce the expression of BDNF in both animal and clinical studies. 
Two meta-analyses have shown that serum BDNF concentrations are low in untreated depressed patients and normalized by antidepressant treatment, and that a greater decrease in symptoms was accompanied by a greater increase in serum BDNF concentrations (Molendijk et al., 2014;Sen et al., 2008). Such observations are more likely to be seen in women than in men (Huang et al., 2008), but the lack of experimental studies in females makes it difficult to verify whether these findings are replicable in the brain. One study found that social isolation reduced BDNF in the cerebral cortex of male but not female rats (Pisu et al., 2016), which points in the opposite direction from the clinical findings. In fact, though not assessing BDNF specifically, other studies in rodents that used social isolation have revealed a sex-dependent response in aggression-like behavior and brain allopregnanolone levels (being lower in females), as discussed in Section 1.3. However, studies that took a different approach by knocking out the BDNF gene in mice found pro-depressant effects in females but not males, when matched to wild-type controls (Autry et al., 2009;Monteggia et al., 2007). Given the contradictory findings in the literature, the need for more studies that investigate sex-dependent differences in the role of BDNF in depression becomes even more evident. In humans, these BDNF modulations become especially apparent in the context of pregnancy, since its serum levels decline considerably throughout pregnancy with a subsequent postpartum increase. Moreover, an inverse relationship between depressive symptoms and serum BDNF during the 3rd trimester (Christian et al., 2016) and postpartum (Gazal et al., 2012) is observed. This supports the role of this neurotrophin in the development of postpartum depression, a very serious complication following delivery that may affect 10-15% of women within the first 3 months postpartum, with important consequences to both mother and child (Christian et al., 2016;Gazal et al., 2012). Due to their apparent superior efficacy, SSRIs are the first line of treatment for severe cases of postpartum depression (Kim et al., 2014). As reviewed by Nin and colleagues in 2011, "the pharmacological actions of SSRIs are induced by their ability to act as SBSSs, which suggests a novel and more selective mechanism for the behavioral action of this class of drugs". In fact, this review summarizes the association of depression with decreased cerebral and systemic BDNF, and also shows that SBSSs succeed in reversing these decreased BDNF levels (Nin et al., 2011). In a more recent review, Kojima et al. (2019) offered a possible explanation for the decreased BDNF expression in patients with major depressive disorder and in animal models of depression. Considering that BDNF expression is controlled by neuronal activity, low BDNF pro-peptide levels in the CSF may be the result of lower neuronal activity in the brain of depressed individuals. In fact, there seems to be an important connection between BDNF (both in its pro and mature isoforms) and GABAergic activity, though the specific mechanisms by which this interaction takes place are still being elucidated. 
Some evidence points to a net excitatory effect in the superior colliculus by postsynaptic inhibition of the GABAergic currents (Henneberger et al., 2002), but since this is not seen in the visual cortex (Abidin et al., 2008) or amygdala (Meis et al., 2019), it seems to be a region-dependent effect. In the hippocampus, BDNF is thought to increase cell surface expression of GABA A Rs by TrkB activation-induced inhibition of receptor endocytosis, enhancing GABAergic inhibition (Porcher et al., 2018). Several studies have demonstrated a general downregulation of BDNF in the hippocampus and frontal cortex in stress-based animal models of depression (Phillips, 2017). As already discussed in this review, such animal models have been shown to decrease allopregnanolone levels in these and other brain areas relevant to the neurobiology of depression (see Sections 1.3, 1.4 and 1.5). In the social isolation protocol, for instance, the reduction in cerebrocortical allopregnanolone is accompanied by decreased hippocampal BDNF in male rats, though not in females (Pisu et al., 2016). Similar effects on BDNF have been observed after exposure to CUS (Rudyk et al., 2019), with a greater magnitude in those animals that present more accentuated depressive-like behaviors (Tornese et al., 2019). This stress-induced downregulation appears to have long-lasting effects, since hippocampal BDNF is decreased until seven days after a single prolonged stress protocol (Lee et al., 2018). Additionally, long-term treatment with allopregnanolone (Evans et al., 2012), fluoxetine (Evans et al., 2012;Lee et al., 2018), or other potential SBSSs (Lee et al., 2018) restores the low hippocampal BDNF levels back to normal. Indeed, the hippocampus appears to be the main region involved with the neurotrophic regulation of allopregnanolone. When the pregnane xenobiotic receptor (PXR), a protein involved in cholesterol metabolism, is knocked down in rats, a downregulation in hippocampal allopregnanolone and in BDNF is observed, suggesting that PXR may influence allopregnanolone synthesis through neuroplasticity mechanisms (Frye et al., 2014). Conversely, BDNF levels in that region are increased 3 h following a single low-dose intraperitoneal administration of allopregnanolone (Naert et al., 2007). Moreover, 1 h after sub-acute intra-prefrontal cortex infusion, BDNF is increased in both the left and right hippocampus, with a tendency to be higher in the right hemisphere (Almeida et al., 2019). Interestingly, these rapid hippocampal BDNF regulations were observed even in the absence of an associated antidepressant-like effect in the FST. There is some evidence indicating that the hippocampus is not the only relevant brain region implicated in the BDNF mediation of antidepressant effects. The prefrontal cortex, for instance, has been associated with BDNF reduction after depressant manipulations (Lee et al., 2018;Zhang et al., 2017), and the infusion of allopregnanolone in this area increases BDNF mRNA expression in the left hemisphere of the same region (Almeida et al., 2019). However, given that intra-prefrontal cortex allopregnanolone infusion is without antidepressant-like effects in rats (Almeida et al., 2019;Shirayama et al., 2011), the role of BDNF in this region is significantly less clear. It is possible that these frontal BDNF alterations in depressed animals are consequences of hippocampal effects, and that local (frontal) allopregnanolone-induced BDNF increases play no role in its antidepressant effect. 
However, the lack of studies investigating the direct infusion of BDNF in the prefrontal cortex renders these ideas merely speculative. Nevertheless, BDNF is not the only protein that is likely involved with depressive-like states and neurogenesis in brain limbic areas. Other growth factors participate in cell proliferation, migration, and differentiation, especially in the nervous system. Besides BDNF, some other neurotrophic proteins have been associated with depression and the antidepressant effect of classical antidepressants, especially the nerve growth factor (NGF) (reviewed by Mondal and Fatima, 2019). To our knowledge, there is a single in vitro experiment that demonstrated, after exposure to moderate concentrations of allopregnanolone, a decrease in the toxicity of NGF-treated cells (Afrazi et al., 2014). Another neurotrophic protein studied was reelin, which was proposed by Pinna and colleagues as another potential neurogenic protein involved with neurosteroid behavioral attenuation (Pinna et al., 2004a). In that paper, it is shown that there is an increase in aggression in male and female mice associated with a decrease of brain allopregnanolone, and this behavior is reversed with concomitant reelin modulation. Furthermore, in socially isolated male mice, aggression can be prevented by treatment with L-methionine, which has also been shown to decrease reelin (Pinna et al., 2004a). Moreover, other markers of neurogenesis have provided evidence concerning allopregnanolone's role in neurotrophy. The changes in hippocampal allopregnanolone induced by time-dependent sensitization and TSPO overexpression in the dentate gyrus, discussed in Section 2, were directly associated with the proliferation of progenitor cells, as shown by bromodeoxyuridine immunohistochemistry (Zhang et al., 2018). This effect is robust enough to also be observed in a neurodegenerative model induced by chronic treatment with lipopolysaccharide, where an increase in newborn neurons by TSPO overexpression is additionally reported (Wang et al., 2016). The administration of exogenous allopregnanolone has also been shown to restore cell proliferation and rescue cell survival in the subgranular zone of the dentate gyrus after social isolation (Evans et al., 2012), probably through its BDNF-mediated neurogenic effects. These findings indicate that this particular structure, the dentate gyrus, is probably the main functional area within the hippocampus responsible for the neurogenesis mediated by allopregnanolone. In addition to influencing neurogenesis, allopregnanolone apparently also inhibits neurodegeneration by suppressing extracellular signal-regulated kinase (ERK) phosphorylation in vitro (Mendell et al., 2018). On the other hand, chronic exposure to exogenous allopregnanolone may evoke the opposite effect, since a regimen of three subcutaneous injections per week has been shown to decrease recruitment of hippocampal progenitor cells, though one injection per week did increase neurogenesis (Chen et al., 2011). This, in association with the observation that long-term continuous allopregnanolone administration leads to memory decline and hippocampus shrinkage (Bengtsson et al., 2016), demonstrates that the effects of allopregnanolone on neurogenesis are likely dependent on treatment duration and frequency. 
All these data suggest that BDNF participates as an important player in the antidepressant effect induced by allopregnanolone, and that its manipulation arises as a promising alternative for the pharmacological approach to depression. In addition, the papers reviewed suggest a wide field to be explored regarding the relationship between allopregnanolone and other neurotrophic proteins and their role in the neurotrophic antidepressant-like effect. Environmental interventions to increase neurotrophy: what role do neurosteroids play? Because neurogenesis is a process that is intimately linked to a wide array of external factors, animal models of depression represent only a small fraction of the environmental conditions that importantly modify this aspect of brain biology. The input of adequate or inadequate stimuli, particularly during the developmental phases of life, may significantly contribute to a higher or lower pattern of neurogenesis, respectively. One example is maternal care, a complex set of nursing actions that, if executed poorly, may result in neurochemical and behavioral deficits in adulthood (Nephew and Murgatroyd, 2013). In fact, one of the conditions that may result in poor maternal care (mainly characterized by grooming and licking of the pups) is the early-age social isolation of the mothers, which is also associated with low circulating allopregnanolone levels (Pisu et al., 2017). This factor exerts an important effect on the offspring, since rats from low licking/grooming dams present more anxiety-like behaviors and lower hippocampal allopregnanolone levels in adulthood (Borrow and Cameron, 2017). An interesting observation is that these rats were compared to animals from high licking/grooming dams, which presented comparatively higher brain allopregnanolone levels. Thus, it suggests that "positive" life experiences might also exert an effect on neurosteroidogenesis, perhaps mediated by the action of neurotrophic agents. One strategy to model positive stimuli is to expose the animals, preferably at a young age (generally just after weaning), to an enriched environment. In the laboratory setting, this means providing a richer housing condition that normally focuses on three main pillars: greater social interaction, diversified sensory input, and incentive to voluntary exercise (van Praag et al., 2000). Environmental enrichment has been associated with neurotrophic changes in the brain, which is supported by a recently published systematic review of animal studies that demonstrated a robust neurogenic effect associated with this paradigm, having BDNF as one of its main regulating agents. Thus, this paradigm becomes a provocative possibility to investigate the neurogenesis-related upregulation in brain steroid synthesis. Munetsuna et al. (2011) provided the first evidence of modulation of brain steroidogenesis by environmental enrichment in a comprehensive experiment that quantified the hippocampal mRNA expression of 19 different enzymes involved with brain steroid metabolism. They showed that male rats reared in an enriched environment for eight weeks presented a higher expression of the neurosteroidogenic enzymes 5α-reductase type 1 and 3α-HSD when compared to standard housing-reared animals. Of particular note, these two enzymes are the main ones involved in allopregnanolone biosynthesis, strongly suggesting its participation in the neuroprotective effects induced by this paradigm. 
In aged females, environmental enrichment also increased these and other neurosteroidogenic enzymes in the hippocampus (Rossetti et al., 2015). A recent experiment studied the sensory and motor aspects of environmental enrichment separately in a 10-day protocol, showing similar results in young female adults with a much more modest effect in aged rats (Rossetti et al., 2019). In males, four weeks of environmental enrichment reverses the deficits in neuronal plasticity induced by previous social isolation for the same amount of time by restoring hippocampal BDNF, NGF and activity-regulated cytoskeletal associated protein levels, as well as dendritic spine density and other markers of neurotrophy (Biggio et al., 2019). Also, some studies have reported an antidepressant-like effect in the FST elicited by environmental enrichment in male rats (Ashokan et al., 2018;Possamai et al., 2015). Similar neurotrophic changes have been associated with the emergence of resilient phenotypes against chronic stress, preventing the establishment of a depression-like state. In this case, these changes were induced by knocking out a proapoptotic gene (Bax) in mice, which increased the survival of hippocampal cells (Anacker et al., 2018). A resilient phenotype may also be induced by environmental enrichment, which prevents the emergence of depressive-like behaviors induced by social defeat stress, presumably by also increasing hippocampal neurogenesis (Lehmann and Herkenham, 2011). However, this resilience has not yet been experimentally linked to brain allopregnanolone levels, though this hypothesis has been raised by Biggio et al. (2014). Other studies have attempted to generate resilient rats by selectively breeding rats that presented high or low traits related to depression. A study conducted by Frye's research group that bred rats with high or low rates of infantile ultrasonic vocalization (USV) after maternal separation found that the low-USV line presented higher brain allopregnanolone levels associated with lower depressive-like behaviors, which suggests an associated resilient profile (Zimmerberg et al., 2005). More recently, we bred rats in our laboratory that presented high or low immobility in the FST and showed that allopregnanolone was only able to reduce immobility in the high-immobility line (Almeida et al., 2018). This limited but important assortment of data suggests that neurosteroids play a crucial role in the response to environmental stimuli during early life in animals. It remains to be established whether the stress resilience achieved in these studies is dependent on the neuroprotective effects of allopregnanolone, which, if proven true, will broaden the importance of this neurosteroid in the neurobiology of depression. Conclusion All of the evidence discussed in this review demonstrates that allopregnanolone plays a major role in depressive-like manifestations in humans and in animal models. The systemic downregulation of allopregnanolone seen in depressed individuals is well-reproduced across several animal models of depression, which provide an additional level of detail regarding its brain regulation. In depressed-like animals, allopregnanolone levels are consistently downregulated in areas of the corticolimbic system that are responsible for mood regulation. Moreover, its infusion in these areas exerts antidepressant-like effects, which evidences its importance in the neurobiology of depression. 
These preclinical observations led to the development of a formulation fit for intravenous infusion in humans, brexanolone, that demonstrated efficacy for the treatment of postpartum depression and is currently approved for clinical use. Also, the amelioration of symptoms observed after treatment with widely prescribed antidepressants, particularly SSRIs, is at least partially due to the capacity of these substances to increase brain allopregnanolone content, as largely demonstrated in animal studies. The main drugs that upregulate allopregnanolone levels are SSRIs, which present this neurosteroidogenic property even at lower non-serotonergic doses, which are known to exert an SBSS action. This drug-induced upregulation of allopregnanolone reduces depressive-like behaviors in models such as the FST, which is also achieved with other agents that increase brain neurosteroidogenesis by different mechanisms. Furthermore, among the varied mechanisms by which allopregnanolone might exert its antidepressant effects, the increase in hippocampal neurogenesis by the upregulation of neurotrophic proteins is proving to be a relevant pathway for this antidepressant action, giving rise to a growing and vibrant field of research. This rationale is associated with the fact that hippocampal neurogenesis is lower in depressed-like animals, and is reversed predominantly by increases in BDNF in antidepressant-treated animals. There is plenty of evidence pointing to the role of altered GABAergic function and of altered BDNF in major depressive disorder and in allopregnanolone effects. It remains to be understood whether and how these two mechanisms might be related to the quick, effective and lasting antidepressant effects of neurosteroid antidepressants such as brexanolone. Is one of these mechanisms more important for the clinical effects of brexanolone than the other, or is there a synergism or potentiation between the GABAergic and neurotrophic systems that better explains the effects seen in the clinical setting? An interplay between allopregnanolone's effects on GABA and on neurogenesis might bring a dual response that has to be investigated regarding brexanolone's rapid and long-lasting clinical effects. Also, one may assume that the increase in neurosteroidogenesis by interventions such as environmental enrichment points to a mechanism through which allopregnanolone is involved with stress resilience. Future studies should further investigate if and how allopregnanolone is able to improve resilience and whether genetic factors play a significant role in this particular pathway of neuroprotection. With the very recent authorization for the use of brexanolone for the treatment of postpartum depression, it becomes evident that allopregnanolone and other neurosteroidogenic agents may be an important tool for the treatment of affective disorders, and may prove to be effective for the treatment of major depressive disorder and bipolar disorders in areas where other, more classical antidepressants have failed.
2020-04-09T09:26:39.817Z
2020-04-09T00:00:00.000
{ "year": 2020, "sha1": "18075a73f6be82dc72afa414e497fadbe7bed85c", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.ynstr.2020.100218", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bb610c74cb5e294a1b68debf7604c28d3e07f400", "s2fieldsofstudy": [ "Biology", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
229318168
pes2o/s2orc
v3-fos-license
Estimating the false-negative test probability of SARS-CoV-2 by RT-PCR Background Reverse-transcription PCR (RT-PCR) assays are used to test for infection with the SARS-CoV-2 virus. RT-PCR tests are highly specific and the probability of false positives is low, but false negatives are possible depending on swab type and time since symptom onset. Aim To determine how the probability of obtaining a false-negative test in infected patients is affected by time since symptom onset and swab type. Methods We used generalised additive mixed models to analyse publicly available data from patients who received multiple RT-PCR tests and were identified as SARS-CoV-2 positive at least once. Results The probability of a positive test decreased with time since symptom onset, with oropharyngeal (OP) samples less likely to yield a positive result than nasopharyngeal (NP) samples. The probability of incorrectly identifying an infected individual as uninfected because of a false-negative test was considerably reduced if negative tests were repeated 24 hours later. For a small false-positive test probability (<0.5%), the true number of infected individuals was larger than the number of positive tests. For a higher false-positive test probability, the true number of infected individuals was smaller than the number of positive tests. Conclusion NP samples are more sensitive than OP samples. The later an infected individual is tested after symptom onset, the less likely they are to test positive. This has implications for identifying infected patients, contact tracing and discharging convalescing patients who are potentially still infectious. Summary model output This is a representation of the fitted values in the final model for test sensitivity as returned by mgcv::summary in R. Estimating the false-negative error rate in cohorts of tested individuals Using the GAMM model, we estimated the aggregate false-negative rate for hypothetical cohorts of tested patients. To do this, we considered a range of Gamma distributions as parameterised by the mode and standard deviation. These distributions were used to describe the time between the onset of symptoms and patients being tested. The shape (S) and rate (R) parameters were written as functions of the mode (M) and standard deviation (σ) [23], using the identities M = (S − 1)/R and σ² = S/R², which give R = (M + √(M² + 4σ²))/(2σ²) and S = 1 + MR. We explored arrival time distributions with modes ranging from 0.1 to 5 days and standard deviations ranging from 0.5 to 5. We discretised the arrival time distribution Γ to give the proportion of patients Γ(x) in a cohort being tested on a given day x. These fractions were then multiplied by the estimated probability of a false negative predicted by the GAMM function (f(x)) for a single nasal swab on that day; summing these together gave the aggregate false-negative rate P(Neg|Inf) for cohorts tested according to this particular arrival time distribution. To get the probability of 2 false negatives 1 day apart, we simply took the product f(x)·f(x+1) and used this in place of f(x). Estimating the time to test We assumed the test has perfect sensitivity, such that all individuals with positive tests must be infected, and so we estimated this for each day using the distribution of time to positive test results for symptomatic individuals from Bi et al. 
[13] (a gamma distribution with shape 2.12 and rate 0.39).We discretised this distribution (such that [0, 0.5) corresponds to 0 days from symptom onset, [0.5, 1.5) corresponds to 1 day after symptom onset etc) and truncated it to 31 days, which is the maximum number of days from symptom onset present in the data we analysed.This truncation has no practical impact because > 99.99% of the density of this particular gamma distribution is accounted for at this point. Meanwhile is the probability of a positive test result for infected individuals given the day of (ψ |τ ∩ η) P i the test, which is exactly what we estimated in this study.Of course, is unknown.This gives us (ψ) P but as we assumed that individuals are tested only once then for which (τ ∩ η) means that we can easily retrieve: and then the unknown appears in every term on the RHS and so vanishes.(ψ) P Estimating the true prevalence in a cohort of tested individuals Supposing that all tests were performed the same number of days after symptom onset; we defined: • as the (unknown) true prevalence among those tested α • as the false-negative rate for tests done on that day i.e.P(negative test | infected) γ • T is the total number of tests done on that day, of which a fraction q are positive Then the true prevalence among those tested for infection is equal to the sum of (a) P(infected|positive test) multiplied by the number of positive tests and (b) P(infected| negative test) multiplied by the number of negative tests (i.e sum of the true positives and false negatives).These conditional probabilities can be separately rearranged via Bayes' Theorem and then added together to give: When rearranging, this as a quadratic in then we discover it has 2 roots: And so the first root allows us to estimate the true prevalence among the test cohort, while accounting for the false-negative test probability for those tested on that day. In reality, however, individuals are tested on different days on which the false negative test probability depends, which makes it much harder to estimate in this way.One way it can be done is to use the α distribution for time to test to calculate the average false-negative test probability across all tests conducted, again assuming that all tests are done by nasal swab -here this gives a false-negative test probability of 16.71%.If we do this, then we can still apply the same equations as above and explore how accounting for the false-negative and false-positive test probabilities affects the consequent estimates of the true prevalence among those tested, which we illustrate for some different scenarios in the main text. Importantly, this only tells us about prevalence in the test cohort and not in the wider population i.e. this does nothing to correct for not finding and not testing mild/asymptomatic cases (as discussed in the main text). Sensitivity of Zou et al estimates We utilise data fromZou et al. (2020)who use a combination of mid-turbinate and nasopharyngeal swabs to constitute nasal samples.To determine if there is an effect of using this combination of different swab types on results, we coded the "swab type" variable to have a separate level corresponding to the nasal samples for Zou et al, then compared it to the best fitting model with only two levels in the swab type variable(AIC = 805.31).The inclusion of a Zou-specific correction was not supported (AIC = 805.81,ΔAIC = 0.50).
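The aggregate false-negative calculation described above (a Gamma arrival-time distribution parameterised by mode and standard deviation, discretised by day and weighted by a per-day false-negative probability) can be sketched in Python as follows. This is only a minimal illustration, not the authors' code: the function fn_prob below is a made-up placeholder standing in for the fitted GAMM predictions, and the mode/SD values are arbitrary examples.

import numpy as np
from scipy.stats import gamma

def gamma_shape_rate(mode, sd):
    # Gamma(shape, rate) with the requested mode and standard deviation:
    # mode = (shape - 1) / rate and variance = shape / rate**2.
    rate = (mode + np.sqrt(mode**2 + 4 * sd**2)) / (2 * sd**2)
    shape = 1 + mode * rate
    return shape, rate

def daily_proportions(mode, sd, max_day=31):
    # Discretise the arrival-time distribution: day 0 covers [0, 0.5),
    # day d >= 1 covers [d - 0.5, d + 0.5), truncated at max_day.
    shape, rate = gamma_shape_rate(mode, sd)
    dist = gamma(a=shape, scale=1.0 / rate)
    edges = np.concatenate(([0.0], np.arange(0.5, max_day + 1.0)))
    probs = np.diff(dist.cdf(edges))
    return probs / probs.sum()  # renormalise after truncation

def fn_prob(day):
    # Placeholder (hypothetical) for the GAMM-estimated false-negative
    # probability of a single nasal swab taken `day` days after symptom onset.
    return 0.1 + 0.8 / (1.0 + np.exp(-(day - 10.0) / 3.0))

def aggregate_false_negative(mode, sd, repeat_after_one_day=False):
    props = daily_proportions(mode, sd)
    days = np.arange(len(props))
    per_day = fn_prob(days)
    if repeat_after_one_day:
        # Two false negatives one day apart: f(x) * f(x + 1).
        per_day = per_day * fn_prob(days + 1)
    return float(np.sum(props * per_day))

# Example: cohort tested with mode 2 days and SD 2 days after symptom onset.
print(aggregate_false_negative(2.0, 2.0))
print(aggregate_false_negative(2.0, 2.0, repeat_after_one_day=True))

Sweeping the mode over 0.1 to 5 days and the standard deviation over 0.5 to 5, as described in the text, amounts to looping these two calls over a grid of (mode, sd) values.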
2020-12-19T06:18:18.848Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "39d8e6457614e3014efebf16326de7bdb3f414df", "oa_license": "CCBY", "oa_url": "https://www.eurosurveillance.org/deliver/fulltext/eurosurveillance/25/50/eurosurv-25-50-4.pdf?containerItemId=content/eurosurveillance&itemId=/content/10.2807/1560-7917.ES.2020.25.50.2000568&mimeType=pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "398537255ca7588742a245ea366da3b33843d858", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
78090888
pes2o/s2orc
v3-fos-license
First-trimester fasting plasma glucose as a predictor of gestational diabetes mellitus and the association with adverse pregnancy outcomes Objective: To evaluate the usefulness of fasting plasma glucose (FPG) at the first trimester in predicting gestational diabetes mellitus (GDM) and the association between FPG and adverse pregnancy outcomes. Methods: The levels of FPG in women with singleton pregnancies were measured at 9-13+6 weeks. A two-hour 75-g oral glucose tolerance test (OGTT) was completed at 24-28 weeks and the International Association of Diabetes and Pregnancy Study Groups (IADPSG) criteria were used. Adverse pregnancy outcomes were assessed and recorded. Results: Among 2112 pregnant women enrolled in the study, 224 (10.6%) subjects were diagnosed with GDM. The AUC for FPG in predicting GDM was 0.63 (95% CI 0.61-0.65) and the optimal cutoff value was 4.5 mmol/L (sensitivity 64.29% and specificity 56.45%). Higher first-trimester FPG increased the prevalence of GDM, large for gestational age (LGA) and assisted vaginal delivery and/or cesarean section (all P < 0.05). Conclusion: FPG at the first trimester could be used to predict GDM, and higher first-trimester FPG was associated with adverse pregnancy outcomes. INTRODUCTION Gestational diabetes mellitus (GDM) is one of the most common complications of pregnancy and the incidence of GDM is increasing globally. 1,2 Women with GDM are at risk of many maternal (preeclampsia, cesarean section, birth injuries) and fetal consequences (macrosomia, hypoglycemia, shoulder dystocia). 3,4 Commonly, GDM is diagnosed by using the oral glucose tolerance test (OGTT) during 24-28 weeks of gestation. However, maternal metabolic status at the early stage of pregnancy may affect maternal and perinatal outcomes. 5 Appropriate diet and medication interventions can reduce the incidence of GDM. 6,7 Therefore, early detection of women at high risk of GDM is clinically important. Most research has focused on identifying risk factors at the first trimester for GDM development, including family predisposition, increased maternal age, cultural background, high Body Mass Index (BMI), elevated C-reactive protein levels and history of fetal macrosomia. 8 Fasting plasma glucose (FPG) is a predictive index for type 2 diabetes. It is easy to administer, well tolerated, inexpensive and reproducible. GDM resembles type 2 diabetes in many aspects. The efficiency of FPG in predicting GDM is not universally agreed upon, as different criteria are applied for the diagnosis and various gestational weeks or races are chosen. Previous studies have shown that FPG could be used to predict risk for GDM in later pregnancy. 9,10 In our study, we tried to determine the accuracy of first-trimester FPG in predicting GDM using the International Association of Diabetes and Pregnancy Study Groups (IADPSG) criteria 11 and find out whether FPG at the first trimester was associated with maternal and neonatal adverse outcomes. METHODS This retrospective study was approved by the Human Research Ethics Committee of the third affiliated hospital of Sun Yat-Sen University. Medical records of 2112 singleton pregnant women were collected from the third affiliated hospital of Sun Yat-Sen University, Guangzhou, China, from January 2016 to June 2017. Women with already diagnosed pregestational diabetes were excluded. All women had the first prenatal visit during 9-13+6 gestation weeks and then received regular prenatal services and delivered in this hospital.
All patients underwent an FPG test during 9-13+6 gestation weeks after at least 8 hours of fasting, and the glucose oxidase method was used for the assay. A two-hour 75-g OGTT was performed between 24-28 weeks and the diagnostic criteria were based on the IADPSG 12 (i.e., one or more plasma venous glucose values at or above the thresholds of 5.1 mmol/L at 0 h, 10.0 mmol/L at 1 h, or 8.5 mmol/L at 2 h). We recorded patients' baseline characteristics when the FPG test was done, including age, parity and pregestational body mass index (BMI) (BMI = weight (kg) / height² (m²)). After delivery, obstetric and neonatal data were collected, including gestational age at delivery, delivery mode, birth weight and one- and five-minute Apgar scores of the neonate. Adverse pregnancy outcomes were assessed and recorded, including preterm delivery, premature rupture of membranes (PROM), pregnancy induced hypertension (PIH), intrauterine growth restriction (IUGR), polyhydramnios, postpartum hemorrhage (PPH), macrosomia, large for gestational age (LGA) and low Apgar score. Preterm delivery was defined as a birth before 37 weeks gestation. IUGR was defined as a fetal weight less than the 10th percentile for gestational age. PPH was defined as postpartum hemorrhage of more than 500 ml for natural birth or more than 1000 ml for cesarean section. Polyhydramnios was defined as an amniotic fluid index (measure of four quadrants) higher than the 95th percentile for gestational age. Macrosomia was defined as a birthweight higher than 4.0 kg. LGA was defined as a birthweight larger than the 90th percentile for gestational age by gender. Low Apgar score was defined as an Apgar score less than 7 at one or five minutes. Statistical Analysis: SPSS-19.0 (SPSS, Inc., Chicago, IL) was used for analysis. Continuous variables were presented as mean (SD), skewed variables as medians (interquartile range) and categorical variables as proportions. Differences in variables between groups were analyzed using the t test, Mann-Whitney test or Chi-square test. The area under the receiver operating characteristic (ROC) curve (AUC) was used to evaluate the performance of FPG to predict GDM and the optimal cut-off point was calculated. The DeLong test 13 was used to compare areas under ROC curves. The sensitivity, specificity, positive (PPV) and negative (NPV) predictive values for different threshold values of FPG, along with likelihood ratios of positive (LR+) and negative (LR-) tests, were calculated. Multivariate logistic regression analysis was utilized to explore the independent factors associated with GDM (backward method was used). P < 0.05 was considered statistically significant. Ethical Approval: All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. RESULTS A total of 2112 women were included in this study. Of them, 224 (10.6%) subjects were diagnosed with GDM. The characteristics of participants are shown in Table-I. Compared with the normal group, subjects in the GDM group were older and more often multiparous (P < 0.001). They also delivered earlier (39.00 vs. 39.29, P=0.001), needed more assisted vaginal delivery or cesarean section (43.8% vs. 35.0%, P=0.010) and had a greater postpartum hemorrhage volume (P=0.001). The first-trimester FPG was higher (P < 0.001) but maternal BMI gain was lower (P=0.003) in the GDM group.
Low Apgar score (≤7 at 1 or 5 minutes) was also more prevalent in the GDM group than in the normal group (3.1% vs 1.2%, P=0.022). The independent risk factors for GDM identified by multivariate logistic regression analysis, with pregestational BMI, first-trimester FPG, maternal age and parity included as covariates, are shown in Table-II. First-trimester FPG and maternal age were independent risk factors, and the odds ratios were 2.847 (95% CI 1.508-5.374) and 1.156 (95% CI 1.20-3.72), respectively. Fig.1 shows the ROC curves for determining the screening accuracy of first-trimester FPG for GDM; the AUC was 0.63 (95% CI 0.61-0.65). Table-III shows selected threshold values for FPG and the associated sensitivity, specificity, PPV, NPV, LR+ and LR-. The optimal cutoff point of FPG was 4.5 mmol/L in the ROC curve, which provided the highest combination of sensitivity (64.29%) and specificity (56.45%). The associations between first-trimester FPG and adverse pregnancy outcomes are presented in Table-IV, dividing subjects into two groups according to the lower and upper quartiles (<4.19 or ≥4.67 mmol/L) of FPG. The prevalence of GDM was significantly increased with elevated first-trimester FPG (17.5% vs 6.8%, χ² = 28.503, P < 0.001). The prevalences of LGA (19.0% vs. 11.8%, χ² = 10.602, P = 0.001) and assisted vaginal delivery/cesarean section (39.9% vs. 29.2%, χ² = 13.510, P < 0.001) were also increased. The prevalences of PIH, IUGR, polyhydramnios, PPH and low Apgar score were higher in the upper quartile group, though there were no statistically significant differences between them. DISCUSSION In the present study, we have demonstrated that FPG at the first trimester could be used to predict GDM in Chinese women. According to the ROC curves, an FPG level ≥4.5 mmol/L showed an optimal combination of sensitivity (64.29%) and specificity (56.45%) for predicting GDM. Multivariate logistic regression analysis also revealed that first-trimester FPG was an independent risk factor for GDM development. In addition, higher first-trimester FPG was associated with adverse pregnancy outcomes. The performance of first-trimester FPG as a predicting index for GDM is still controversial. The potential problems are highly dependent on the diagnostic criteria for GDM 14,15 and ethnic differences. In Bhattacharya's study, they found that FPG did not predict GDM in later pregnancy using the "two-step approach" for GDM diagnosis. 16 In Riskin's study, 9 by using a 3h 100-g glucose tolerance test and the Carpenter and Coustan criteria, 17 they concluded that higher first-trimester fasting glucose could be used as a predictor for the development of GDM among young pregnant women in Israel. In the study of Sacks, 18 though they concluded that the specificity of FPG for screening GDM in the first trimester was poor by using a one-hour 50-g glucose challenge test (GCT), the AUC was 0.7, which meant FPG still had some diagnostic accuracy for predicting GDM (AUC > 0.5). In China, using a 2h 75-g OGTT and the IADPSG criteria, Min Hao et al. 19 found FPG could be used in predicting suspicious GDM patients in the first trimester. An extensive study by Zhu et al., 20 involving 17186 women from China using the IADPSG criteria, showed that the first prenatal visit FPG correlated strongly with GDM at 24-28 weeks gestation. In our study, we also found similar diagnostic accuracy of FPG for predicting GDM when the IADPSG criteria were used.
Though there is no uniform worldwide optimal cut-off point for first-trimester FPG in predicting GDM, the results from different studies are still very close. Riskin-Mashiah et al. 9 also reported an optimal cut-off in their cohort. Further studies need to be conducted to establish the optimal threshold of FPG. Maternal metabolic status at the early stage of pregnancy may affect maternal and perinatal outcomes. 5 Riskin-Mashiah et al. 10 reported that mild hyperglycemia during early pregnancy could lead to adverse outcomes. They found a strong association between first-trimester maternal fasting glycemia and the development of GDM. Large for gestational age (LGA) and/or macrosomia also increased with increasing fasting glycemia category. However, no significant associations were found between fasting glucose and either preterm delivery (<37 weeks) or neonatal intensive care unit admission. Another large study found that first-trimester fasting glucose was associated with adverse pregnancy outcomes including GDM, LGA and/or macrosomic neonate and primary cesarean section. 16 In accordance with these studies, we found that higher first-trimester FPG was strongly associated with the development of GDM. A higher fasting glycemia level was also associated with LGA and assisted vaginal delivery and/or cesarean section. Our study adds to the literature by showing that FPG at the first trimester could be used as a valuable tool for predicting GDM in a large Chinese population. In addition, we have explored the associations between FPG and adverse pregnancy outcomes. Limitations of the study: Firstly, it was a retrospective design with unavoidable selection bias. Secondly, it was a single-center study, which may restrict worldwide application. Third, as the sample size was not large enough, we only used the lower and upper quartiles rather than a stratified analysis to evaluate the associations between first-trimester FPG and adverse pregnancy outcomes. Thus, further evaluation and studies are still necessary. CONCLUSION Based on our study, we recommend that first-trimester FPG could be used to predict GDM by using the IADPSG criteria. Higher first-trimester FPG was associated with adverse pregnancy outcomes. However, further studies are needed to evaluate the value of first-trimester FPG as a predictor for GDM in multicenter settings and the usefulness of timely interventions on pregnancy outcomes.
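To illustrate how the screening quantities reported above relate to each other, the following Python sketch computes PPV, NPV and likelihood ratios from the sensitivity (64.29%), specificity (56.45%) and GDM prevalence (224/2112) reported in this study. The formulas are standard Bayes' theorem arithmetic; this is only an illustration, not the authors' analysis code (the study used SPSS).

def screening_metrics(sensitivity, specificity, prevalence):
    # Standard 2x2 screening-test arithmetic via Bayes' theorem.
    tp = sensitivity * prevalence               # true positives (per subject)
    fp = (1 - specificity) * (1 - prevalence)   # false positives
    fn = (1 - sensitivity) * prevalence         # false negatives
    tn = specificity * (1 - prevalence)         # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    lr_pos = sensitivity / (1 - specificity)
    lr_neg = (1 - sensitivity) / specificity
    return ppv, npv, lr_pos, lr_neg

# Values reported for the 4.5 mmol/L FPG cut-off in this study.
ppv, npv, lr_pos, lr_neg = screening_metrics(0.6429, 0.5645, 224 / 2112)
print(f"PPV={ppv:.3f}, NPV={npv:.3f}, LR+={lr_pos:.2f}, LR-={lr_neg:.2f}")

Note that any PPV computed this way is strongly driven by the roughly 10% prevalence of GDM in the cohort, not only by the test characteristics themselves.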
2019-03-18T14:05:26.114Z
2019-01-04T00:00:00.000
{ "year": 2019, "sha1": "29a18712891fa0e1837a72669622f38ffc7db596", "oa_license": "CCBY", "oa_url": "http://pjms.org.pk/index.php/pjms/article/download/216/35", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "29a18712891fa0e1837a72669622f38ffc7db596", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
262601136
pes2o/s2orc
v3-fos-license
Behavioral Effects of Repetitive Transcranial Magnetic Stimulation in Disorders of Consciousness: A Systematic Review and Meta-Analysis Traumatic brain injury, cardiac arrest, intracerebral hemorrhage, and ischemic stroke may cause disorders of consciousness (DoC). Repetitive transcranial magnetic stimulation (rTMS) has been used to promote the recovery of disorders of consciousness (DoC) patients. In this meta-analysis, we examined whether rTMS can relieve DoC patient symptoms. We searched through journal articles indexed in PubMed, the Web of Science, Embase, Scopus, and the Cochrane Library until 20 April 2023. We assessed whether studies used rTMS as an intervention and reported the pre- and post-rTMS coma recovery scale-revised (CRS-R) scores. A total of 207 patients from seven trials were included. rTMS significantly improved the recovery degree of patients; the weighted mean difference (WMD) of the change in the CRS-R score was 1.89 (95% confidence interval (CI): 1.39–2.39; p < 0.00001) in comparison with controls. The subgroup analysis showed a significant improvement in CRS-R scores in rTMS over the dorsolateral prefrontal cortex (WMD = 2.24; 95% CI: 1.55–2.92; p < 0.00001; I2 = 31%) and the primary motor cortex (WMD = 1.63; 95% CI: 0.69–2.57; p = 0.0007; I2 = 14%). Twenty-hertz rTMS significantly improved CRS-R scores in patients with DoC (WMD = 1.61; 95% CI: 0.39–2.83; p = 0.010; I2 = 31%). Furthermore, CRS-R scores in rTMS over 20 sessions significantly improved (WMD = 1.75; 95% CI: 0.95–2.55; p < 0.0001; I2 = 12%). rTMS improved the symptoms of DoC patients; however, the available evidence remains limited and inadequate. Introduction Disorders of consciousness (DoC) refers to alterations in arousal or awareness, which are commonly caused by traumatic brain injury, cardiac arrest, intracerebral hemorrhage, and ischemic stroke [1]. The levels of DoC are distinguished using the coma recovery scale-revised (CRS-R) score. Patients with DoC are categorized as being in a coma, a vegetative state (VS; also known as unresponsive wakefulness syndrome (UWS)), or a minimally conscious state (MCS) [2]. Some studies have shown that, compared with other behavioral assessment scales, the CRS-R scale is more sensitive and contains evaluation criteria suitable for DoC patients [3][4][5]. Therefore, in this study, the CRS-R scale was used as a standard for the improvement of the consciousness level in the diagnosis of patients with DoC.
It has been shown that the thalamus-based consciousness system is disrupted in patients with DoC. The functional connectivity between the bilateral thalamus and the whole brain is damaged [6]. A study showed that a lesion of the pontine tegmentum is significantly related to DoC [7]. Two cortical regions, the ventral anterior insula and the pregenual anterior cingulate cortex, become disconnected in patients with DoC [7]. Regardless of the cause, the physiological mechanism is the widespread extinction of excitatory synaptic activity in the cerebral cortex [8]. The loss of direct structural input or the reduction in neuronal input in the neocortex and thalamus results in a process called "disfacilitation", which leads to a decrease in the neuronal firing rate [1]. The development of diagnostic and prognostic techniques provides new opportunities to detect consciousness recovery and increase the treatment potential of patients diagnosed with DoC [1]. Some studies have shown that most patients with DoC regained consciousness and functional recovery during rehabilitation, indicating that effective rehabilitation intervention methods are meaningful for consciousness improvement in patients with DoC. They also suggested that caution is necessary in deciding whether patients with DoC should discontinue treatment [9][10][11][12]. Nevertheless, the available intervention strategies for facilitating the restoration of consciousness are restricted in scope. Suitable and efficacious therapies for patients diagnosed with DoC remain insufficiently established. In recent times, there has been a growing utilization of noninvasive brain stimulation methods to enhance the process of regaining consciousness in individuals who have experienced significant brain injury. Repetitive transcranial magnetic stimulation (rTMS) is a pain-free and indirect method used to induce excitability changes in the cortex via a wire coil generating a magnetic field. The stimulation frequency of rTMS is divided into high-frequency and low-frequency. Low-frequency stimulation (<1 Hz) has inhibitory effects and induces long-term depression-like plasticity, while high-frequency stimulation (≥5 Hz) causes a brain excitation effect and induces long-term potentiation-like plasticity [13].
The dorsolateral prefrontal cortex (DLPFC) is one of the main target stimulation areas of rTMS [14]. A study indicated that high-frequency rTMS over the DLPFC elicits an enhancement in cortical activity in patients with obsessive-compulsive disorder, but low-frequency rTMS over the DLPFC induced an opposite impact [15]. However, the recovery of consciousness is not just associated with a single brain region. Research has demonstrated that individuals with DoC have disturbances in the brain's functional networks responsible for processing internal thoughts and external stimuli. These disruptions are observed both during periods of rest and when engaged in goal-oriented tasks, and are found to be distinct from those observed in healthy individuals [16]. One potential etiological factor of DoC could be the disruption of the default-mode network (DMN) [17]. The extent of modified brain functional network connectivity may be correlated with the degree of impaired consciousness, and the restoration of connectivity could be linked to the recovery of consciousness [16]. In addition, the primary motor cortex (M1) is another possible helpful target stimulation area of rTMS for the motor rehabilitation of patients with neurological disorders. For example, the M1 on the affected side undergoes substantial alterations and impairs motor function in patients with stroke [18]. The M1 was shown to be suppressed on the afflicted side and overactivated on the unaffected side [19]. In order to alleviate patient symptoms, the excitation of the M1 on the affected side and the inhibition of the M1 on the unaffected side are frequently employed [20,21]. Low-frequency rTMS has not only been shown to effectively decrease the cortical excitability of the unaffected M1, but also increased that of the affected M1 at the same time, while high-frequency rTMS increased the cortical excitability of M1 on the affected side in patients with stroke [22]. Some studies reported that rTMS over the M1 could affect other brain regions and benefit patients with DoC [23][24][25][26]. A recent study showed that rTMS can be used over the posterior parietal cortex (PPC) to treat patients with DoC; however, the effectiveness requires further studies [27]. This meta-analysis aimed to systematically assess the efficacy of (1) rTMS in improving the symptoms of DoC patients and (2) the potential of different rTMS stimulation regions, stimulation frequencies, and stimulus durations to improve the symptoms of patients with DoC using subgroup analyses. Search Strategy The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were utilized to conduct this systematic review on the effectiveness of utilizing rTMS as an intervention for patients with DoC. We searched through journal articles indexed in PubMed, Embase, Cochrane, Scopus, and the Web of Science until 20 April 2023. To identify studies using rTMS as an intervention for DoC patients, the following search terms were used: "minimally conscious state", "vegetative state", "disorder of consciousness", or "transcranial magnetic stimulation". Lastly, a thorough search was conducted on the list of references of the articles included in the study to identify any new trials that were pertinent to the research. The search procedure is depicted in Figure 1. We registered the protocol on the international prospective register of systematic reviews (https://www.crd.york.ac.uk/prospero/ (accessed on 22 August 2023)), for which the registration number is CRD42023453758. Inclusion and Exclusion Criteria This meta-analysis included seven randomized controlled trials (RCTs) and one nonrandomized trial that investigated the effectiveness of rTMS in individuals diagnosed with DoC. Due to the limited number of RCTs, the search was expanded to include one nonrandomized trial. The inclusion and exclusion criteria are detailed below. Studies were included if: (1) patients were diagnosed with MCS or VS/UWS; (2) rTMS was used as an intervention; (3) a sham stimulation was performed as the control group; and (4) the pre- and post-rTMS CRS-R was measured in the DoC patients. Studies were excluded if: (1) they were published as a meta-analysis, review, case report, guideline, or book; (2) no control trials or single-group pre- and post-test experiments were performed; (3) they were not published in English; or (4) they were animal trials. Quality Assessment Two authors assessed the quality of the eight included articles independently. The Cochrane collaboration tool, which has seven domain biases, was utilized. Three levels of bias risk (high, low, and unclear) were applied to grade the included studies. The articles included three high-quality studies with a low-level bias risk [27][28][29], two studies with a high level of bias risk [30,31], and three with a moderate risk of bias [24][25][26]. The risk of bias is shown in Figure 2. Two independent authors rated each study and extracted the information. Any disagreement was resolved through discussion between the two authors. Data Extraction The relevant data extracted from each study included: (1) the authors and publication year; (2) the sample size and participant characteristics; (3) details of the stimulation protocol, such as the specific brain target region, stimulation frequency, stimulation duration, and design of the sham stimulation; and (4) the outcome measurements, encompassing behavioral outcomes, if they were provided.
Data Analysis All results of this meta-analysis were calculated using Review Manager software (Review Manager 5.4). The CRS-R was utilized to measure the degree of consciousness. The weighted mean difference (WMD) was employed to integrate the effect size when the outcomes were measured utilizing the same scales. In instances where there was a significant level of heterogeneity or a lack of consistency in the units of weights and measurements, as well as the techniques of measurement, the standardized mean difference (SMD) was employed to integrate the effect size. In addition, the effect sizes of the experimental and sham control groups were synthesized through the changes between the post-intervention values and the baseline. We also obtained the change values (mean ± standard deviation). A value of I² less than or equal to 50% suggested a low degree of heterogeneity, indicating that the results should be combined using a fixed-effect model. On the other hand, I² values ranging from 50% to 75% and greater than 75% indicated moderate and high levels of heterogeneity, correspondingly [26]. In such cases, the appropriate approach for combining the data would be to use a random effects model. p < 0.05 was deemed to indicate statistical significance. Search Results According to the above search strategy, we found 388 articles in the initial search, and 197 articles remained after excluding duplicate articles. Of these, 66 articles were meta-analyses or reviews and 2 articles were not written in the English language. In addition, six case reports and two guidelines were also excluded. After reading the abstracts and titles, articles unrelated to the main content of this meta-analysis were excluded. Finally, if the articles did not conform to the content of this study after reading the complete text, the articles were excluded. A total of eight articles were ultimately included. Characteristics of Included Studies The characteristics of the included studies are shown in Table 1. The eight articles included rTMS as the intervention, with the placebo group as the control group. All studies assessed behavioral outcomes using the CRS-R score. A recent study demonstrated that patients with DoC after brain injury may benefit from continuous functional monitoring and new rehabilitation programs for the first decade [9]. Therefore, the included participants were less than ten years after brain injury. Heterogeneity Analysis To demonstrate the reliability of this study, we excluded the literature one by one, and found that one of the articles was highly heterogeneous. The total heterogeneity analysis reported that the study of Shen et al. had a significant effect on the result of this meta-analysis; the heterogeneity was 67% (p = 0.004) [29]. If we removed this article, the total heterogeneity was reduced to 20% (p = 0.08), and the heterogeneity of the subgroup was reduced to 0% (p = 0.999). In the study of Shen et al., all patients in the experimental group were administered conventional rehabilitation interventions, which consisted of a 20 min session of electrical stimulation targeting the median nerve, followed by a 30 min period of passive limb movement, and concluding with a 40 min hyperbaric oxygen treatment. This might have been the reason for the large heterogeneity. Therefore, to improve the accuracy of this meta-analysis, we excluded this study, and the meta-analysis was performed on the remaining seven studies. In the subgroup of 10 Hz, there was another article with high heterogeneity [31]; the heterogeneity of the subgroup was 79% (p = 0.008). If we removed this article, the heterogeneity of the subgroup was reduced to 0% (p = 0.71); thus, we excluded this study in the subgroup of 10 Hz. Therefore, due to the high heterogeneity of this paper, subgroup analyses could not be carried out. There was only one article in the one-session group, so we did not analyze this subgroup. Meta-Analysis in All Protocols The heterogeneity was tested in the seven articles (I² = 20%, p = 0.28); thus, the meta-analysis could be conducted with a fixed effects model. This meta-analysis demonstrated that the WMD in the change in the CRS-R score was 1.89 (95% confidence interval (CI) 1.39-2.39; p < 0.00001) between the rTMS and sham control group (Figure 3). Discussion This meta-analysis was performed on seven articles to examine the efficacy of the rTMS intervention, as compared to sham controls, in improving the symptoms of 207 patients with DoC. The results indicated that rTMS improved the recovery of patients with DoC, but different stimulation frequencies, stimulation sessions, and stimulation brain regions produced different effects. High-frequency rTMS could enhance the expression of brain-derived neurotrophic factors by activating the Ca²⁺ signaling pathway [32]. rTMS induced synaptic plasticity and improved brain functional connectivity to promote the recovery of patients with DoC [33]. A study showed that high- or low-frequency rTMS applied to the M1 increased the cortical excitability on the stimulated side, which would increase the patients' CRS-R scores and promote the recovery of motor function [22]. The results of our meta-analysis showed that 20 Hz rTMS positively promoted the recovery of DoC patients, but 10 Hz rTMS did not induce significant changes. The research results additionally demonstrated that 4 weeks of rTMS improved the CRS-R scores among individuals diagnosed with disorders of consciousness, but five stimulation sessions of rTMS did not significantly improve the symptoms of patients with DoC. This could be because the number of stimulation sessions affected the efficacy in improving the symptoms of patients with DoC, since all studies using the 20 Hz rTMS intervention stimulated for more than five sessions. Zhang et al. reported that 40 sessions of rTMS worked better than 20 sessions of rTMS intervention, suggesting that patients with DoC require long-term rTMS therapeutic intervention [26]. It may be that brain plasticity can only be observed after a long period of rTMS treatment [34]. However, there was a lack of studies comparing long-term versus short-term rTMS intervention; therefore, we could not be sure if long-term rTMS was more effective. However, rTMS was suggested to be a viable treatment for patients with DoC.
The results showed that rTMS targeting the DLPFC and M1 improved the symptoms of patients with DoC.The result of this meta-analysis was consistent with Feng et al., who reported that noninvasive brain stimulation over the DLPFC could improve the recovery of consciousness among individuals diagnosed with DoC [35].Several studies demonstrated that there is a disruption in the functional networks of the brain that are essential for processing internal thoughts and external stimuli in patients diagnosed with DoC [16].The interruption of the DMN may be a cause of DoC [17].Some studies reported that the DLPFC is a core hub and rTMS targeting the DLPFC can regulate DMN connectivity [36][37][38].The M1 is also a target region for improving the symptoms of patients with DoC.However, a study reported that rTMS over the M1 was ineffective [24].One possible reason is that the cortical connections in patients with DoC are completely or almost completely disordered [1], which may result in the lack of a neural network capable of reacting as an effective matrix to the effects of rTMS applied to the M1 [39].This hypothesis was consistent with demonstrating the severe functional impairment of brain interregional connectivity in VS tested simultaneously with transcranial magnetic stimulation and electroencephalograph recordings [40].Therefore, rTMS applied to the M1 may not be the most appropriate target area in patients with DoC; therefore, more studies are needed to prove that rTMS targeting the M1 is an effective stimulation region. The PPC holds growing significance in the clinical rehabilitation of individuals with DoC.It serves as a vital connection within the DMN and assumes a pivotal role in the recovery process of patients with DoC [41].The DMN is commonly acknowledged as a collection of cortical regions, such as the left and right middle frontal gyrus, bilateral medial frontal, left and right middle temporal, occipital gyrus, and bilateral precuneus, among other locations [42].The recovery of individuals with DoC is not only dependent on a single brain region.The integrity of the default-mode network (DMN) has been found to potentially have a correlation with the levels of residual consciousness in individuals diagnosed with disorders of consciousness [43].Hence, the utilization of the rTMS on PPC may potentially contribute to facilitating the rehabilitation process of individuals diagnosed with DoC.To date, there has been limited research conducted on the impact of rTMS specifically targeting the PPC in ameliorating the symptoms experienced by individuals with DoC.This area of investigation has been explored in a single paper, thus far [27].The research results of this study showed that the left PPC holds significant potential as a target for rTMS interventions aimed at enhancing functional recovery in patients who have a positive response.The study found a substantial rise in the total score in the CRS-R in the group that received rTMS compared to the group that received a sham treatment.This suggests that applying rTMS over the left PPC can significantly enhance the consciousness in patients diagnosed with DoC.This phenomenon could perhaps be attributed to the left PPC, which is an integral component of the DMN.Enhancing the activity of the left PPC is essential for the restoration of consciousness and has been linked to various etiologies of injury in patients [41]. 
The research findings indicated that rTMS was associated with a significant increase in the possibility of experiencing minor adverse effects, including headache, pain, dizziness, drowsiness, and dry mouth [44][45][46]. It is important to point out that these symptoms tended to rapidly diminish upon the withdrawal of rTMS [45]. rTMS is generally considered to be a safe and well-tolerated intervention [44]. Hence, rTMS may be considered a safe practice. In the case of rTMS as an intervention, the investigator must check the patient for a variety of risk factors, including, but not limited to, potential effects on brain function and the occurrence of seizures [47]. In addition, the effects of rTMS on medical devices, such as pacemakers, brain devices, and hearing aids, warrant careful consideration. If the patient is pregnant, it is crucial to analyze the potential consequences of rTMS on the unborn [48]. If a patient is determined to be susceptible to negative outcomes, it becomes necessary to conduct supplementary safety investigations and undertake vigilant safety monitoring throughout the process of clinical trials. When a patient exhibits a heightened susceptibility to experiencing severe adverse effects, it is imperative to conduct a meticulous assessment of the risk-benefit ratio and take precautions in the administration of rTMS [47,48]. The safety of stimulation levels may vary based on the specific cortical region being targeted and the corresponding risk-benefit ratio. Furthermore, the criteria for determining an acceptable level of risk may also differ [47]. This meta-analysis also had some limitations. Firstly, we did not include unpublished studies and could have overlooked relevant research publications in languages other than English. Secondly, due to the limited number of relevant studies, six articles were RCTs and one article was not randomly assigned. Finally, our analysis of the stimulation duration did not include a subgroup due to there only being one study. It is hoped that future treatment studies will explore the impact of various stimulation parameters, such as the stimulation frequency, stimulation duration, and stimulation target region, in order to identify a more suitable dosage of stimulation. Furthermore, it is expected that further research will explore the utilization of rTMS as a potential intervention to enhance the level of consciousness among those afflicted with disorders of consciousness. Conclusions This meta-analysis found that the rTMS intervention is safe and effective in improving the symptoms of patients with disorders of consciousness. It was proposed that, in order to enhance the effectiveness of rTMS, a frequency of 20 Hz should be delivered to either the DLPFC or the M1 for a total of 20 sessions. This approach aims to facilitate the restoration of consciousness in individuals diagnosed with DoC. Figure 1. The flowchart of the search procedure. rTMS: repetitive transcranial magnetic stimulation; CRS-R: coma recovery scale-revised. Figure 3. Meta-analysis of all protocols on the change in CRS-R score in patients with disorders of consciousness [24-28,30,31]. Table 1. Characteristics of included studies.
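The fixed-effect pooling described in the Data Analysis section (inverse-variance weighted mean difference of CRS-R change scores, with Cochran's Q and I² used to judge heterogeneity) can be sketched in Python as follows. The three studies listed are made-up numbers for illustration only, not the seven trials actually included, and the original analysis was run in Review Manager 5.4.

import math

def fixed_effect_wmd(studies):
    # Each study: (mean_trt, sd_trt, n_trt, mean_ctl, sd_ctl, n_ctl) for the
    # change in CRS-R score. Inverse-variance fixed-effect pooling.
    mds, weights = [], []
    for m_t, sd_t, n_t, m_c, sd_c, n_c in studies:
        md = m_t - m_c
        se = math.sqrt(sd_t**2 / n_t + sd_c**2 / n_c)
        mds.append(md)
        weights.append(1.0 / se**2)
    w_sum = sum(weights)
    pooled = sum(w * md for w, md in zip(weights, mds)) / w_sum
    se_pooled = math.sqrt(1.0 / w_sum)
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    # Cochran's Q and I^2 to decide between fixed- and random-effects models.
    q = sum(w * (md - pooled) ** 2 for w, md in zip(weights, mds))
    df = len(studies) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, ci, i2

# Hypothetical change-score data (mean, SD, n) for rTMS vs. sham groups.
example = [(3.1, 2.0, 15, 1.0, 1.8, 15),
           (2.4, 2.5, 20, 0.8, 2.2, 20),
           (1.9, 1.5, 12, 0.5, 1.6, 12)]
wmd, ci, i2 = fixed_effect_wmd(example)
print(f"WMD={wmd:.2f}, 95% CI=({ci[0]:.2f}, {ci[1]:.2f}), I2={i2:.0f}%")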
2023-09-26T15:02:44.161Z
2023-09-23T00:00:00.000
{ "year": 2023, "sha1": "3f3a1cc6d0c8fdc51c2f9627ac2db8d0fc5716c3", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-3425/13/10/1362/pdf?version=1695453423", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "7a9d41eb08baa2efc5fd662bba2a476aa7009db3", "s2fieldsofstudy": [ "Psychology", "Biology" ], "extfieldsofstudy": [] }
19694909
pes2o/s2orc
v3-fos-license
Safety and long-term immunological effects of CryJ2-LAMP plasmid vaccine in Japanese red cedar atopic subjects: A phase I study ABSTRACT Japanese Red Cedar (JRC) pollen induced allergy affects one third of Japanese and the development of effective therapies remains an unachieved challenge. We designed a DNA vaccine encoding CryJ2 allergen from the JRC pollen and Lysosomal Associated Membrane Protein 1 (LAMP-1) to treat JRC allergy. These Phase IA and IB trials assessed safety and immunological effects of the investigational CryJ2-LAMP DNA vaccine in both non-sensitive and sensitive Japanese expatriates living in Honolulu, Hawaii. In the Phase IA trial, 6 JRC non-sensitive subjects and 9 JRC and/or Mountain Cedar (MC) sensitive subjects were given 4 vaccine doses (each 4mg/1ml) intramuscularly (IM) at 14-day intervals. Nine JRC and/or MC sensitive subjects were given 4 doses (2 mg/0.5 ml) IM at 14-day intervals. The safety and functional biomarkers were followed for 132 d. Following this, 17 of 24 subjects were recruited into the IB trial and received one booster dose (2 mg/0.5 ml) IM approximately 300 d after the first vaccination dose to which they were randomized in the first phase of the trial. All safety endpoints were met and all subjects tolerated CryJ2-LAMP vaccinations well. At the end of the IA trial, 10 out of 12 JRC sensitive and 6 out of 11 MC sensitive subjects experienced skin test negative conversion, possibly related to the CryJ2-LAMP vaccinations. Collectively, these data suggested that the CryJ2-LAMP DNA vaccine is safe and may be immunologically effective in treating JRC induced allergy. Introduction As a major source of environmental allergens in Japan, Japanese red cedar (JRC) pollen causes pollinosis (JCP) in 30-35% of the Japanese population during early spring. 1 Cry j 1 and Cry j 2 proteins are the 2 major allergenic components in JRC pollen. [2][3][4] T cell responses and IgE antibodies specific for these 2 proteins have been found in most JCP patients. 5,6 Because of the high sequence identity and cross-reactivity between JRC pollen and Japanese cypress pollen, which is dispersed after the season of JRC pollen, the pollinosis symptoms might last as long as 4 months in some patients. [7][8][9] The quality of life of such patients is greatly affected. Meanwhile, identifying an effective therapy for JRC allergy remains an unmet need. JRC is not native to North America, but is found as an ornamental tree across the Southeastern and Southwestern United States. Furthermore, pollen from Mountain Cedar (MC), a close relative of JRC native to North America, cross-reacts to a high degree with JRC pollen. It has been found that Jun a 1 and Jun a 2 proteins from MC share 80% and 71% identity with Cry j 1 and Cry j 2, respectively. [10][11][12][13] MC pollen causes a severe respiratory tract allergy in Texas during winter months. 14 Similarly to JCP, there is no effective immunotherapy available for MC allergy. The concept of a DNA vaccine was described in the early 1990s. 15,16 One unique feature of DNA vaccination is its ability to rapidly induce strong CD4+ and CD8+ T cell and antibody responses. Therefore, DNA vaccination has been substantially studied in a wide range of diseases including allergy, cancer, infectious diseases, and autoimmune diseases. 17 Type-I allergic diseases, including JRC and MC pollen induced allergy, are mediated by CD4+ Th2 cells, which help B cells produce IgE antibodies.
18 In several animal models for allergic diseases, it has been demonstrated that DNA vaccination can induce a Th1 type immune response, which could counterbalance the Th2 response. [19][20][21][22][23] Thus, DNA immunization represents a potential intervention for preventing or treating JRC/ MC induced allergy. Immunomic Therapeutics, Inc.'s research group developed a novel allergy immunotherapy, based on LAMP technology, to treat pollen induced allergies. Lysosomal Associated Membrane Protein 1 (LAMP-1 or LAMP) is a lysosomal residential protein. Its lysosomal targeting property has been initially used in the DNA vaccine fields in animal models for infectious diseases as well as in a variety of cell therapies for human oncology indications. [24][25][26][27][28][29] It has been shown that inclusion of LAMP in the DNA plasmids significantly enhanced both cellular and humoral responses in vaccinated animals. In a recent study, DNA plasmids encoding LAMP fused with Cry j 1 and Cry j 2 protein elicited a strong Th1 response in mice. After repeated allergen exposure, vaccinated mice were well protected, as indicated by a minimal level of allergen-specific IgE production. In contrast, the control mice exhibited a typical Th2 response. Based upon these data we believed that the LAMP based DNA vaccination skewed the allergic reaction from a Th2 toward a Th1 dominant response. 30 In the current Phase IA and IB clinical trials, we evaluated the safety and immunological effects of an investigational DNA vaccine encoding CryJ2-LAMP protein in human subjects. Cry j 2 was chosen as our first investigational product because it has been found that immunogenicity of Cry j 2 is stronger than that of Cry j 1. 31 Both JRC and/or MC atopic subjects were vaccinated with CryJ2-LAMP plasmid 4 times in the Phase IA trial and some subjects were boosted once in the Phase IB stage. The safety and immunologic biomarkers were assessed in these subjects for an accumulated time from Day 0 of the Phase IA to the end of the Phase IB, which ranged between 331-416 d. The results indicated that CryJ2-LAMP DNA vaccine is safe and has a potential as a therapeutic for JRC and/or MC sensitive subjects. Methods This protocol was reviewed and approved by the Sterling Institutional Review Board (Atlanta, Georgia), an independent review broad. This trial was conducted under an Investigational New Drug Application (IND) and registered on clinicaltrials. gov as NCT01707069 and NCT01966224. Subject recruitment Subjects were recruited from the Japanese community in Honolulu, Hawaii, favoring those who had more recently arrived from Japan within the last 5 y. The inclusion and exclusion criteria are given in detail in the Supplementary Documents. Subjects were identified as either non-atopic or had atopic reactivity to JRC or MC allergens. It should be noted that we were unable to identify in Honolulu and on the island any mature JRC trees; thus, any Japanese subjects exhibiting skin test reactivity were presumed to have naturally become sensitive to JRC in Japan. Subjects were also screened for atopic sensitivity to a variety of JRC unrelated allergens, including southern grasses, southern California tree, ragweed, and dust mites. Laboratory evaluations were performed by LabCorp for Phase IA, a local reference laboratory in Honolulu for Phase IB, and physical examinations were performed at the clinical site, East-West Medical Research Institute (Honolulu, HI). 
41 subjects were screened, of which 11 subjects failed the inclusion/exclusion criteria and were not included in the trial. 30 subjects underwent the informed consent process; however, 6 of the 30 subjects withdrew their Informed Consent before entering into the first vaccination of this trial because of family issues or scheduling conflicts. One subject returned to Japan after the 4th vaccination and was therefore considered lost to follow up. All remaining 23 subjects completed the 0-72 day initial trial protocol. An amendment to the protocol for a 60 day extension was requested but not authorized in time; hence, 9 subjects were not able to complete visit 7 (day 102). However, all of these subjects did return for visit 8 on day 132. During the Phase IB trial, 4 subjects from Cohort 1, 7 from Cohort 2, and 6 from Cohort 3 of the Phase IA trial were re-recruited. Subject #102 (Cohort 2) and subject #122 (Cohort 3) completed the trial without receiving a booster dose due to either anemia at screening or subject choice, respectively. Subject #120 from Cohort 2 did not finish the Phase IB trial because she moved to Japan. Trial design and treatment This study included a Phase IA and a Phase IB trial (Fig. 1). The primary end point of the study was to assess the safety as determined by self-reported AEs, vital signs, clinical laboratory evaluations and changes in physical examination in non-atopic (no allergic sensitivities to CryJ2 allergen) subjects and atopic subjects with known allergy to JRC or MC allergen as identified by positive skin test reactivity. The secondary endpoints of this study were to examine whether there are increases in beneficial immunoglobulins (classes of CryJ2-specific IgG), as well as changes in the IgE antibody levels, in the serum of non-atopic (no allergic sensitivities to CryJ2 allergen) subjects and in atopic subjects with known allergy to JRC CryJ2 allergen as identified by positive skin test reactivity and/or IgG specific antibody titers above baseline. The Phase IA was designed to assess acute and chronic toxicity up to 132 d after the first of 4 intramuscular (IM) immunizations. 24 subjects were divided into 3 cohorts. Subjects in Cohort 1 (JRC non-atopic, n = 6) received a full 4 (4 mg dose) dosing regimen. Subjects (JRC and/or MC atopic) in Cohort 2 (n = 9) and Cohort 3 (n = 9) received a total of 4 half (2 mg dose) or 4 full (4 mg dose) dosing regimens, respectively. All subjects were treated with plasmid DNA by IM injection at 14 day intervals. The Phase IB trial was designed to continue evaluating the safety and immunological responses of subjects from the Phase IA trial. In the Phase IB trial, 15 out of 17 subjects were boosted with 2 mg plasmid DNA approximately 300 d after the 1st vaccination in the Phase IA trial. Safety, skin prick test (SPT), and antibody measurement Methods for safety assessment, SPT, and antibody measurement are provided in the Supplementary Documents (Methods). Subject demographics 24 subjects were enrolled in the Phase IA trial. Their demographics and characteristics are described in Table 1. (Table 1 footnotes: FEV1 predicted (%) is the FEV1 of the subject divided by the average FEV1 in the population for any person of similar age, sex and body composition; heights/weights were originally measured using the imperial system and converted into the metric system for consistency.) There were more females (n = 6) than males (n = 3) in Cohorts 2 and 3, but this did not appear to influence the safety or immunologic data. Seventeen subjects from the Phase IA trial were recruited to the Phase IB trial. Their demographics and characteristics are described in the Supplementary Documents (Table S1). Again, more females than males were enrolled (Cohort 2, 5 females and 2 males; Cohort 3, 5 females and 1 male).
There were no other differences among groups. Adverse events During the Phase IA trial (0 day-132 days), a total 88 treatment-emergent adverse events (TEAEs) were reported ( Table 2) over 8 trial visits. Twenty subjects reported at least one TEAE, 5 from Cohort 1, 6 from Cohort 2, and 9 from Cohort 3 (Supplementary Documents, Table S2). There was no reported early or late phase anaphylaxis or other systemic illness. The most frequently reported TEAE was injection site erythema which was considered definitely related. The majority of TEAEs were mild in severity. There were a total of 41 "definitely related" events of local injection site reactions, commonly expected with vaccinations. These local reactions did not change with time nor increase in severity after each vaccination. During the Phase IB trial a total of 9 TEAEs in 3 subjects were reported (Table 3). Only one definitely related TEAE was reported, which was itchiness at the injection site. In addition, fatigue was reported by one subject and was considered possibly related to the vaccine. The event was transient (4 hours duration) and there was no reoccurrence on further follow-up. None of these reported TEAEs required medical intervention. One subject who participated in both IA and IB trials experienced 3 raised systolic blood pressure results on Day 28. The subject had no prior or subsequent raised systolic blood pressure reading. All other subject's vital signs remained in the normal range throughout the trial period (data not shown). No subjects experienced significant changes in the clinical laboratory parameters (hematology, blood chemistry, urinalysis, and serology), which required medical intervention. Anti-LAMP antibodies were monitored before each vaccination and thereafter in total 8 time points in the Phase IA and 3 time points in the Phase IB study. Anti-LAMP IgG antibody in serum samples were measured by ELISA and spontaneous competitive inhibition ELISA. We did not find any anti-LAMP IgG positive samples throughout the study in any subjects (data not shown). FEV1 predicted (%): FEV1% (Forced expiratory volume in 1 second) of the subject divided by the average FEV1% in the population for any person of similar age, sex and body composition. a Heights/Weights were originally measured by using imperial system and converted into metric system for consistency. Skin prick test (SPT) for JRC/MC allergens The SPT for JRC/MC results for both Phase IA and IB trial are summarized in Table 4. Seventeen subjects in Cohort 2 and 3 completed the Phase IA trial. Of the 17 subjects, 12 were JRC atopic by SPT and 11 were MC atopic. At the end of the Phase IA trial, of these 12 JRC atopic subjects, 10 experienced SPT reaction conversion from positive to negative for JRC extract. Of the 11 MC atopic subjects, 6 experienced SPT negative conversion for MC extract. All 3 subjects (#105, #112, and #136), who were found skin test positive for Cry j 2 allergen at screening, showed conversion to negative for Cry j 2 on day 132 (data not shown). The SPT negative conversion for JRC/MC was either maintained or achieved for all subjects who enrolled in the Phase IB trial. The only exception was one subject that experienced MC SPT negative conversion but did not maintain the conversion at the end of trial. Also, this atopic subject at visit #12 had a positive SPT reaction for JRC for the first time. 
It is worth noting that 2 subjects #102 and #122, who did not receive the A treatment emergent adverse event is an adverse event that occurs after the subject receives any dose of the assigned study treatment. Skin prick test for allergens unrelated to JRC The SPT results for the unrelated allergens from all subjects who completed the Phase IA and/or Phase IB trials are summarized in Table 5. At day 132, 10 of the 17 subjects in Cohorts 2 and 3, who completed the Phase IA trial, experienced SPT negative conversion for at least one of the 4 unrelated skin test allergens. Eight out of these 10 subjects also experienced a shift from positive skin tests for JRC, MC, or CryJ2 at screening to negative skin tests at day 132. The majority of the subjects who enrolled in the Phase IB maintained the negative SPT conversion to the end of trial. At day 132, 3 subjects converted from SPT reaction negative to positive for southern grass and one was positive converted for western ragweed. Antibody detection Serological IgG and IgE antibodies for JRC, MC, and other allergens and total IgG, IgG1, IgG2, IgG3, and IgG4 antibodies for Cry j 2 were tested by the ImmunoCAP Allergen Specific IgG/E method. Cry j 2 specific IgE levels were examined by the Conventional RAST method. At the end of the Phase IA trial, we did not observe any significant changes in anti-JRC, -Cry j 2, and -MC IgG (total IgG and/or subclasses) antibodies (data not shown) and IgE antibodies (Supplementary Documents, Tables S3-S5) in any cohorts. At the end of the Phase IB trial, most subjects showed a trend of increasing of anti-JRC IgG production (Supplementary Documents, Table S6). No significant change was found in serological levels of IgG and IgE against unrelated allergens including southern grass, ragweed, dust mite and southern California tree (data not shown) and there were no concomitant changes in subjects' allergen-specific IgG and IgE antibody levels with the SPT results for these unrelated allergens. Discussion The primary objective of these Phase IA and IB trials was to determine the safety of the investigational CryJ2-LAMP DNA vaccine. The 4 initial and a boosting dose regimen of the CryJ2-LAMP vaccination was well tolerated by all subjects and the safety endpoints were met. A majority of the atopic subjects experienced conversions of skin test from positive to negative for JRC and/or MC at the end of the trial, possibly due to the CryJ2-LAMP vaccination. Accumulated evidence from clinical trials for infectious diseases and cancer indicate that DNA vaccines are safe in humans. [32][33][34][35] However, one concern with the administration of DNA products in allergic patients is that the plasmid encoded allergens might trigger or worsen allergic responses. The investigational CryJ2-LAMP DNA plasmid contains 3 major segments: the luminal domain of LAMP, the CryJ2 sequence, and the transmembrane/cytoplasmic signaling domain of LAMP. Theoretically, this strategy eliminates the release of free allergen into circulation; thus, allowing patient exposure to DNA vaccines without the fear of atopic reactions. The intracellular accumulation of LAMP fused target antigens in antigen presenting cells has been confirmed by using several viral proteins. 27,28 Our safety data support the concept as no anaphylactic/allergic response or other systemic illness was found during the trials. 42 out of 97 observed TEAEs were mild skin injection site reactions, which were expected. 
The total number and frequency of TEAEs from Cohort 2, which is the 2mg-dose group, was lower than those from the 4mg-dose groups, indicating a correlation between TEAEs and the amount of administrated DNA plasmid. However, there was no difference between groups in term of the severity of adverse events. Eighty six out of 88 of observed TEAEs occurred during the period from day 0 to day 72. The majority of these incidences were transitory and none of them required medication or medical attention. However, one limitation is that this study was conducted at Honolulu where we were unable to identify any JRC trees. Thus, we were unable to evaluate if natural exposure to JRC pollen simultaneously with CryJ2-LAMP vaccination. Because of a lack of mock control (empty LAMP plasmid), we were unable to determine whether the observed TEAEs were results of the expression of CryJ2 or of the LAMP technology. Nevertheless, no LAMP IgG antibody positive sample was detected at any of the up to elven time points in these trials. This correlates with the lack of clinical symptoms or data for induction of an adverse immune response to the LAMP vaccine. The physical and clinical laboratory results also support the conclusion that the CryJ2-LAMP vaccine is safe as no adverse events requiring medical intervention were found. Another advantage of LAMP technology is the induction of robust CD4 T cell responses. LAMP protein mediates the trafficking of its cargo antigens to the lysosomal/endosomal compartment and enhances the subsequent MHC class II presentation. [24][25][26] We demonstrated in an animal model that a robust antigen specific Th1 type CD4 C T cell response was induced upon CryJ-LAMP vaccination. 30 As a result, high levels of Cry j 1 or Cry j 2 specific IgG2a (Th1 type) antibody and low IgE antibody were found. In this Phase I clinical trial, we evaluated the immunological effects of CryJ2-LAMP vaccine by using skin prick test and antibody detection. SPT for JRC negative conversion was achieved in the JRC sensitive subjects (10/ 12 in Phase IA and 8/8 in Phase IB), with either 2mg-or 4mgdose of plasmid DNA. The JRC SPT negative reaction in these JRC sensitive subjects had been maintained until the end of IB trial. One subject (#140, Cohort 3) who was initially negative for JRC, transiently became to SPT positive on day 132, but returned back to SPT negative for JRC. It is surprising that 2 subjects had maintained SPT negative reaction until the end of the Phase IB trial even without receiving the single booster dose, suggesting a long-term effect of the initial CryJ2-LAMP vaccination. However, because of a lack of placebo control and natural exposition, we could not exclude the possibility that the negative skin test conversion is a result of subjects naturally outgrowing their JRC/MC sensitivity. It should be noted that these results were achieved by using only one allergen-encoding vaccine, indicating the potency of the LAMP-based DNA technique in inducing the beneficial effects in allergic patients. Because the primary goal of this study is safety evaluation and Cry j 2 has a strong immunogenicity, CryJ2-LAMP, but not the combination of CryJ2 and CryJ1 LAMP, was chosen as our first generation investigational product. Based on results from the current study, we predict that a combination of CryJ1-LAMP and CryJ2-LAMP vaccines, our next investigational products, could elicit a stronger immunological effect than the CryJ2-LAMP alone. 
Five subjects who were skin test negative for JRC but positive for MC were included in these trial. Because Jun a 2 is a high homologous protein of the Cry j 2, it is reasonable that these allergens share T cell and IgE epitopes. Indeed, CryJ2-LAMP vaccination resulted in SPT negative conversion in 6 out of total 11 MC sensitive subjects on day 132. For the remaining 5 subjects, 3 of them were enrolled in the Phase IB trial and all 3 subjects were found SPT test negative for MC during the Phase IB trial screening and before receiving the booster dose. These results suggest that in some subjects, the immunological effects might not be induced rapidly after vaccination, but can be delayed. This delay is consistent with the purported mechanism of action of LAMP vaccines since they require "re-education" of the immune system. Another possibility of this delayed conversion is that some of these MC positive SPT reaction (at screening) may be reactive to several of the MC pollen allergens, such as Jun a 1 and Jun a 2, and the single allergen encoding vaccine needs a long period to exert its effect. Only one subject (#118, Cohort 2), who experienced a MC negative conversion, was unable to maintain the negative SPT reaction at the end of trial. This subject at the last visit also had a positive skin test to JRC for the first time. Because this subject showed SPT reaction negative conversion in the previous visits, it is unlikely that this phenomenon was caused by the boosting vaccination. No other atopic and non-atopic subjects experienced this phenomenon. It has been found that immunotherapies can induce a bystander suppression to unrelated allergens. [36][37][38] In line with these findings, we did observed a substantial number of SPT conversion from positive to negative for tested unrelated allergens. Although we do not know the exact mechanism, it is possibly due to the bystander T cell help. In future studies, including a placebo control group or a control group which has subjects atopic to non-JRC/MC allergens, for example, ragweed, will help define whether the conversion of skin test to that unrelated allergens is related to the bystander effects caused by the CryJ2-LAMP vaccination. Though a few subjects experienced positive conversion for any of these unrelated allergens, there is no correlation between the SPT results for these allergens and for JRC/MC. For example, subject #125, who became SPT reaction positive for southern California tree and dust mite during the Phase IB trial, converted SPT reaction to negative for MC and then remained the negative reaction until the end of trial. Thus, it is possible that these subjects exposed to such unrelated allergens became sensitized during the trials. CryJ2-LAMP vaccinations did not induce any significant changes in the production of JRC and/or MC specific IgE antibodies, indicating that the CryJ2-LAMP is safe. However, it is worth noting that the JRC-specific IgE binding titers in this study are much lower (about 5-fold) than the typical titers in JRC allergic patients. 39 We speculate that the decrease in JRC-specific IgE titers was a result of lack of JRC pollen exposure in Honolulu. In addition, the binding titers of Cry j 2-specific IgE were lower than expected, indicating the subjects were sensitized to other allergens in the JRC pollen. 
Unlike the results from preclinical studies in which CryJ-LAMP plasmid DNA vaccinated mice exhibited a robust JRC specific IgG antibody production, herein we only observed a marginal increase of anti-JRC IgG antibody at the end of the Phase IB trial. Human subjects usually produce little or no antibodies by only DNA vaccination. 32,40,41 For example, even combined with an immunoregulatory cytokine granulocyte macrophage-colony stimulating factor encoding plasmid, DNA vaccines for malaria still failed to induce antibody response, although specific CD8 C T cell response was induced. 42 Nevertheless, recent studies indicate that DNA vaccines are excellent in priming the immune system, both cellular and humoral, if followed with a protein boost. 43,44 Thus, the DNA priming/protein boost immunization regimen has been used to improve the levels of neutralizing antibody responses, particularly in the infectious disease studies. 45,46 Therefore, we propose that in future clinical studies, vaccinated subjects might produce higher JRC specific IgG antibodies once they are exposed naturally to the JRC pollen. Considering the highly homology and crossactivities among the major allergens of JRC (Cry j 1), MC (Jun a 1), Japanese cypress (Cha o 1) and Cupressus arizonica (Cup a 1)and among Cry j 2, Jun a 2, and Cha o 2, 10,12,47,48 the CryJ-LAMP vaccines might have a potential as a therapeutic to individual allergic to such pollens, particularly to Japanese cypress, which follows the season of JRC pollen. Conclusions In summary, the investigational CryJ2-LAMP DNA vaccine was safe as all safety endpoints were met at the end of the 2 trials. The 4 dose vaccination regimen was well tolerated by all subjects and no serious safety issue was identified. Meanwhile, the investigational product showed immunological effects in these JRC/MC sensitive subjects, suggesting its potential as a therapeutic for JRC/MC allergic patients. However, these were open trials with a small number of subjects without placebo control group. A double-blind placebo-controlled study with more subjects, more control groups, and more biomarker measurement is needed to confirm the true efficacy. Disclosure of conflicts of interest This trial was supported by Immunomic Therapeutics, Inc. WH and TH are shareholders of Immunomic Therapeutics, Inc. YS, ER, and AA are employees of Immunomic Therapeutics, Inc. DF. has declared that he has no relevant conflicts of interest.
Importance of tree diameter and species for explaining the temporal and spatial variations of xylem water δ18O and δ2H in a multi‐species forest Identifying the vegetation and topographic variables influencing the isotopic variability of xylem water of forest vegetation remains crucial to interpret and predict ecohydrological processes in landscapes. In this study, we used temporally and spatially distributed xylem stable water isotopes measurements from two growing seasons to examine the temporal and spatial variations of xylem stable water isotopes and their relationships with vegetation and topographic variables in a Luxembourgish temperate mixed forest. Species‐specific temporal variations of xylem stable water isotopes were observed during both growing seasons with a higher variability for beeches than oaks. Principal component regressions revealed that tree diameter at breast height explains up to 55% of the spatial variability of xylem stable water isotopes, while tree species explains up to 24% of the variability. Topographic variables had a marginal role in explaining the spatial variability of xylem stable water isotopes (up to 6% for elevation). During the drier growing season (2020), we detected a higher influence of vegetation variables on xylem stable water isotopes and a lower temporal variability of the xylem water isotopic signatures than during the wetter growing season (2019). Our results reveal the dominant influence of vegetation on xylem stable water isotopes across a forested area and suggest that their spatial patterns arise mainly from size‐ and species‐specific as well as water availability‐dependent water use strategies rather than from topographic heterogeneity. The identification of the key role of vegetation on xylem stable water isotopes has critical implications for the representativity of isotopes‐based ecohydrological and catchments studies. between patterns of tree water uptake (informed by the isotopic composition of tree stems) and variations of vegetation and topography in the landscape. Therefore, we sampled tree trunk water of around 350 trees scattered across a forested area in Luxembourg to study how water uptake differed between trees and how it was related to the vegetation (tree species and diameter) and the topography (e.g. elevation and slope). Global change poses a considerable challenge for forest and water resource management due to shifts in precipitation regimes and temperatures that may affect tree water uptake and plant water availability (e.g. Capell et al., 2013), species composition and distribution and forest productivity (Boisvenue & Running, 2006). Therefore, an improved understanding of the influence of biotic and abiotic factors on tree water uptake is needed to better evaluate the variations of tree water use (Frank et al., 2015) and manage forest ecosystems to enhance their adaptive response to environmental stressors. Stable isotopes of oxygen (ratio of 18 O to 16 O) and hydrogen (ratio of 2 H to 1 H) of water have been widely used to understand hydrological and ecohydrological processes in catchments (e.g. Bögelein et al., 2017;Brinkmann et al., 2018;Fabiani et al., 2021;Goldsmith et al., 2012Goldsmith et al., , 2018Penna et al., 2018;Sprenger et al., 2016Sprenger et al., , 2018Tetzlaff et al., 2020). They are an important tool for describing the movement of water through catchments and ecosystems (Kendall & McDonnell, 1998;. 
Stable water isotopes (SWI) have proved to be essential for investigating water fluxes in the soil-plant-atmosphere continuum (e.g. Fabiani et al., 2021;Goldsmith et al., 2012), flow paths and associated transit times of water in the subsurface (Asadollahi et al., 2022;Knighton et al., 2019;Sprenger et al., 2016), along hillslopes (Asano & Uchida, 2012) and in catchments (Kuppel et al., 2018;Rodriguez & Klaus, 2019;Sprenger et al., 2022;Wang et al., 2023) showing their potential to decipher water movements in the critical zone. Analysing SWI in plant stem water (usually assumed to be xylem water) is important for quantifying water sources and water use by plants (Penna et al., 2018). Xylem water is a mixture of the different water sources used by trees (i.e. soil water from different depths and groundwater) (Penna et al., 2018). The variations of xylem SWI are therefore related to variations in these sources and how they are taken up by trees. The temporal variability of xylem SWI is related to soil SWI that are themselves modified via mixing with infiltrating rainwater with its own interstorm and intrastorm variation in isotopic composition (Bertrand et al., 2014;Sprenger et al., 2018). Additional temporal drivers of xylem SWI are meteorological conditions (e.g. air temperature, net radiation and humidity) that influence evaporation-fractionation of soil SWI (e.g. Bertrand et al., 2014). Vegetation characteristics such as species, forest type, tree height, diameter and above ground biomass (e.g. Fabiani et al., 2021;Goldsmith et al., 2012Goldsmith et al., , 2018Snelgrove et al., 2021) also influence xylem SWI. The effect of vegetation variables on xylem SWI is attributed to possible species-specific differences in the timing and intensity of water use (Snelgrove et al., 2021) or depth of water uptake (e.g. Brinkmann et al., 2019;Fabiani et al., 2021;Goldsmith et al., 2022;Kahmen et al., 2021). Some studies addressed the spatial variability of xylem SWI at plot (Goldsmith et al., 2018), hillslope Goldsmith et al., 2012) and catchment (Gaines et al., 2016) scales relying on 12 to 60 trees sampled on the same day. Those studies showed that xylem SWI were influenced by soil depth at the tree location, vegetation variables (e.g. Gaines et al., 2016) and the depth and lateral distributions of soil SWI (Gaines et al., 2016;Goldsmith et al., 2018). Spatial variations of soil SWI can themselves be related to topography (e.g. aspect, slope and elevation) that affect water movement in soil, energy inputs for evaporation and in consequence soil SWI. Topography (e.g. elevation) can also influence patterns in vegetation, the depth and accessibility of tree water sources and, in turn, tree water uptake depth and xylem SWI. Beyer and Penna (2021) emphasised that spatial data of xylem and soil SWI are scarce. For this reason, we are lacking an understanding of the relationship between xylem SWI and topographic and vegetation variables. Particularly, it remains unclear what is the relative importance of topographic and vegetation variables in explaining the spatial variability of xylem SWI. In this study, we address this need by exploring the temporal and spatial variations of xylem SWI and their relationship with vegetation and topographic variables in a mixed beech-oak forest. We measured δ 18 O and δ 2 H of xylem water over two growing seasons (17 sampling campaigns) in $350 trees in the Weierbach catchment, Luxembourg. We went beyond the sampling strategy generally used in ecohydrological studies (i.e. 
four tree individuals per species on average on each sampling campaign; Goldsmith et al., 2018) and sampled on average, for each sampling campaign, 10 tree individuals per species. To the best of our knowledge, this was the first analysis with more than 300 xylem samples determining the relative importance of vegetation and topographic variables in explaining the spatial variability of xylem SWI and highlighting its interannual change. The high total number of xylem samples taken within the 42-ha forested area provided a high sampling density (>3.3 trees/ha per growing season) and variations in vegetation and topography on which we based the spatial analysis. Our research questions are guided by a perceptual model of the influence of vegetation and topography on xylem SWI. We know that species influence xylem SWI through its effect on tree water uptake depth (e.g. Fabiani et al., 2021). We also know that tree diameter at breast height (DBH) is associated with depth of tree water uptake (e.g. Schoppach et al., 2021), and we thus expect DBH to influence xylem SWI. Specifically, in the Weierbach catchment, trees rely on soil water with no significant uptake of groundwater . Soil water availability is linked to vertical and lateral redistribution mechanisms of infiltrating water along the hillslope Rodriguez & Klaus, 2019), and we therefore believe that elevation, TPI and slope influence tree water uptake depth and, in turn, xylem SWI. We also expect these variables, along with aspect, to influence xylem SWI by affecting evaporation-fractionation of soil SWI. Multiple landscape variables can therefore affect xylem SWI, and their possible interactions are challenging to untangle their respective influence on xylem SWI. We hypothesize that xylem SWI vary systematically following the perceptual model; we address this hypothesis by investigating the following research questions: 2 | MATERIALS AND METHODS | Study area The Weierbach is a 42-ha forested headwater catchment located in the northwest of Luxembourg (Figure 1). The region is characterised by gently sloping plateaus cut by deep V-shaped valleys. Two landscape units are distinguished depending on their subsolum type and their slope: plateaus (about 30 ha, slopes between 0 and 5 ) and hillslopes (about 12 ha, slopes between 5 and 44 ) (Martínez-Carreras et al., 2016). Furthermore, there is a small riparian zone of up to 3-m wide surrounding most of the stream network and representing about 0.4 ha (Glaser et al., 2020). Detailed topographic variables were calculated from a highresolution (1 m) digital elevation model (DEM) (Luxembourgish air navigation administration, 2017) and included aspect ( ), slope ( ), curvature (À) and drainage area (m 2 ) for each 1  1 m DEM pixel. The aspect represents the direction the downhill slope faces (measured clockwise from 0 (north) to 360 (north)), the slope represents the steepness, the curvature indicates if the surface is upwardly convex (positive value), concave (negative value) or flat (value of 0) and the drainage area indicates the area from where water flows downslope (Table 1). Based on these variables, the topographic position index (TPI; À) and topographic wetness index (TWI; ln(m)) were calculated (Wilson & Gallant, 2000) as follows: where E is the elevation at a specific location (m) and E avg50 is the average elevation in a 50 m radius circle around this location (m). The TPI value decreases from the catchment ridges to valley. 
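As a concrete illustration of how these terrain variables can be derived from a gridded DEM, the sketch below computes the slope and the TPI for each pixel, with TPI taken as the elevation minus the mean elevation within a 50 m radius circle, following the definition above. It is a minimal sketch assuming a NumPy elevation array at 1 m resolution; the function and variable names are illustrative and not from the original study.

```python
import numpy as np
from scipy import ndimage

def slope_and_tpi(dem, cell_size=1.0, radius=50.0):
    """Slope (degrees) and topographic position index from a gridded DEM (illustrative sketch)."""
    # Slope from the local elevation gradient
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

    # TPI = E - E_avg50: elevation minus the mean elevation within a 50 m radius circle
    # (generic_filter is straightforward but not optimised for large DEMs)
    r = int(radius / cell_size)
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    footprint = (x**2 + y**2) <= r**2
    e_avg50 = ndimage.generic_filter(dem, np.mean, footprint=footprint, mode="nearest")
    tpi = dem - e_avg50
    return slope_deg, tpi
```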
The TWI characterises terrain-driven propensity for saturation and is calculated as follows:

TWI = ln(A_s / tan β)

where A_s is the specific area (i.e. drainage area per unit contour length) in m² m⁻¹ and β is the slope in degrees. The bedrock in the Weierbach catchment consists mostly of Devonian slate containing schist, phyllite, and quartzite (Juilleret et al., 2011). Pleistocene periglacial slope deposits cover the bedrock, and the soil developed from these deposits is a Leptic Cambisol (Juilleret et al., 2011) according to the World Reference Base classification. The weathered and fractured bedrock starts on average at about 140-cm depth, with fractures closing at approximately 5-m depth. The average annual stream discharge is 478 mm (Pfister et al., 2017), with lower base flow occurring from July to September due to higher losses through ET (potential ET annual average of 593 mm for the period 2006-2014; Pfister et al., 2017). Snow can accumulate in winter, but it generally melts within a few days. The vegetation in the Weierbach catchment is dominated by uneven-aged deciduous hardwood trees (70% of the catchment area; European Beech Fagus sylvatica and Oak Quercus petraea x robur) and pure plantations of conifers (30% of the catchment area; European Spruce Picea abies and Douglas fir Pseudotsuga menziesii) (Hissler et al., 2021) located in some areas of the catchment (Figure 1). The deciduous hardwood trees rely on soil water with no significant groundwater uptake. Tree DBH of selected trees was measured within a 360 m × 20 m inventory plot located in the beech-oak stand.

| Hydrometeorological monitoring and isotopic measurements

Precipitation volumes (P) and air temperature (T) over the study period were measured every 15 min at the Roodt station (Figure 1) following the World Meteorological Organization standards (Sevruk et al., 2009); we computed the daily total P and daily average T. Gaps in the daily P time series (6% over 2019-2020) were filled using a linear regression between daily P at the Roodt and Holtz stations (the latter located about 2.6 km from the Roodt station and operated by the Water Agency of Luxembourg).

| Xylem sampling

We focused our analysis on hardwood trees that dominated the catchment. We sampled sapwood xylem from beech and oak tree trunks with a Pressler corer across the catchment during the growing seasons 2019 and 2020 (Figure 1). We transferred the sapwood xylem samples into 30-mL glass vials sealed with caps and Parafilm® and kept them at −22 °C until xylem water extraction. In each zone, we took the coordinates (X, Y) of randomly selected points. A similar number of uneven-aged beech and oak trees were then randomly selected and sampled within a 15-m radius circle around the points (X, Y). We sampled distinct trees during each campaign. For the spatial analysis, we later generated a unique partial-random set of coordinates (Xr, Yr) for each sampled tree based on a measured angle and horizontal distance from the point (X, Y). For each campaign, we sampled on average 10 trees for each species (the number ranged between 2 and 18 individuals) (Table 2). In total, 102 and 101 samples were taken from beech and oak, respectively, in 2019, while 69 samples were taken from both tree species in 2020 (Table 2). We measured the DBH of each tree sampled. The topographic variables at each of the sampled locations and the DBH of each of the sampled trees spanned the distributions observed in the Weierbach catchment (Table 1 and supporting information Figure S1).
| Xylem water extraction and isotopes analyses We extracted water from xylem samples using the cryogenic vacuum distillation leak-tight line protocol Orlowski et al., 2016). We submerged the vials containing the xylem samples in a 100 C oil bath and collected evaporated water in U-shaped tubes submerged in liquid nitrogen (À197 C) for approximately 3 h. The lines were connected to a pump that applied a vacuum to reach the suction of 0.03 hPa below which there was no water left to extract. Extraction was stopped 1 h after the suction reached the constant value of 0.03 hPa. Water was then collected using a Paster pipette, stored in 2-mL threaded vials with fixed 300-μL glass inserts and kept at 4 C before laser spectrometry analysis. lab standards to avoid drift over the course of the analysis. The quality control lab standard water was 0.02‰ for δ 18 O and 0.3‰ for δ 2 H . The isotopic composition is given as the relative difference in the ratio of heavy to light isotopes of water samples (delta notation, ‰) to the Vienna Standard Mean Ocean Water (VSMOW). | Data processing and statistical analyses Data and statistical analyses were performed using R Studio Version Analysis of landscape drivers We carried out principal component regressions (PCR) (Liu et al., 2003) to reveal the landscape variables influencing xylem SWI using a combination of vegetation and topographic variables. We used species, DBH, elevation, aspect, slope, flow accumulation, curvature, TPI and TWI as independent variables ( p predictors) and the detrended xylem SWI as dependent variable (outcome). First, we recoded the categorical variable species using dummy coding and tested on a test set (30% of the original set) to assess model prediction error. Using the selected model, the raw data matrix X with p predictors columns was replaced by a smaller matrix T with k PCs columns: T ¼ X à P Finally, we fitted a multiple linear regression model using the noncorrelated k PCs of T as predictors and the detrended xylem water isotopic composition ŷ as the outcome. For each k PC, we calculated the percentage of variance in the outcome explained by each predictor as the product of the variance in the outcome explained by the PC and the loading of each predictor in the same PC. The predictors with a loading >j0.45j (Hair et al., 1998) were deemed to contribute largely to a PC. To determine the percentage of variance in the model outcome explained by each predictor, we summed the respective results of each k PCs of the model. We calculated the error in prediction for each sampled tree in space as the difference between the predicted and the measured value. We then tested for normally distributed errors in prediction (Shapiro & Wilk, 1965) and calculated the mean absolute error (MAE) of the model for further evaluation. | Precipitation and xylem SWI Over the 2019 sampling period, precipitation median δ 2 H and δ 18 O values were À37.6‰ and À5.9‰, respectively, while median values were À32.7‰ and À4.6‰ over the 2020 sampling period ( Figure 4a,b,e,f). The variability of precipitation SWI was higher over the 2019 than 2020 sampling period due to the 2019 sequential rainfall sampling that captured an isotopically depleted event in October. | Spatial autocorrelation In 2019, beech and oak xylem water δ 18 O and δ 2 H showed significant positive spatial autocorrelation, while only δ 18 O of beech xylem water was significantly and positively spatially autocorrelated in 2020 (Table 3). 
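For reference, the global Moran's I statistic commonly used for such spatial autocorrelation tests can be computed from the sampled tree coordinates and their isotopic values as in the minimal sketch below. The inverse-distance weighting scheme and the placeholder data are assumptions for illustration and may differ from those used in the study.

```python
import numpy as np

def morans_i(coords, values, eps=1e-9):
    """Global Moran's I with inverse-distance weights (illustrative sketch).

    coords : (n, 2) array of tree coordinates (m)
    values : (n,) array of xylem water delta values (per mil)
    """
    z = values - values.mean()
    # Pairwise distances and inverse-distance weights (zero on the diagonal)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    w = 1.0 / (d + eps)
    np.fill_diagonal(w, 0.0)

    n = len(values)
    s0 = w.sum()
    return (n / s0) * (z @ w @ z) / (z @ z)

# Usage with placeholder data: values near +1 indicate clustering, near 0 randomness, near -1 dispersion
rng = np.random.default_rng(0)
xy = rng.uniform(0, 600, size=(30, 2))     # hypothetical tree locations (m)
d18o = rng.normal(-8.0, 1.0, size=30)      # hypothetical xylem water d18O values
print(f"Moran's I = {morans_i(xy, d18o):.3f}")
```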
The low Moran's I indicated a weak spatial autocorrelation of these xylem SWI data. The empirical variograms showed a high variance in the data that was as much as the nugget size ( Figure S4); this prevented the fit of any function to the variograms and the estimation of the ranges. | Landscape drivers The | Species-specific temporal variations of xylem SWI The different temporal variations of xylem SWI (Figure 4 and supporting information Tables S1 and S2) observed between beech and oak (both having similar DBH ranges, Table 1) during both growing seasons suggest different water use strategies. The results suggest that beech exploit shallower and seasonally less stable (due to more exposition to the evaporation-fractionation process) water sources than oak, as observed by Fabiani et al. (2021) The results further suggest that beech may use more various water sources than oak in response to the varying hydrometeorological conditions observed throughout the growing seasons ( Figure 2). This is in line with the recent observation that beech can change its rooting patterns and water use strategy more easily than oak (Goldsmith et al., 2022). This is also consistent with the finding that beech has the same probability to use deep and shallow soil water, while oak has a higher probability to use deep than shallow soil water (Kahmen et al., 2021). These species-specific water use strategies support the existence of different water uptake niches between the two co-occurring tree species, as recently suggested by Fabiani et al. (2021) in the study area. These water use strategies also likely demonstrate the higher niche plasticity of beech, as shown in previous research (Goldsmith et al., 2022;Kahmen et al., 2021). | Dominance of DBH and species as landscape drivers of xylem SWI The overall higher importance of DBH, and to a lower extent species, in explaining the spatial variability of xylem SWI compared with topographic variables ( Figure 5) further supports that trees use a species-F I G U R E 5 Percentage of variance (%) in xylem water δ 18 O and δ 2 H explained by each measured variable of the optimal model for 2019 (a, b) and 2020 (c, d) ("Other" include the nonmeasured variables). The sign of the explained variance indicates if the variable is positively or negatively correlated with the isotopic value. specific mixture of water sources from different depths. The notable influence of DBH on xylem SWI is consistent with previous studies that showed that tree diameter was associated with the depth of water uptake, with larger trees using deeper water (Dawson, 1996;Goldsmith et al., 2012;Phillips & Ehleringer, 1995). Different depths of water uptake between larger and smaller trees can lead to differences in xylem SWI values due to vertical variations in soil SWI. These variations can result from shallow soil water mixing with recent precipitation (Bertrand et al., 2014;Sprenger et al., 2018) and evaporation-fractionation of soil SWI (Bertrand et al., 2014). The smaller influence of species in explaining the spatial variability of xylem SWI is nevertheless in line with earlier research. As discussed above, previous studies reported species-specific vertical root access (e.g. Kahmen et al., 2021) and lateral root elongation and proliferation that could lead to a greater access to soil water pools (Poot & Lambers, 2003) with depth-specific isotopic compositions (Goldsmith et al., 2012). 
Similar to our finding based on xylem from tree trunks, a species-specific spatial variability of xylem SWI in tree branches has previously been observed (Goldsmith et al., 2018). The remarkable lower influence of topographic variables on xylem SWI, compared with vegetation variables, is consistent with previous findings of Gaines et al. (2016) who found a relationship between xylem SWI and tree DBH and height but no effect of the slope position on xylem SWI. Similarly, in the same study area, Fabiani et al. (2021) did not observe significant differences in xylem SWI between hillslope positions. However, these studies were carried out in areas with a small elevation range (about 50 m), and we may expect a higher contribution of topographic variables in explaining the spatial variability of xylem SWI in areas with higher topographic variations. Indeed, topography influences plant water status (Looker et al., 2018) and may, in turn, affect tree water source partitioning and associated xylem SWI in steeper areas, in addition to elevation effects of precipitation SWI. The variance in δ 2 H explained with the PCR models was much higher than the explained variance with models of δ 18 O ( Figure 5). The low spatial variability of xylem water demonstrates the ability of the PCR models to reproduce the overall spatial patterns of xylem SWI across the Weierbach catchment well. Despite our sampling density that was not high enough to reveal the extent to which xylem SWI were spatially autocorrelated, the low Moran's I suggest that there is little spatial structuration of xylem SWI. This appears to be independent of the water availability for trees as we observed low Moran's I for the wetter and the drier growing seasons. Future investigations of the spatial patterns of xylem SWI should however preferably follow a spatially nested sampling design. | Influence of water availability on the temporal and spatial variations of xylem SWI The clear lower temporal variability of xylem SWI (Figure 4 and supporting information Tables S1 and S2) observed during the drier (2020) than the wetter (2019) growing season suggests that trees adapted their water uptake depths in response to drier hydrological conditions. This change is supported by the respective presence, although limited, and quasi absence of spatial autocorrelation of xylem SWI in the wetter and drier growing season (Table 3). This adaptation is also in line with the respective increase and decrease of the influence of vegetation and topographic variables on xylem SWI across space observed between the wetter and the drier growing season ( Figure 5). During the wetter growing season (2019), precipitation was approximately 100 mm higher than during the drier growing season (2020) and the water volume available for tree uptake was higher (average SWC was equal to 0.129 m 3 m À3 in 2019 and 0.114 m 3 m À3 in 2020) and more evenly distributed across the catchment. With these conditions, trees had more easily access to shallow soil waterthat is seasonally more variable- (Goldsmith et al., 2012) and were less water-limited, leading to a higher temporal variability of xylem SWI compared with drier conditions. The higher use of shallow soil water by trees during wetter than drier conditions also led to a higher influence of topography on xylem SWI and to the spatial autocorrelation of these isotopes, although limited. 
Topographic variables such as slope influence the amount of rainwater infiltrating in soils and, in turn, soil water mixing with rainwater and in consequence soil SWI (Bertrand et al., 2014;Sprenger et al., 2018). Elevation, slope and aspect can influence air temperature and humidity that affect shallow soil SWI evaporation-fractionation (Bertrand et al., 2014). It has also been observed that the topographic index was spatially autocorrelated over a wide range of spatial extents (Cai & Wang, 2006); this may explain the spatial autocorrelation of xylem SWI observed during wetter conditions. On the opposite, with drier conditions, trees had to adapt their water uptake strategies (e.g. depth of water uptake, related to DBH; Dawson, 1996;Goldsmith et al., 2012;Phillips & Ehleringer, 1995) and use a higher fraction of deeper and seasonally more stable water sources (Goldsmith et al., 2012). This shift in tree water source led to a lower temporal variability of xylem SWI and a higher influence of vegetation variables on xylem SWI in drier than wetter conditions. The higher use of deeper soil water by trees during the drier growing season is also in line with the quasi absence of spatial autocorrelation of xylem SWI. These observations are consistent with previous studies demonstrating that trees could shift their water sources from shallow to deep soil water (Brinkmann et al., 2019;Lanning et al., 2020) depending on the water availability to meet water requirements and regulate water status. Brinkmann et al. (2019) showed that beech was particularly able to adapt its water uptake depth depending on the SWC. | Isotopes-based ecohydrological studies Our results suggest that, in the Weierbach catchment, tree species and size (DBH) explained more variations in xylem SWI than the topographic variables we evaluated. Similarly, a recent study in the same area showed that species and DBH were the main drivers of the spatial variability of sap velocity . Robust descriptions of the spatial variability in xylem SWI in ecohydrological studies are critical to provide reliable isotope-based estimates of water sources for vegetation root water uptake (Beyer & Penna, 2021 | Isotopes-based catchment transit times studies Currently, the isotopic signals of evaporation and transpiration as well as the age composition (i.e. transit time distributions) from state-ofthe-art model applications (e.g. Hrachowitz et al., 2015;Rinaldo et al., 2015;Rodriguez et al., 2018Rodriguez et al., , 2020Soulsby et al., 2016;van der Velde et al., 2015;Wang et al., 2023) are more often only indirectly constrained by calibrating models to observed isotopic signals of stream flow. This implies that the relationship between the isotopic composition of the water in storage and the water that is evaporated and/or transpired from this storage remains unclear . Recent development in the calibration of such models involved the use of soil and/or xylem SWI from a limited number of tree individuals, species or locations (e.g. Asadollahi et al., 2022;Knighton et al., 2019;Kuppel et al., 2018;Sprenger et al., 2022). Our results suggest that, to better reflect the vegetation behaviour compared with these studies and to account for the importance of species in influencing xylem SWI across space, the spatially distributed xylem SWI data from our study can be used to determine species-specific isotopic signals associated with transpiration. These signals can be exploited in lumped model approaches (e.g. 
using a weighted mean isotopic signal of the species-specific signals) to further constrain lumped hydrological models developed for the Weierbach catchment and calibrated so far exclusively using discharge and stream SWI data (Rodriguez & Klaus, 2019). Species-specific isotopic signals associated with transpiration can also be exploited in semidistributed models (e.g. separating beech/oak, douglas and spruce zones) (e.g. Kuppel et al., 2018). For these purposes, a good understanding of the forest composition and structure (species, DBH) in the Weierbach catchment is required to design an optimal sampling strategy that provide a representative isotopic signature. Further work is needed in other catchments to improve the quality of the xylem SWI data used in hydrological models. | CONCLUSION In this study, the measurement of beech and oak xylem SWI during several campaigns in the Weierbach catchment revealed a higher variability over the growing season of the beech xylem water isotopic signature compared with oak. Using spatially distributed xylem SWI measurements and PCR, we identified DBH and species as the dominant variables influencing the spatial variations of xylem SWI; topographic variables had a minor role in explaining these variations. We also noted a minor presence of spatial autocorrelation between xylem samples, but our sampling density was not high enough to reveal its extent. By way of sampling xylem over two growing seasons, we observed a respective increase and decrease of the influence of vegetation and topographic variables in explaining the spatial variations of xylem SWI between the wetter and the drier seasons. Our results suggest that, in the study area, the spatial variations of xylem SWI arise mainly from size-and species-specific as well as water availability-dependent water use strategies rather than from topographic heterogeneity. Trees can also adapt their water use strategies in response to lower water availability. Overall, our findings highlight the importance of vegetation variables in influencing xylem SWI. We demonstrate the importance of accounting for different tree diameters and species in field sampling protocols to accurately capture the isotopic variability existing within a study area. However, there is still a need for evaluating the role of additional vegetation and forest structure variables to refine our understanding of vegetation influences on xylem SWI. This information will improve the accuracy of the lumped isotopic signal associated with transpiration used in hydrological models and will help to better predict how catchments will respond to future changes in land cover, vegetation or stand properties associated with global change. for their input that helped to improve this work and manuscript. We thank the Editor and two anonymous reviewers for their comments that helped to improve this manuscript. DATA AVAILABILITY STATEMENT The data used in this study are the property of the Luxembourg Institute of Science and Technology (LIST) and can be obtained upon request to the corresponding author, after approval by LIST.
Thermal conductivity of coconut shell particle epoxy resin composite Coconut shell is one of the agricultural wastes that are widely available in Indonesia and other tropical countries. Unfortunately, the coconut shell waste has not been used optimally for new materials, especially for thermal insulation material. In this work, the composite from coconut shell particles using epoxy resin adhesives has been prepared. The particle sizes of coconut shells for composite samples were 60, 80, 100, and 120 mesh. The compositions of samples for each particle size (ratio between coconut shell and epoxy resin) were 70/30, 75/25, 80/20, and 85/15 (vol %). The thermal conductivity of each composite sample has been examined by using a single-plate method. The density of each sample has also been measured. The results showed that the thermal conductivity of composite sample for 60 mesh of particle size with 70/30 vol.% composition was 0.071 W/m K. As the composition of coconut shell was increased to 85 vol.%, the value of thermal conductivity decreased to 0.062 W/m K. It was found that the thermal conductivity of composite sample decreased as the composition of coconut shell was increased. For the composition of 70/30 vol.%, the thermal conductivity increased to 0.078 W/m K for 120 mesh of particle size. This behavior was also the same for other compositions, where the value of thermal conductivity of composite increased as the coconut shell particle size decreased. The density of the composite was found in the range of 0.799 – 0.938 g/cm3. As the particle size of the coconut shell was reduced, the density of the composite increased. This study revealed that both thermal conductivity and density of coconut shell epoxy composite are dependent on the particle size of the coconut shell. The thermal conductivity of all samples was less than 0.1 W/m K indicating that the coconut shell epoxy composite is quite potential for thermal insulation. Introduction Indonesia is the world leader in coconut production. In 2018, the production of coconut in Indonesia was 19 million tons [1]. Coconut shells are agricultural waste from coconut production, which is quite a lot in Indonesia. Until now, coconut shells have not been used optimally. On the other hand, several studies have shown that coconut shells have the potential to be used for composites [2,3,4,5,6,7,8]. Bledzki et al. found that the barley husk and coconut shell could be used as alternative fillers for composites material [2]. Leman et al. argued that the coconut shell powder could be used as filler in concrete [3]. A very recent study found that nanoparticles of coconut shell significantly improved the [8]. So, it is very interesting to study composites from coconut shells. Heat insulation materials are very important in automotive, aircraft, electronic equipment, industry, and daily life. Researchers are continuously looking for heat insulation materials, which are cheap, strong, and safe for the environment. Those materials are from renewable materials such as composites made of agricultural wastes. Several studies on the thermal conductivity of composites have been conducted [9,10,11,12,13,14]. Jolanta et al. found that the thermal conductivities of Bentgrass, Reed, and Cattail were 0.060 W/m K, 0.080 W/m K, and 0.055 W/m K, respectively [9]. The thermal conductivity of bamboo composite was found to be 0.185 -0.196 W/m K [10]. Papyrus fiberboard had a thermal conductivity of 0.030 W/m K [11]. 
The lowest thermal conductivity of bamboo fiber composite using epoxy resin binder was 0.111 W/m K [12]. Kimura et al. showed that the coconut fiber composite using tapioca starch and polyvinyl alcohol as binders had a thermal conductivity of 0.104 W/m K [13]. Thermal conductivity studies of coconut shell composites are rarely reported in the literature. Din et al. found that the thermal conductivity of coconut shell composite using epoxy as the binder was 0.143 W/m K; however, that study did not report the effect of composition and coconut shell particle size on thermal conductivity [14]. Meanwhile, previous studies found that particle size and composition significantly influenced the properties of the composite [8,15]. Burger et al. argued that the thermal conductivity of a composite is affected by parameters such as particle size and the ratio of the filler [16]. In this work, a composite made of coconut shell and epoxy resin has been prepared. Epoxy resin has been used as a binder since it provides a good bond between the fillers [8]. The composition and particle size of the coconut shell particles were varied, and the thermal conductivity and density of the composite have been measured.

Method

The coconut shell waste was obtained from the market in Banda Aceh. The coconut shell was cleaned and dried under the sun for several days. After that, the dried coconut shell was chopped into small pieces. Then, the chopped coconut shell was milled to obtain coconut shell particles. Furthermore, the coconut shell particles were sieved using sieves to obtain particles with sizes of 0.250 mm (60 mesh), 0.177 mm (80 mesh), 0.149 mm (100 mesh), and 0.125 mm (120 mesh). The epoxy resin used as the matrix was purchased from the Avian Company, Indonesia. The coconut shell particles were mixed homogeneously with epoxy resin using a mixer at a speed of 300 rpm at room temperature for 30 minutes. The mixture of coconut shell particles and epoxy resin was poured into a sample mold with a size of 15 cm x 15 cm x 1 cm and then pressed at room temperature with a 9-ton load for 60 minutes to produce a composite sample. The fabrication process of the composite is shown in figure 1. The compositions of the samples were varied for each particle size. The ratio of coconut shell particle to epoxy resin was 70/30, 75/25, 80/20, and 85/15 vol.%. The sample size for thermal conductivity measurement was 10 cm x 10 cm x 1 cm, and the sample size for density measurement was 5 cm x 5 cm x 1 cm. A single-plate method was used to measure the thermal conductivity of the samples; the instrument was manufactured by Leybold. The thermal conductivity of each sample was calculated using equation (1) [17]:

λ = Q′ d / (t′ A ΔT′)  (1)

where Q′ is the heat flow through the sample during the time t′, A is the area of the sample, d is the thickness of the sample, and ΔT′ is the temperature difference between the two sides of the sample. The density of the samples was obtained using equation (2):

ρ = m / V  (2)

where m is the sample mass and V is the sample volume. The error bars of the thermal conductivity and density were determined using the standard deviation equation.

Figure 1. The fabrication process of coconut shell composite

Results and Discussion

The thermal conductivity of coconut shell particle (CSP) epoxy resin composite for various compositions of coconut shells and various coconut shell particle sizes is shown in figure 2.
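As an illustration of equations (1) and (2), the short sketch below computes the thermal conductivity and density for a set of repeated measurements and reports the standard deviation as the error bar. It is a minimal sketch; the numerical values are placeholders of a plausible magnitude, not measurements from this study.

```python
import numpy as np

def thermal_conductivity(Q, t, d, A, dT):
    """Equation (1): lambda = Q * d / (t * A * dT), single-plate method."""
    return (Q * d) / (t * A * dT)

def density(m, V):
    """Equation (2): rho = m / V."""
    return m / V

# Placeholder repeated measurements for one composite sample (illustrative only)
Q  = np.array([505.0, 498.0, 510.0])   # heat transferred through the sample (J)
t  = 600.0                             # measurement time (s)
d  = 0.01                              # sample thickness (m)
A  = 0.10 * 0.10                       # sample area (m^2), 10 cm x 10 cm
dT = np.array([11.8, 11.5, 11.9])      # temperature difference across the sample (K)

lam = thermal_conductivity(Q, t, d, A, dT)
print(f"lambda = {lam.mean():.3f} +/- {lam.std(ddof=1):.3f} W/m K")

m = np.array([21.5, 21.4, 21.6])       # mass of the 5 cm x 5 cm x 1 cm density sample (g)
V = 5.0 * 5.0 * 1.0                    # volume (cm^3)
rho = density(m, V)
print(f"rho = {rho.mean():.3f} +/- {rho.std(ddof=1):.3f} g/cm^3")
```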
For the coconut shell particle size of 60 mesh, the thermal conductivity of the composite was found to be 0.071 W/m K for the composition of 70 vol.% CSP (30 vol.% epoxy resin). For 75 vol.% CSP (25 vol.% epoxy resin), the thermal conductivity decreased to 0.069 W/m K, and for 85 vol.% CSP it was 0.062 W/m K. Thus, the thermal conductivity of the composite decreased as the CSP composition was increased for the 60 mesh particle size. As the particle size became smaller (80 mesh), the thermal conductivity of the composite increased to 0.072 W/m K for 70 vol.% CSP and decreased to 0.070 W/m K for 75 vol.% CSP. Similar to the 60 mesh particle size, the thermal conductivity of the composite decreased as the CSP composition increased. This behavior is the same for the other particle sizes (100 and 120 mesh), as displayed in Figure 2. The comparison between the thermal conductivity from this study and previous studies is displayed in Table 1. Ramsaroop et al. found that the thermal conductivity of the coconut shell was in the range of 0.030 - 0.125 W/m K [19], and the thermal conductivity from this study lies within this range. The thermal conductivity from this study is slightly lower than the result reported by Kimura et al. for the coconut fiber composite using a tapioca and PVA matrix [13]. The thermal conductivity of coconut shell composite using epoxy resin reported by Din et al. was 0.143 W/m K [14], which is higher than the thermal conductivity from this study. The discrepancy between the result from this study and the result reported by Din et al. could be due to the particle size and composition of the coconut shells; Din et al. did not report the particle size and composition of the filler or matrix in their composite.

Table 1. Comparison of the thermal conductivity from this study and previous studies.
Material | Thermal conductivity (W/m K) | Reference
Coconut fiber | 0.048 - 0.049 | [18]
Coconut fiber & epoxy resin | 0.140 | [14]
Coconut shell & epoxy resin | 0.143 | [14]
Coconut shell & epoxy resin | 0.062 - 0.078 | this study

The density of the coconut shell composites for various compositions and particle sizes has been measured; the result is displayed in figure 3. For the 60 mesh particle size, the density of the composite was found to be 0.861 g/cm³ for 70 vol.% CSP. As the CSP composition was increased to 75 vol.%, the density decreased to 0.820 g/cm³, and for 85 vol.% CSP the density was 0.799 g/cm³. The density of coconut shell nanoparticle composite using an epoxy matrix was found to be 1.031 g/cm³ [8], and the density of coconut shell composite reported by Bhaskar et al. was in the range of 1.17 - 1.29 g/cm³ [20]. The discrepancy between the density from this study and the densities from previous studies was due to different particle sizes and compositions of the filler. The density of the composite decreased as the CSP composition was increased, and this behavior was the same for the other particle sizes (80, 100, and 120 mesh). As the particle size was decreased to 80 mesh, the density was found to be 0.891 g/cm³. For the particle sizes of 100 mesh and 120 mesh, the densities of the composite were 0.893 g/cm³ and 0.938 g/cm³, respectively. The density of the composite increased as the size of the coconut shell particle was reduced. This trend is similar for the other compositions, as shown in figure 3, and this behavior was observed in previous studies [8,15]. Figure 2 shows that the thermal conductivity of the composite decreases as the composition of coconut shell is increased. This trend is the same for the density, as displayed in figure 3.
This behavior could be related to the porosity of the composite sample. The epoxy matrix is dense, while the coconut shell has some pores. As the composition of the coconut shell is increased, the porosity of the composite increases and the density of the composite therefore decreases. Air is trapped in the pores, and the thermal conductivity of air is only 0.024 W/m K [17]; thus, the thermal conductivity decreases because of the porosity. As the particle size is reduced, the thermal conductivity of the composite increases, as shown in figure 2. This behavior could be related to the increased total surface area of the filler (coconut shell particles). The density of the sample also increases as the particle size is reduced, as shown in figure 3. This behavior could be due to the porosity of the composite, where the smaller particle size of the coconut shell could reduce the porosity.

Figure 4 shows the thermal conductivity plotted against the density of the composite for various compositions of coconut shell. Interestingly, there is a correlation between the thermal conductivity and the density of the composite samples; the relation is approximately linear, as shown by the dashed line in figure 4. As the density of the coconut shell composite increases, its thermal conductivity increases. This finding suggests that the thermal conductivity of the coconut shell composite is influenced by its density, a behavior also observed in a previous study of fiber reinforced concrete reported by Nagy et al. [21].

The thermal insulation materials commonly used are glass wool and styrofoam, whose thermal conductivities are 0.038 and 0.033 W/m K, respectively [17]. The thermal conductivity of concrete or a building wall is 0.760 W/m K [17]. The thermal conductivity of the composite from this study is 0.062-0.078 W/m K, which is much smaller than the thermal conductivity of a building wall. Therefore, the composite from this work has the potential to be used as a thermal insulation material for buildings, reducing the heat flow from outside to inside the building during the daytime.

Conclusion

A composite made of coconut shell and epoxy resin as a binder has been fabricated for various compositions and particle sizes of coconut shell. The thermal conductivity and density of the composite were influenced significantly by the composition and the particle size of the coconut shell. The thermal conductivity of the composite decreased as the composition of coconut shell was increased, and the density showed similar behavior, also decreasing as the composition of coconut shell was increased. Conversely, the thermal conductivity of the composite increased as the size of the coconut shell particles was reduced, and the density behaved similarly. This finding suggests that there is a correlation between the thermal conductivity and the density of the composite. The thermal conductivity of the coconut shell composite was found to be in the range of 0.062-0.078 W/m K, which is less than 0.1 W/m K. Thus, the coconut shell composite using an epoxy resin binder is a good thermal insulator.
Reply on RC1

REVIEWER COMMENT: The article is an excellent methodological approach to an increasingly important theme: the study of past flood occurrences and how it serves "to increase public confidence in any proposed solution that ultimately involves a large economic or social expense for hazard mitigation" (lines 708-709). I consider that the main contribution of this article is the systematic use of proxy and instrumental long-term data, crossing the two dimensional of hydraulic modelling (referred in 3.3). I considered this article a pilot project of a methodology application that could be replicate in other cases.

RESPONSE: We appreciate very much the words of Prof. Amorim as a recognised expert on the study of historical flood events and the social perception of flood risks.

REVIEWER COMMENT: A first suggestion to clarify the extent of this article is the inclusion, in the title, the mention to the space: the river and place (River Douro and Zamora, Spain). Indeed, the article is mainly an approach to the analysis of the river Douro floods in the Spanish area, between Zamora and the first dams, that were constructed in 1960s after Zamora town. The article is a case study, a methodological research about a particular section of Douro River, with its own characteristics (as was explain in the contain).

RESPONSE: Thanks for this comment. We initially omitted the river name and place to avoid this work being considered a purely local study. We wanted to highlight the methodological approach and the message of the importance of understanding long-term flood frequency/magnitude changes at decadal and centennial scales for improving flood hazard assessments. However, following the reviewer's suggestion, we agree to include details of the study site in the title as follows: "Enhanced flood hazard assessment beyond decadal climate cycles based on centennial historical data in Zamora (Duero River, Spain)".

COMMENT: A second suggestion is the possibility to add extreme dates in the title, even if is difficult to fixed the scope, because sometimes are mentioned (and effectively was study) the last 500 years, and in other occasions the period between 1250-1871, maybe because they faced «a non-continuous dataset between 1250 and 1545» (I understand the expression «centennial» in the title).

RESPONSE: Thank you for this comment. We think that the expression "centennial" already indicates the use of data records spanning several hundred years. Moreover, the historical temporal framework differs when considering isolated flood data and continuous flood registers. The period between CE 1250 and 1511 has some isolated, non-continuous flood data, particularly on extreme events, and therefore it cannot be included in the flood frequency analysis.
Because the reader will only get an idea of the different temporal frameworks of the continuous and discontinuous datasets by reading the paper, we think it is better not to indicate the time framework in the title.

COMMENT: The authors made a remarkable comparative approach putting its case in a larger frame. I suggest another comparative analysis using an article that tried to estimate the frequency of extraordinary floods of Douro River in the Portuguese territory, till its estuary. (Silva, J.D. da; Oliveira, Manuel de Sousa - As cheias na parte portuguesa da bacia hidrográfica do rio Douro, ps/p/. Available https://grupo.us.es/ciberico/.../porto2diasdasilva.pdf.).

RESPONSE: Sorry, we tried to download this pdf, but it seems that it is not available at this moment and we could not find it elsewhere. Nevertheless, we have used other available publications by Loureiro and consulted some internal reports by the Instituto da Agua (Rui Rodrigues and colleagues) providing historical flood discharges in Regua and Porto. We think that a comparative analysis between the Portuguese Douro and the Spanish Duero sites is outside the framework of this study; it was avoided on purpose because the paper is already quite long.

COMMENT: It could be important to insist in a comparison between rainfall contrast characteristics consequences (line 181 and further), i.e., the flood peaks contrast between Régua and Porto at the lower basin of Douro, with Valladolid and Zamora and its consequences. The period 1250-1871, in which were identified 69 floods (including ordinary ones), is a number very low if compared with the floods of the Douro in Porto just for the period between 1727 and 1799, in which were found 54 floods (see the quoted article by the authors). Perhaps this increasing number of floods was related to the tidal peaks and the siltation of the estuary, but this contrast of occurrences could open an outlook about long Douro River course behavior, before and after dams' construction.

RESPONSE: We agree with Prof. Amorim about the strong contrasts in flood peaks between the Portuguese Douro and the Spanish Duero and their consequences. It is striking that some large peaks in Porto were not recorded as large floods in Zamora, but the data show that the rainfall characteristics are very different. We agree that this topic should be addressed in the future, perhaps as an opportunity to collaborate with Portuguese colleagues. Regarding the difference in the number of floods, it is difficult to compare using relative flood classifications such as the ones applied by Alcoforado et al. (2021) and in our paper. In the case of Alcoforado et al. (2021), the frequency of extraordinary floods is once every 1.33 years, which corresponds approximately to the bankfull discharge of rivers in temperate climates. In the case of Zamora, the conditions are drier, and the historical accounts of high flows without damage are probably fewer than in Porto, where even small floods had an important influence on navigation and port operations. However, the number of large floods is not so different on the two sides: in Porto there were six catastrophic floods between 1727 and 1799 (as referred to in the paper by Alcoforado and colleagues), whereas in Zamora we recorded seven floods within the relative categories of catastrophic and extraordinary floods, in both cases causing moderate to severe damage. As Prof. Amorim knows, the tidal conditions were critical in Porto in terms of the water stage reached in the lower Douro.
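As a back-of-the-envelope check on the recurrence figure quoted above, the average interval implied by a flood count over a fixed observation window is simply the window length divided by the count; the short sketch below reproduces this for the Porto record cited by the reviewer (54 floods between 1727 and 1799). It is only an illustration of that arithmetic, not part of the flood frequency analysis of the paper.

    def recurrence_interval(n_floods, first_year, last_year):
        # Average recurrence interval (years per flood) over the observation window.
        return (last_year - first_year) / n_floods

    # Porto, lower Douro: 54 floods reported between 1727 and 1799.
    print(round(recurrence_interval(54, 1727, 1799), 2))   # -> 1.33 years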
COMMENT: A final remark: line 575, authors wrote "includes the largest flood on record (Dec 4-6 1739) that reached a stage of 12 m at Dom Luiz I Bridge (Loureiro, 1904; Taborda, 2006)". The bridge doesn't exist in 1739 (only constructed between 1881-1886) but Loureiro (and Taborda quoting Loureiro), used the currently existing bridge as a mark (rebuild the sentence will be enough).

RESPONSE: Thank you very much for this comment; indeed, there is a mistake in the way the observation site was indicated. The observation point for stage and flow velocity used by the Porto and Leixões port authority directorate is located on a rocky river bank on the right margin, immediately upstream of the D. Luiz bridge, and, as the reviewer says, the flood marks were later quoted with the Dom Luiz Bridge as a reference. The sentence has been rewritten as follows: "that includes the largest flood on record (Dec 4-6, 1739) that reached a stage of 12 m in a bedrock section at the right margin just upstream of the Dom Luiz I Bridge (Loureiro, 1904; Taborda, 2006)."
Psycholinguistic Aspect of Studying the Text as a Product of Speech Activity

The psycholinguistic aspect of study of the text as a product of speech activity is analyzed in the article. The relevance of the ideas of multidimensional space of life in the linear plane of the undertaken research consists in the consideration of the text as a difficult semantic and syntactical formation possessing a number of psycholinguistic characteristics. The authors of the research claim that in the psycholinguistic aspect of studying the text the concept of perception of the text, treated as the act of knowledge, experience and creativity, is important. In the given work perception of the text is correlated to the projection of the text, which in psycholinguistics is considered as a fundamental property of the text formation. In the article phases of the perception of the text are described in detail; for the definition of the bases of the psycholinguistic approach to the study of the text, a comparative analysis of the treatment of the text in psycholinguistics and linguistics is submitted. Besides, the authors investigate the features of the psycholinguistic analysis of the text on the example of the scientific works of famous linguists. According to the researchers, the leading place in a psycholinguistic projection of the text is taken by the reader (the recipient), whose activity is connected with his spiritual activity: he tries to understand a statement, gives additional clarity to the speech, finds the hidden meaning.

Introduction

In modern linguistics the question of the psycholinguistic aspect of studying the text as a product of speech activity is one of the central questions. Within the text-centric concept the attention of the scientists is concentrated on the characteristic of the text as of a speech work. The creator of the domestic psycholinguistics A. A. Leontiev considered that "a subject of the psycholinguistics is the structure of processes of a speech production and a speech perception in their ratio with the structure of a language (any or certain national)" (Leontiev, 1997). The psychology represents theoretical knowledge of a ratio of language and consciousness, of thinking and speech, of processes of perception and properties of memory, of mental activity of the person and of speech activity as one of the activity kinds of a person. In psycholinguistics an important role is played by a statement which is a unit of speech communication and is focused on the participants of speech communication. The expanded statements act as the main unit of speech communication; the main form of their expression is the text (Harley, 2005). The text, according to N.S. Valgina's definition, is "a dynamic unit, organized in the conditions of a real communication, having extra- and intra-linguistic parameters" (Valgina, 2003). The text is created to transfer the thought of the author, to embody his creative plan; that is, the text represents a product of the speech and mental activity of the subject. Considering the text from a pragmatic position, it is a material for perception and interpretation (Nadezda et al., 2016; Nurgaliyeva et al., 2018; Sadanyan et al., 2017; Yusupova, 2017; Zulfiya and Muslima, 2017).
Such a multidimensional consideration of the text allows to maintain the text (speech and mental) activity representing a structure which includes the following components: the author (the addresser of the text), the reader (the addressee), the displayed reality itself, knowledge of which is imparted in the text, and the language system from which the author chooses the language means, allowing him to embody adequately the creative plan (Dem'yankov, 1992).

Methods

Creation of a projection of the text is connected with the structure of the text. For the characteristic of the process of production of the text in psycholinguistics, the model of the three-phase structure of activity (orientation, implementation, control) is used. The orientation phase of creation of the text represents intellectual and cognitive activity of understanding of a problem situation of communication and of a subject of the speech. In this phase the author of the text carries out the communicative intention in the form of a purpose and of a general plan of the text. The phase of implementation of the text consists in a language materialization of the idea of the speech message with the attraction of sign means necessary for this purpose (for example, means of inter-phrase communication and signaling devices of its composite integrity, in particular sign-signals of the beginning and of the end of the text). The phase of control assumes semantic working off of a plan of the text and correction of the verbal expression (verbalization) of the main idea of the speech message. An important role is played at that time by the need of ensuring the thematic and semantic integrity of the text.

Results

In the psycholinguistic aspect the text possesses a big degree of "interpretativeness" (options of the interpretation of the semantic contents by a listener or a reader). V.Z. Demiyankov considers that the concept of interpretation can be revealed through a concept of cognition which "includes not only exquisite occupations of a human spirit (such as knowledge, consciousness, reason, thinking, representation, creativity, plans and strategies development, reflection, symbolization, logical conclusion, solution of problems, making evident, classification, correlation, imagination and dreams) … but also processes more terrestrial, such as organization of motility, perception, mental images, reminiscence, attention, recognition …". In relation to the interpretation of the speech it is a type of cognition the direct object of which is the product of speech activity. When producing the speech, the inner world of a man in the form of a speech is interpreted, while when perceiving, the speech itself is interpreted. The main features distinguishing psycholinguistics from linguistics are the factor of a situation in which speech statements are designed and perceived, and the factor of the person making or perceiving the speech (Hatzidaki, 2007). For the definition of a basis of the psycholinguistic aspect of studying the text it is necessary to mark out the lines distinguishing consideration of the text in psycholinguistics and linguistics (see table 1).

Table 1. Consideration of the text in psycholinguistics and in linguistics.
1. Psycholinguistic study of the text: the text is an object-related form of the act of communication, necessary components of which are the subject of communication, the author and the recipient. Linguistic study of the text: the text is a really stated (written) sentence which serves as a material for the observation of the facts of the given language.
2. Psycholinguistic: creation of the projection of the text on the basis of processes of perception and understanding of a text as a result of speech and mental activity. Linguistic: creation of a new text on the basis of the linguistic analysis of the text.
3. Psycholinguistic levels of hierarchy of understanding of the text: search of the general sense of the message → sensitive level (recognition of sounds) → lexical (perception of separate words) → syntactic (perception of sense of separate sentences). Linguistic levels of hierarchy of understanding of the text: interpretation of meanings of the separate words → understanding of meanings of the whole statements → understanding of the general idea of the text.
4. Psycholinguistic: listener, reader = recipient. Linguistic: listener, reader = researcher.
5. Psycholinguistic: active use of experimental methods (associative, method of subjective scaling, content analysis, intent analysis). Linguistic: answers to such questions as "Did you like the text? Why? What made an impression?"
6. Psycholinguistic: consideration of functioning of language and language units as a special type of psychological reality. Linguistic: understanding of separate words and phrases.

The most attention of the researchers of the psycholinguistic study of the text is given to the category of integrity, referred to the plan of the text contents. The integrity can be treated as the existence of a semantic dominant which is established when perceiving the text by the recipient and sets the direction to the process of generation of sense connected with the text. Leontiev (1979) calls integrity the fundamental property of the text: "… the integrity is the characteristic of the text as of the semantic unity, as of the united structure and is defined on all the text. It isn't correlated directly to linguistic categories and units and has the psycholinguistic nature" (Usmanova and Nurullina, 2018). In psycholinguistics it is generally accepted that the integrity of the text is a certain condition of the text arising in the course of interaction of the recipient and the text. In the psycholinguistic aspect of the study of the text the concept of "perception" of the text, treated as the act of knowledge, experience and creativity, is important. Comprehension of that ideal model of reality which was created by the author of the work becomes the end result of the act of reading (Antúnez, 2016). However, the author's model of reality isn't adequate to the reader's, because the reader in a varying degree expresses the attitude to what is read. Personal interpretation becomes the embodiment of this attitude in different sign systems. Certainly, the interpretation is created not only in the course of reading, but also in the course of the analysis directed to specification, adjustment and deepening of the perception (Granik and Samsonova, 1993). The unity of the sensual and the logical is always shown in the perception. The perception demands the ability to see an image behind a word, to recreate a picture in one's own representation. To form these abilities the scientists Granik and Samsonova recommend using the technique of slowed-down reading, which goes back to the idea of slow reading of Shcherba (Shcherba, 1957). In "The experiences of the linguistic interpretation of poems" the famous linguist warned readers against the danger "of arguing the ideas which they, maybe, subtracted incorrectly from the text" (Dridze and Leontiev, 1976). The technique of slowed-down reading is used to form the psychological mechanisms of understanding and perception of the text (Zaidullina and Demyanova, 2017).
The researchers distinguish three phases of the process of art perception: 1) pre-communicative (formation of an artistic and mental set, both a general one, the expectation of the joy of communication with art, and a private one, the preparation for the forthcoming meeting with the concrete work); 2) communicative (direct contact with the work of art; the beginning of a dialogue: the author creates an image, the reader recreates it); 3) post-communicative (assignment of the work of art as a personally significant value). The perception of the text is closely connected with a text projection, which in psycholinguistics is treated as a mental formation (a text concept, the meaning of the text, its integrity), the product of the process of judgment of the semantic perception of the text by the recipient, to some extent approaching the author's version of the text. The projection of the text includes the system of meanings (representations) which is formed at the recipient. The projection focuses attention on certain aspects of the reflection of the personality in the types of his activity and mental behavior, and nominates in the center of consideration the identity of the person, the sphere of his subjective world in which all phenomena and events of the outside world are refracted. Psycholinguistics studies projections of the text both at the reader and at the author; thus the mental form of the text existing in the consciousness of the person is studied. Individual experiences of the person in various forms and manifestations, which can include emotional and estimated experiences, esthetic senses, frames, schemes of situations, denotata, images of different modalities, and pragmatic knowledge, are used for the organization of a projection of the text. It is necessary to remember that the text represents a sequence of sign units connected by meaning, assuming single coverage of a rather large number of the facts of surrounding reality and therefore subject to obligatory interpretation by the recipient. According to T.M. Dridze, the recipient adequately interprets the text only if the main idea of the text is interpreted adequately to the author's plan (Krasnikh, 1998).

Discussion

However, in the interpretation of the same text by different recipients divergences can be observed. This especially concerns art texts, as their content is so ambiguous that it is possible to speak about a plurality of contents. The reader perceiving the text can create his own projection which can radically differ both from the projections of the texts of other recipients and from the author's projection. The variability of the perception of the same text can be explained by several psychological reasons, among which it is necessary to include the features of the motivational, cognitive and emotional spheres of the personality. Those motives and attitudes which have induced the person to address the certain text are important here as well. An important role is also played by the emotional spirit of the recipient at the time of the perception of the text. In modern linguistics there is an increasing interest in the psycholinguistic analysis of the text.
So, Krasnykh allocates the following elements in the analysis: a con-situation, time, the sequence of remarks of the communicants, the specific subject, the incentive to a speech action and the intention (purpose) of the generation of the speech, the verbal form of a product of speech and mental activity, the reaction to the specific speech action, the structure of the text, the logical and semantic structure of the text, the specific speech action, and the relations between speech actions. As the example of the psycholinguistic analysis of the text (according to V.V. Krasnykh), in the given work we have submitted the analyses of texts of art and colloquial styles. The analysis concerns a text fragment from the work by V. Tokareva, "A day without jam". Two con-situations are presented in the analyzed text: 1) the entrance hall of a school (the watchman Panteley meets); 2) the classroom of the fifth "B" (in the past it was the gym, and the Swedish wall bars remained here, on which the pupil Sobakin is hanging). Time: the slow current of time: "I am constantly looking at the watch to see how many minutes remained to the bell. And when I hear the bell at the end of the lesson, something even breaks inside me". Sequence of the remarks of the communicants: the dialogue at the beginning of the text takes place in a question-answer form between the teacher (the author) and his pupil (Sobakin). The conversation is conducted within colloquial style: (1) -Sobakin! -I begin sincerely. (4) -I hear and see better from here. (5) -Did you hear what I told you? (6) -And what, am I disturbing?.." The specific communicant is the story-teller on behalf of whom all events are being narrated. The incentive and the intention of a speech action, that is, of a creation of the text, is the psychophysiological representation of the language identity of the author. In the text the author is thinking about age, about the sad current of time. The reaction to the specific speech action: dialogues of the teacher with the pupil Sobakin, the purpose of which is to persuade him of the teacher's correctness and to motivate him to act (speech action), are presented in the work.

Conclusions

The psycholinguistic aspect of study of the text proves once again the consideration of the generation and the perception of the text as the result of the speech and mental activity of the individual, as a way of the reflection of reality in the consciousness by means of elements of the language system. In psycholinguistics the text is usually considered within a concrete communicative situation, necessary components of which are the subject of the communication, the author and the recipient. At the same time the form and the content of the texts are defined by the psychological features of the individuals (recipients), the participants of the communication.
SUMO-2 and PIAS1 Modulate Insoluble Mutant Huntingtin Protein Accumulation SUMMARY A key feature in Huntington disease (HD) is the accumulation of mutant Huntingtin (HTT) protein, which may be regulated by posttranslational modifications. Here, we define the primary sites of SUMO modification in the amino-terminal domain of HTT, show modification downstream of this domain, and demonstrate that HTT is modified by the stress-inducible SUMO-2. A systematic study of E3 SUMO ligases demonstrates that PIAS1 is an E3 SUMO ligase for both HTT SUMO-1 and SUMO-2 modification and that reduction of dPIAS in a mutant HTT Drosophila model is protective. SUMO-2 modification regulates accumulation of insoluble HTT in HeLa cells in a manner that mimics proteasome inhibition and can be modulated by overexpression and acute knockdown of PIAS1. Finally, the accumulation of SUMO-2-modified proteins in the insoluble fraction of HD postmortem striata implicates SUMO-2 modification in the age-related pathogenic accumulation of mutant HTT and other cellular proteins that occurs during HD progression. INTRODUCTION Huntington disease (HD) is caused by the expansion of a CAG repeat within the HD gene and the corresponding poly-glutamine track within the Huntingtin (HTT) protein (MacDonald et al., 1993). Symptoms include movement abnormalities, psychiatric symptoms, and cognitive deficits with accompanying degeneration of medium spiny neurons in striatum and loss of cortical volume (Ross and Tabrizi, 2011). Posttranslational modifications modulate protein function, and HTT is subject to multiple functionally relevant modifications including SUMOylation, ubiquitination, acetylation, palmitoylation, and phosphorylation (for review, see Ehrnhoefer et al., 2011;Pennuto et al., 2009). We previously demonstrated that a fragment of mutant HTT is modified by Small Ubiquitin-like MOdifier 1 (SUMO-1), and genetic reduction of SUMO in Drosophila-expressing mutant HTT exon 1 is protective (Steffan et al., 2004). SUMO-1 modification of mutant HTT in cells was also associated with increased toxicity and decreased aggregation by the striatal enriched small guanine nucleotide-binding protein Rhes (Subramaniam et al., 2009). SUMO modification is also implicated in Alzheimer's disease (AD), Parkinson's disease (PD), and amyotrophic lateral sclerosis (ALS), as well as other CAG repeat diseases (SBMA, DRPLA, SCA1, and SCA7) (for review, see Krumova and Weishaupt, 2013;La Spada and Taylor, 2010;Wilkinson et al., 2010). Although this modification is linked to pathogenesis, the precise mechanisms involved have not yet been elucidated. SUMO modification is the covalent attachment of SUMO to specific lysine residues within a target protein and regulates key processes involved in normal cellular function, including subcellular localization, protein stability, transcriptional regulation, and interaction properties of SUMO-modified proteins with their cellular targets (Cubeñas-Potts and Matunis, 2013;Gareau and Lima, 2010). Although highly transient, the effects of this modification are long lasting (Johnson, 2004). Four different forms, SUMO-1-SUMO-4, exist in mammals. SUMO-2 and SUMO-3 are nearly identical (97% identity) and often referred to as one protein (SUMO-2/SUMO-3) (Gareau and Lima, 2010;Johnson, 2004), and SUMO-4 is found only in a precursor form (Bohren et al., 2004). 
The SUMOylation pathway involves a cascade of enzymes, similar to ubiquitination, with a single E1activating enzyme (SAE1/UBA2), a single E2-conjugating enzyme (UBC9), multiple E3ligating enzymes (Protein Inhibitors of Activated STAT [PIAS], PC2, MMS21, and RanBP2), which provide substrate specificity, and multiple pro-teases (SENPs) that both cleave the SUMO moiety from target proteins and process SUMO itself ( Figure 1A). Given that SUMOylation is implicated in HD and other neurodegenerative diseases, identification of the E3s responsible for HTT modification may provide insight into mechanisms underlying HD and provide novel therapeutic targets. Here, we systematically evaluated the enzymatic machinery involved in HTT SUMO modification and report for the first time that HTT is modified by SUMO-2 and that PIAS1 functions as a HTT SUMO E3 ligase. This modification may serve more than one function because longer HTT polypeptides, both wild-type (WT) and mutant, are SUMO modified downstream of exon 1. SUMO-2 overexpression causes mutant HTT to accumulate in cells. In HD postmortem striatum, SUMO-2-modified proteins accumulate in the insoluble fraction, suggesting that this modification is relevant in vivo to HD. Further validating the potential in vivo relevance for SUMO modification pathways in HD, genetic reduction of dPIAS in Drosophila expressing expanded repeat HTT is neuroprotective. Taken together, these results provide a rationale for targeting SUMO-2 and PIAS1 as novel therapeutic targets for HD. HTTex1p Lys 6 and Lys 9 Are the Primary Sites of SUMO Modification We previously showed that truncated HTT (Httex1p) is SUMO-1 modified in cells and that Lys 6 (K6) and Lys 9 (K9) may represent primary sites for modification based on the absence of SUMO modification upon mutagenesis of target lysines (Steffan et al., 2004). From these studies, it was not clear which lysines are preferentially SUMO modified. Based on SUMO prediction software (SUMOplot, Abgent), the lysines in HTTex1p ( Figure 1A) do not fall within a classic SUMO consensus sequence but, rather, are low-probability SUMO sites or not predicted ( Figure 5A). However, classic SUMO consensus sequences are neither necessary nor sufficient for determining SUMO modification of a protein. To directly determine which of the three N-terminal lysines (K6, K9, or K15) are preferentially SUMO modified, HTTex1p and lysine mutants, mutated singly and in combination, were purified and analyzed using an in vitro SUMO modification system followed by mass spectrometry analysis. SUMO-1 (t95R) was used based on ease of detection of mono-SUMO modification, and to minimize the confounding effect of aggregation, unexpanded HTTex1p (25Q) constructs were used. When K6 and K9 were mutated singly to arginine (K6R, K9R), SUMO modification is reduced compared to WT, but when K6 and K9 (K6,9R) are mutated together, SUMO modification is greatly reduced, similar to the three lysine mutants ( Figure 1B), suggesting that K6 and K9 are the major target sites. Mass spectrometry analysis confirmed that K6 and K9 are indeed the primary sites of SUMO modification ( Figure S1). HTTex1p is subject to other posttranslational modifications such as ubiquitination and phosphorylation (Ehrnhoefer et al., 2011;Zheng and Diamond, 2012). 
Because (1) phosphorylation modulates SUMO modification of cellular proteins, (2) mimicking phosphorylation of serines 13 and 16 (S13 and S16, respectively) in expanded repeat HTTex1p (97QP) regulates SUMO-1 modification in cells (Thompson et al., 2009), and (3) this modification is relevant in vivo (Gu et al., 2009), we evaluated SUMO modification of WT HTTex1p (25Q) in the context of a HTT phosphomimic S13,16D in vitro. SUMO modification of the phosphomimetic HTT (25Q) was equal to or more rapid than for WT HTT control ( Figure 1B); mass spectrometry analysis revealed that SUMO modification of the S13,16D phosphomimic is restricted to K6 ( Figure S1). These results suggest that both in vitro and in cells, other posttranslational modifications, including phosphorylation, may influence SUMO modification of HTT. PIAS and SUMO Modification Proteins Are Highly Expressed in Mouse Brain Because SUMO E3 ligases provide specificity in targeting proteins where a modified lysine does not fall within a consensus site (Gareau and Lima, 2010), such as HTT, identifying the E3 ligase(s) that promotes HTT SUMOylation may be key to identifying therapeutic targets that regulate this modification. Based on the fact that PIASy was identified as a HTT-interacting protein in a yeast two-hybrid (Y2H) screen (Goehler et al., 2004), we evaluated whether PIAS proteins could function as HTT E3 ligases. In humans, the PIAS family consists of four members: PIAS1, PIASx (xα and xβ), PIAS3, and PIASy. Originally identified as PIAS (Shuai and Liu, 2005), the PIAS proteins are involved in regulation of transcription, immune responses, cytokine signaling, and E3 ligase activity (Liu and Shuai, 2008;Rytinki et al., 2009). As E3 ligases, they enhance SUMOylation of a number of different proteins, and multiple PIAS proteins can sometimes act as E3 ligases for the same substrate (Schmidt and Müller, 2002). To first establish that SUMO modification enzymes are present in brain regions relevant to HD, expression of SUMO-related proteins was quantified in mouse striatum and cortex using quantitative RT-PCR (qRT-PCR). SUMO-1, SUMO-2, PIAS, and SENP mRNAs are all expressed in WT cortex and striatum. Within each region, PIASx is most highly expressed, followed by PIAS1, with PIAS3 and PIASy having similar expression profiles ( Figure 2A). This suggests that any PIAS could potentially serve as a HTT E3 SUMO ligase based on its expression in vivo. The SENP proteins are also expressed in the brain, with SENP6 most highly expressed followed by SENP2 and SENP3, and finally SENP1 ( Figure 2A). SUMO-1 and SUMO-2 are expressed in mouse brain at relatively high levels. These data demonstrate that the SUMO machinery is present in relevant brain regions. To determine if these SUMO-related genes show expanded repeat HTT-dependent alterations in expression patterns, each was quantified in WT and R6/2 mouse cortex and striatum at 4, 8, and 12 weeks. R6/2 mice express a truncated HTT fragment (exon 1 with 150Qs) and show very rapid HD-like disease progression with onset by approximately 6 weeks, highly penetrant phenotypes at 8 weeks, and end-stage disease by 12 weeks (Mangiarini et al., 1996). At 4 and 8 weeks, some dysregulation begins to occur (Figures S2A and S2B), and by 12 weeks, there are statistically significant increases of SENP1, SENP3, and SUMO-1 in R6/2 cortex and of SENP1, SENP6, PIAS3, SUMO-1, and SUMO-2 in R6/2 striatum ( Figures 2B and 2C), suggesting that in vivo SUMO-modifying pathways may be perturbed in HD. 
The increases in SUMO-1 and SUMO-2 specifically in striatum, the region of greatest vulnerability to neurodegeneration, were particularly noteworthy. A similar pattern of increased SUMO-2 in R6/2 striatum but not cortex was also observed at the protein level ( Figures 2D and 2E), suggesting a SUMO-2-selective response in striatum. RNA was also isolated from dissected brain regions of BACHD mice (Gray et al., 2008) that express full-length human HTT as a BAC transgene with 97Qs and show progressive disease over a longer time course. SUMO-1 and SUMO-2 expression was similarly increased in BACHD striatal samples at 14 months ( Figure S2C), a time when disease phenotypes are evident, and SUMO-2 is increased in N171 mouse striatum at 12 weeks compared to 6 weeks (Jia et al., 2012). The progressive nature of these changes, with the most robust observed at late disease stages, is consistent with the concept of early changes in protein homeostasis systems (Schipper-Krom et al., 2012) with later profound disruption of the SUMO network at the gene expression and protein level. Taken together, the most consistent change in fragment and full-length mutant HTT mouse models is upregulation of SUMO, suggesting that this pathway may be dysregulated in vivo. HTTex1p Is Modified by Both SUMO-1 and SUMO-2 Modification by SUMO-2 is unique in its capacity to be induced by cellular stress (Saitoh and Hinchey, 2000). Based on our previous data that the stress-inducible kinase IKK can activate phosphorylation of HTT S13 and S16 and increase polySUMOylation of 97QP HTTex1p , and given that oxidative stress and other cellular stressors are implicated in HD (Browne and Beal, 2006), we investigated whether SUMO-2 can modify HTT. A cell-based SUMOylation assay was optimized to visualize and quantify SUMO modification ( Figure S3A; Extended Experimental Procedures), which for endogenous proteins, is a highly dynamic process, and only low levels of modification are typically observed for an individual protein at any given time (Johnson, 2004). Expanded repeat HTTex1p (46QP) with a C-terminal epitope tag (46QP-H4) was cotransfected with SUMO-1 (GFP-SUMO-1) or SUMO-2 (GFP-SUMO-2) and then purified under denaturing conditions using magnetic nickel beads (Ni-NTA). SUMO-1 and SUMO-2 can both modify HTTex1p, and when the lysines (K6, K9, and K15) are mutated to arginine (3R), SUMO modification cannot be detected ( Figure 3A). An unusual laddering below control HTT in the presence of SUMO-2, but not in the presence of lysine mutants, is observed for this construct; however, the laddering is not detected by anti-GFP, and its significance is not clear. PIAS1 Is a Candidate SUMO E3 Ligase for HTT Each of the PIAS proteins was evaluated for its ability to enhance SUMO modification of HTT, and the ratio of SUMO-modified HTT versus purified HTT was quantified using a western blot imaging system (Odyssey Imager; LI-COR). Because the PIAS proteins are regulators of transcription, like SUMO, and can act as coactivators and corepressors in addition to their E3 ligase activity (Rytinki et al., 2009), one-tenth mycactin under a CMVbased promoter was cotransfected to account for transcriptional effects ; Figure S3A). To control for differences in protein loading due to denaturing conditions precluding protein assays, each membrane was stained with a reversible protein stain (MEMCode). Finally, to control for differential HTT expression, 10% of each sample was trichloracetic acid (TCA) precipitated and subjected to western analysis. 
To demonstrate that the modified form of HTT does indeed represent SUMO-modified HTT, SUMO-1 was coexpressed in its mature, processed form (SUMO-1-GG) together with each of the SUMO isopeptidase SENPs (SENP1, SENP2, SENP3, SENP5, and SENP6) to show elimination by isopeptidases. SENP1, SENP2, and SENP6 each catalyzed removal of SUMO-1 from HTT, whereas SENP3 and SENP5 had no effect ( Figure 3B), providing validation that the shift in protein mobility represents SUMO-modified HTT and suggesting selectivity of isopeptidase action. To evaluate each PIAS protein for the ability to enhance HTT SUMO modification, 46QP-H4 was coexpressed with SUMO-1 (GFP-SUMO-1) and each of the PIAS proteins (PIAS1, PIASxα, PIASxβ, PIAS3, and PIASy). HTT is readily modified by SUMO-1 (Steffan et al., 2004) and ( Figure 3A) at saturating levels of SUMO-1. In order to detect enhancement of SUMO-1 modification, a titration was performed to determine the SUMO-1 concentration at which modification of HTT was barely detectable ( Figure 3C) and the addition of a relevant E3 ligase could increase HTT SUMO modification (Stankovic-Valentin et al., 2007). PIAS1 repeatedly enhanced SUMO-1 modification of HTTex1p ( Figure 4A, representative figure), as did Rhes as previously reported ( Figure S3B). We next investigated whether SUMO-2 modification of HTT is also sensitive to the addition of an E3 ligase. Because this modification is more difficult to visualize under basal conditions, limiting SUMO-2 levels was not necessary. SUMO-2 was cotransfected with individual PIAS cDNAs, and PIAS1 was also most effective at enhancing SUMO-2 modification of HTT ( Figure 4B, representative figure). In this assay, Rhes did not enhance the formation of a SUMO2-HTT species ( Figure S3C). Consistent with its role as an E3 SUMO ligase and the underlying rationale that direct interactions between E3 ligases and targets can promote SUMO modification, PIAS1 was evaluated for its ability to bind to HTT fragments in vitro. Using GST pull-down assays with HTT exon 1-encoding polypeptides, both normal range (20QP) and expanded repeat HTTex1p(51QP) interact strongly with PIAS1 (7.75% and 7%, respectively) ( Figure 4C). As controls, protein lacking the protein-interacting proline-rich domain of Htt (20Q and 51Q) or expressing the proline-rich region alone showed greatly reduced interactions. These results suggest that a direct interaction between PIAS1 and HTTex1p may facilitate SUMO modification. Longer HTT Fragments Are SUMOylated A potentially critical and initiating cleavage event occurs at a caspase-6 cleavage site of HTT (Graham et al., 2006), creating a polypeptide of 586 amino acids (HTT 586 aa). Using bio-informatic tools (SUMOplot; Abgent), five additional lysines downstream of HTTex1p are predicted to be SUMO modification sites of high (two lysines) or lower (three lysines) probability ( Figure 5A). Of interest, further analysis of the 586 aa fragment reveals up to 13 potential overlapping SUMO-interacting motifs (SIMs) within this region depending upon consensus sequence designation ( Figure 5A) (Tatham et al., 2008). SIMs are noncovalent interactions that may enhance SUMOylation of the SIM-containing proteins themselves (Blomster et al., 2010). Therefore, the SUMO and SIMs in HTT may work together to regulate HTT SUMOylation. 
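Since the site predictions discussed above start from consensus motifs (the classic psi-K-x-E/D sequence and its inverted form, as scored by tools such as SUMOplot), a minimal regex-based scan of that kind is sketched below. The hydrophobic set used for psi and the huntingtin N17 test sequence are illustrative choices; real predictors use richer scoring, and this sketch does not attempt to score SIMs. Consistent with the observation above that K6, K9 and K15 do not lie within a classic consensus, the scan returns no hits for the N17 sequence.

    import re

    # Forward consensus psi-K-x-(E/D) and inverted (E/D)-x-K-psi, with psi taken
    # here as a large hydrophobic residue (I, L, V, M, F). Lookahead allows
    # overlapping matches.
    FORWARD = re.compile(r"(?=([ILVMF]K.[ED]))")
    INVERTED = re.compile(r"(?=([ED].K[ILVMF]))")

    def consensus_lysines(seq):
        hits = set()
        for m in FORWARD.finditer(seq):
            hits.add(m.start() + 2)   # K is the 2nd residue of the forward motif (1-based)
        for m in INVERTED.finditer(seq):
            hits.add(m.start() + 3)   # K is the 3rd residue of the inverted motif (1-based)
        return sorted(hits)

    # First 17 residues of huntingtin (contains K6, K9, K15).
    print(consensus_lysines("MATLEKLMKAFESLKSF"))   # -> [] : no classic consensus sites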
To determine if longer HTT fragments are SUMOylated independently of the lysines within the first 17 aa, HTT 586 aa containing either an unexpanded (25Q-586 aa) or expanded polyQ repeat (137Q-586 aa) was coexpressed with SUMO-1 ( Figure 5B) because this modification should be detected under basal conditions. As a control, HTT 586 aa constructs with K6,9,15R mutations that diminish SUMO modification within the first 17 aa were tested. Immunoprecipitated HTTex1p (46QP-H4) is monoSUMOylated by SUMO-1 ( Figure 5B), but the 3R mutant form is not modified as expected. However, other lysines downstream of this amino-terminal domain can also be SUMOylated because SUMO-1 modification is observed even in the presence of the K6,9,15R mutation in either its expanded or unexpanded forms. PIAS1 Increases SUMO-2 Modification of HTT 586 aa SUMO-2 modification of HTT 586 aa is not detected in the absence of external stimuli ( Figure 5E); therefore, longer HTT fragments were evaluated for SUMO-2 modification in the presence of an E3-SUMO ligase. Potential interactions between longer HTT fragments and E3 SUMO ligases were first investigated using a large-scale Y2H screen. Here, the 586 aa fragment of HTT was used as bait, and PIAS1 emerged as the single E3 SUMO ligase interaction partner ( Figure 5C), supporting its relevance even for longer HTT polypeptides. Previously identified interactors were also tested in this system and confirmed, including HIP2 and GIT1 (Goehler et al., 2004). To further validate this interaction in vitro, coimmunoprecipitation of HTT from cell lysates transfected with HTT (25Q-586) and PIAS1 with or without cotransfected SUMO-1 was tested. PIAS1 binds both transfected HTT (586 aa) and full-length endogenous HTT based on PIAS1 detection even in the absence of exogenous HTT 586 overexpression ( Figure 5D). SUMO-2 modification of longer HTT polypeptides was therefore tested in the presence of PIAS1. Based on in vitro results ( Figure 1B) and because SUMO-2 is stress inducible similar to the signal transduction cascades that modulate HTT phosphorylation, S13, 16D phosphomimic polypeptides were also tested. Expanded repeat HTT 586 aa fragments are SUMO-2 modified in the presence of PIAS1, and this modification is enhanced by mimicking phosphorylation (S13, 16D-586 aa) for both expanded and unexpanded HTT 586 aa ( Figures 5E and 5F). SUMO-2 Promotes Accumulation of Insoluble Mutant HTT Emerging data suggest that SUMOylation may influence aggregation and accumulation of aggregation-prone neurodegenerative disease proteins Tatham et al., 2011). To address the functional consequences of SUMO modification of mutant HTT on disease, the involvement of SUMO-1 and SUMO-2 modification on the formation of insoluble HTT species was evaluated in HeLa cells, where SUMO modification systems are highly active. Our previous studies evaluated visible inclusion formation and levels of soluble mutant HTT and showed that fusion of SUMO to the HTTex1p N terminus promoted stabilization of HTTex1p (Steffan et al., 2004). However, we and others have since identified the HTTex1p N terminus as an important mediator of aggregation, localization, and protein stability (Atwal and Truant, 2008;Rockabrand et al., 2007;Sivanandam et al., 2011;Thompson et al., 2009;Zheng et al., 2013), which may have been masked by the presence of the SUMO moiety. SUMO-1 also decreased mutant HTT aggregation in the presence of Rhes and increased toxicity in cells (Subramaniam et al., 2009). In each case, only SUMO-1 was evaluated. 
For HD, the process of aggregation and specific aggregation intermediates are likely to be critical to pathogenesis. Using a centrifugation protocol published for α-synuclein , lysates were separated into detergent-soluble and detergent-insoluble fractions. The detergent-soluble fraction contains monomeric HTT, which includes overexpressed mutant HTTex1p (97Q) and endogenous full-length HTT (indicated by arrows in Figure 6A, SOLUBLE fraction). In contrast, the detergent-insoluble fraction contains only high molecular weight (HMW) HTT in the samples containing 97Q-HTTex1p ( Figure 6A, INSOLUBLE fraction), which are likely multimers or potentially oligomers of soluble HTT. MG132 is a proteasomal inhibitor that causes the accumulation of mutant HTT in cells (Lee and Goldberg, 1998). To investigate the relationship between proteasomal degradation and SUMO modification and to analyze levels of soluble and insoluble HTT species in the presence of SUMO-1 and SUMO-2, HeLa cells were transiently transfected with 97Q-HTTex1p and treated with MG132 (5 μM). Treatment with MG132 causes a robust increase in HMW mutant HTT in the insoluble fraction ( Figure 6A), with accumulation of ubiquitinmodified cellular proteins (data not shown). In contrast, soluble, monomeric HTT levels are maintained or slightly decreased ( Figure 6A), supporting the concept that impairment of proteasomal function increases levels of aggregating HTT. Immunoprecipitation was performed to increase HTT detection. Addition of SUMO-1 had little to no additional effect on soluble HTT or insoluble HMW HTT levels ( Figures 6B and 6C). However, the addition of SUMO-2 caused an increase in insoluble HTT ( Figures 6B and 6C) that was comparable to proteasome inhibition. This effect is not augmented by combined SUMO-2 expression and proteasome inhibition, suggesting that SUMO-2 modulates accumulation and aggregation of mutant HTT in a manner that mimics proteasome inhibition. This regulation of insoluble HTT levels by SUMO-2 is dose dependent. When cells were treated with increasing amounts of SUMO-2, verified by increasing levels of mono-SUMO-2 by western analysis ( Figure 6D, boxed in first panel), increasing levels of HMW HTT were observed, whereas monomeric levels showed a corresponding decrease in the insoluble fraction, potentially reflecting insoluble monomeric HTT levels. Because SUMO-2 modulates HMW HTT species and SUMO-2 modification of HTT appears to require the presence of a SUMO E3 ligase, we tested whether PIAS1 alone can modulate mutant HTT accumulation. When PIAS1 was over-expressed in the presence of expanded repeat HTT with 97Qs, the detergent-insoluble HTT HMW "oligomeric" species increased, whereas soluble HTT appeared to be unaffected ( Figure 7A), and a reduction of the HMW oligomeric HTT species is observed following PIAS1 acute knockdown ( Figure 7B). Taken together, these data demonstrate that PIAS1 can regulate the accumulation of insoluble HMW HTT polypeptide species, suggesting that modulation of PIAS1 may influence pathogenesis in HD. Genetic Reduction of Su(-var)2-10 Is Neuroprotective in Mutant HTTex1p-Expressing Drosophila To validate the potential involvement of PIAS proteins in HD pathogenesis in vivo, the single Drosophila PIAS protein, Su(-var)2-10 (dPIAS) (Hari et al., 2001), was evaluated for its effect on HD-like phenotypes in a fly model (Steffan et al., 2001). 
When expressed in all neurons from embryogenesis on, expanded repeat HTTex1p (93Q) causes a progressive loss of visible rhabdomeres (photoreceptor neurons in the eye) (Marsh and Thompson, 2006) and a decrease in the number of flies that eclose from the pupal case as adult flies ( Figure 7C). When expressed in a background of heterozygous genetic reduction of dPIAS, the number of visible photoreceptor neurons and survival (eclosion) are both increased ( Figure 7C). The observed neuroprotection is not simply a consequence of decreased transgene expression based on analysis of HTT RNA by qPCR (data not shown). These results are consistent with our previous observations showing that reduction of Drosophila SUMO (smt3) was protective in this same model (Steffan et al., 2004). Insoluble SUMO-2-Modified Proteins Are Increased in Human HD Brain To investigate whether SUMO-modified proteins accumulate in HD brain tissue compared to control subjects and whether SUMO-1 or SUMO-2 has selective effects, postmortem striata from three control and three HD brains were evaluated. Each of the HD subjects displayed a remarkable accumulation of SUMO-2-modified protein compared to controls in insoluble fractions ( Figure 7D). To a lesser extent, accumulation of SUMO-1 is also observed. Differences in the levels of SUMO-2-modified protein do not correlate with ubiquitin reactivity in control and HD brain fractions but, rather, appear to be specific for SUMO. Although we cannot conclude from these data that the increased SUMO-2 reactivity represents an increase in mutant HTT SUMO-2 modification per se, a HTT antibody raised against aa 115-129 shows a similar pattern of increased HMW HTT species in HD samples compared to controls ( Figure 7D), suggesting that HTT is included in the proteins that accumulate in HD striata. Taken together, these results support SUMO-2 relevance in HD pathology and that SUMO-modifying enzymes may be valid therapeutic targets. DISCUSSION SUMO modification contributes to an impressive array of regulatory mechanisms that have critical biological functions (Bruderer et al., 2011). In turn, dysregulation of this cellular process is implicated in diseases ranging from cancer to neurological disease (Gareau and Lima, 2010). Although SUMO modification is transient, downstream consequences are long lasting and impact processes such as protein folding, subcellular localization, stability, transcriptional regulation, and protein activity (Geiss-Friedlander and Melchior, 2007), all of which are affected in HD (Zheng and Diamond, 2012). Implicating a role in neurodegenerative diseases, a growing number of causative neurodegenerative disease proteins either colocalize with SUMO molecules or are target proteins for SUMO modification (for review, see Krumova and Weishaupt, 2013;Wilkinson et al., 2010). To date, the primary mechanisms involve altered solubility of or visible inclusion formation by these disease proteins, with ensuing protective (SBMA), deleterious (SCA7), or mixed effects (α-synuclein), depending on the protein context and form of SUMO tested. Enzymes involved in these processes, such as Rhes, are beginning to emerge Subramaniam et al., 2009). Here, we demonstrate that SUMO-2 modification of HTT, a stress-responsive modification pathway not previously investigated for HTT, regulates the accumulation of insoluble mutant HTT. 
This SUMO form is consistently upregulated in striata from several HD mouse models, at a stage of disease anticipated to display significant dysregulation of protein homeostasis network components. Furthermore, PIAS1, which selectively enhances HTT SUMO-1 and SUMO-2 modification and is expressed in brain, is integral to this accumulation. The functional relevance of these findings is further validated by (1) the neuroprotection observed upon reduction of the single Drosophila PIAS (dPIAS), which is most similar to PIAS1 (Hari et al., 2001), in flies expressing a mutant fragment of HTT, and (2) the profound accumulation of SUMO-2-modified HMW protein in human HD brain. The finding that the stress-responsive SUMO-2 is likely most relevant to HD pathogenesis over basal SUMO-1 modification by regulating the accumulation of HMW and likely poly-SUMOylated protein is consistent with the prevailing literature that chronic expression of mutant HTT causes cellular stress, including oxidative stress (Turner and Schapira, 2010). This cellular stress, which is likely progressive and could therefore promote stress responses, including SUMO-2 modification, could then contribute to disease. Validation of SUMO-2 involvement in response to neuronal stressors has recently emerged in several systems, including APP overexpression in AD mice (McMillan et al., 2011), transient cerebral ischemia in brains of ground squirrels (Lee et al., 2012), and transient ischemia in cells , in some cases, promoting a neuroprotective response to ischemic stress (Datwyler et al., 2011). We previously reported that mimicking phosphorylation of S13 and S16 reduces monoSUMOylation and increases polySUMOylation of mHTTex1p with a highly expanded 97Q repeat . Here, we confirm that K6 and K9 within the first 17 aa domain are indeed SUMO-1 modification sites even though these lysines do not lie within classic SUMO consensus sequences. Intriguingly, SUMO-1 is conjugated to K6 and K9 equivalently in the absence of other modifications, whereas phosphomimetic substitutions of S13 and S16 appear to block SUMO modification on K9 or promote SUMO-1 modification on K6, which may be significant to regulation of other HTT modifications, such as K9 acetylation. In cells, mimicking phosphorylation at these sites enhances SUMO-2 modification in the presence of a relevant E3 SUMO ligase. Given that phosphorylation of HTT is responsive to inflammatory cues , that PIAS1 is involved in immune function, and that inflammation is increased in HD (Björkqvist et al., 2009;Khoshnan et al., 2004), it is likely that SUMO-2 modification is also responsive to inflammatory cues that appear early in HD. A key feature in HD is the accumulation of mutant HTT fragments containing the polyQ expansion (Landles et al., 2010). WT and mutant HTT can be cleaved by caspases, calpains, and aspartyl proteases to form N-terminal fragments (Warby et al., 2008), which become toxic when in the context of the expanded polyglutamine repeat. Studies in mice expressing a caspase-6-resistant form of mutant HTT suggest that HTT proteolysis specifically at 586 may be critical to HD pathogenesis (Graham et al., 2006;Warby et al., 2009), and overexpression of transgenic-expanded repeat HTT 586 supports potential toxicity of this fragment (Waldron-Roby et al., 2012). Analysis of longer polyQ polypeptides revealed that unexpanded and expanded HTT 586 fragments are SUMOylated downstream of HTTex1p. 
The caspase-6 cleavage fragment of HTT (586 aa) has five predicted SUMOylation sites Cterminal to exon 1, with potentially greater than 13 overlapping SIMs within this region, depending on the SIM evaluated. SIMs are noncovalent protein-protein interactions that have recently emerged as having critical regulatory properties (Gareau and Lima, 2010). For example, the ubiquitin ligase RNF4 has multiple SIMs that recognize polySUMO-2 chains and ubiquitinate them for degradation by the proteasome (Geoffroy and Hay, 2009;Tatham et al., 2008). Indeed, these SIMs may be important signal transduction inducers downstream of polySUMOylation events (Sun and Hunter, 2012). SIMs within target proteins can also enhance their SUMO modification and are found within several E3-SUMO ligases, including PIAS1 (Gareau and Lima, 2010). The implications of these multiple SIMs in HTT are not yet clear; however, we are actively pursuing this area of investigation. The interplay between SUMO-2 and proteasome inhibition is consistent with recent proteomic analysis of extracts from HeLa cells treated with MG132 to identify SUMO-2modified proteins . In these studies, all SUMO paralogs accumulated upon treatment with MG132; however the greatest response exhibited was by SUMO-2, suggesting that SUMO modification of cellular proteins is not only involved in regulating proteostasis of unfolded and misfolded proteins within a cell but may in fact represent a response to the presence of misfolded or oligomerized proteins, such as mutant HTT, and be involved in protein clearance mechanisms. This hypothesis is supported by studies showing that the presence of mutant HTT polypeptide alone in C. elegans can cause the misfolding and inactivation of temperature-sensitive mutant proteins to a similar degree as heat shock (Gidalevitz et al., 2006). These findings suggested that mutant HTT protein expression is sufficient to impact the protein homeostatic network and relevant to the work described here, accumulation of SUMO-2-modified cellular proteins. Furthermore, when the production of misfolded proteins exceeds the capacity of the chaperone and UPS systems, mimicked here by proteasome/cathepsin inhibition by MG132, then these proteins may be targeted for degradation by autophagy, which also becomes impaired late in disease. As protein clearance mechanisms become impaired upon aging, modified proteins normally targeted for degradation by post-translational modification, such as phosphorylation and acetylation, may accumulate and take on toxic functions. Supporting this concept, proteasome inhibition promoted formation of aggregates containing SUMO-modified αsynuclein . In summary, the work presented here supports a general mechanism in HD whereby the chronic expression of expanded repeat HTT promotes general protein misfolding and initiation of stress response pathways that promote SUMO-2 modification of HTT, progressively resulting in accumulation of insoluble and HMW species that may be a reflection of ongoing pathogenesis. In addition, loss of normal HTT functions may also contribute to the accumulation of SUMOylated proteins (Steffan, 2010). 
Initially, SUMO-2 modification and polySUMOylation are likely to facilitate normal cellular clearance mechanisms with integration between SUMOylation and ubiquitination and serve in a neuroprotective capacity; however, these are likely to become toxic as pathways become impaired and these species accumulate and cause further disruption of overall cellular protein homeostasis, reflected here by increased expression and level of SUMO-2 and other SUMO modification cellular components. This is demonstrated by the accumulation of SUMO-2 protein in a HMW insoluble fraction from human HD striatum, the region most profoundly affected in HD. We further identify a HTT E3-SUMO ligase, PIAS1, which is expressed in relevant brain regions and appears to have a pivotal role in the regulation of SUMO-2-modified HTT, providing a novel and selective therapeutic target. GST Pull-Down Assays Assays were performed as previously described (Steffan et al., 2000). Cell Culture HeLa cells were plated on 10 cm plates and cultured in DMEM plus 10% FBS. For cDNA only, cells were plated on day 1 and transiently transfected with 6 μg of total DNA and 8 μl Lipofectamine 2000 (Invitrogen) on day 2, media were changed on day 3, and cells were collected on day 4. For siRNA only, transient transfections were carried out as described above, except that 720 pmol siRNA and 36 μl of RNAi Max were used. For combined siRNA and cDNA experiments, cells were transfected with siRNA, media were replaced on day 2, and cDNA was transfected 12 hr after siRNA. Days 3 and 4 are as described above. Cells were transfected at ~70% confluency for DNA and ~30%-50% confluency for siRNA. Western Blot Analysis A total of 8% bis-acrylamide gels and Invitrogen 4%-12% bis-tris mini gels were used for SDS-PAGE, proteins were transferred to nitrocellulose membrane, and nonspecific proteins were blocked with SuperBlock Blocking Buffer (Thermo Scientific). Two types of detection were used: chemiluminescence/film, or Odyssey Imager/LI-COR (Extended Experimental Procedures). Experiments were performed in triplicate with representative images shown. Soluble/Insoluble Fractionation HeLa cells were collected in lysis buffer containing 10 mM Tris (pH 7.4), 1% Triton X-100, 150 mM NaCl, 10% glycerol, and 0.2 mM PMSF (Roche Complete Protease Mini and PhosphoStop pellets). Cells collected were lysed on ice for 60 min before centrifugation at 15,000 × g for 20 min at 4ºC. Supernatant was collected as the detergent-soluble fraction. The pellet was washed 3× with lysis buffer and centrifuged at 15,000 × g for 5 min each at 4ºC. The pellet was resuspended in lysis buffer supplemented with 4% SDS, sonicated 3×, boiled for 30 min, and collected as the detergent-insoluble fraction. Protein concentration was quantitated using Lowry Protein Assay (Bio-Rad) Soluble/insoluble fractionation protocol was previously described . Filter Retardation Assay A total of 30 μg of detergent-soluble and detergent-insoluble protein in 200 μl of 2% SDS was boiled for 5 min and run through a dot blot apparatus under a vacuum onto a cellulose acetate membrane. Membrane was then washed 3× with 0.1% SDS and then blocked in 5% milk and subject to western blot analysis (previously described by Sontag et al., 2012;Wanker et al. 1999). Automated Y2H Screening GATEWAY technology (Invitrogen) was used to subclone 25Q-586 and 73Q-586 aa cDNAs encoding human HTT fragments into Y2H expression plasmids. "Gateway compatible" cDNAs encoding selected proteins were generated by PCR amplification. 
Amplified DNA products were isolated from agarose gels and combined with the pDONR221 plasmid (Invitrogen), creating the desired entry DNA plasmids. The identity of all PCR products was verified by DNA sequencing. Subsequently, utilizing LR recombination, pBTM116_D9 plasmids (for the production of LexA DNA-binding domain fusions) were generated encoding HTT bait proteins for automated Y2H interaction mating. The identity of the plasmids was verified by BsrGI restriction digestion. Bait plasmids were transformed into the L40ccua MATa yeast strain, and yeast clones were individually mated against a matrix of MATα yeast clones encoding 16,888 prey proteins (with Gal4 activation domain fusions) using pipetting and spotting robots. The automated Y2H screenings were repeated three times. Interaction mating experiments and imaging were performed as described previously by Stelzl et al. (2005). Human Postmortem Brain Human autopsy brain tissue, from the striatum of control and patients with HD, was obtained from individuals with ages at autopsy from 72 to 93 years of age and grades 3-4 and flash frozen with postmortem intervals ranging from 13 to 22 hr. Frozen human brain tissue was homogenized on ice using T-PER Tissue Protein Extraction Reagent (Thermo Scientific) containing a complete mini pellet (Roche), phosphatase inhibitor #1 (Sigma-Aldrich), phosphatase inhibitor #2 (Sigma-Aldrich), and 25 mM NEM. Lysates were ultracentrifuged at 45,000 rpm at 4ºC for 60 min, and the pellet was homogenized on ice in 70% formic acid, ultracentrifuged at 45,000 rpm at 4ºC for 60 min, and the supernatant was collected as the "insoluble fraction." Supplementary Material Refer to Web version on PubMed Central for supplementary material. (A) Schematic illustration of the SUMOylation pathway. SUMO is expressed as a precursor protein (SUMO-X) that is processed by SUMO-specific proteases (SENPs) to expose a Cterminal diglycine motif (-GG). Enzymatic reactions are similar to ubiquitination and include activation by the SUMO E1-activating enzyme (SAE1/UBA2), to the SUMO E2conjugating enzyme (Ubc9), and transfer of SUMO to target lysines with or without the assistance of the SUMO E3-ligating enzymes. (B) Time course of SUMO-1 modification of His-tagged HTTex1p (25Q)-purified proteins (WT, K6R, K9R, K6,9R, K6,9,15R, and S13,16D) was performed in vitro. SUMOylation was visualized using anti-His antibody. WT HTTex1p is SUMOylated within 16 min, K6R or K9R mutations delay SUMOylation, and combined mutations (K6,9R or K6,9,15R) greatly reduce SUMOylation. Mutations that mimic phosphorylation (S13,16D) alter kinetics of SUMO-1 modification with SUMO modification observed beginning by 4 min. See also Figure S1. (A) qRT-PCR analysis of SUMO-modifying proteins and enzymes in cortex and striata of WT mice at 12 weeks. Relative expression for all the SUMO enzymes is normalized to mouse β-actin. (B and C) qRT-PCR of SUMO mRNAs from 12-week-old WT and R6/2 mouse cortex (B) and striatum (C). SUMO enzyme mRNAs are differentially expressed in R6/2 versus control with statistically significant increases in SENP1 (p = 0.01), SENP3 (p = 0.02), and SUMO-1 (p = 0.003) in cortex and SENP1 (p = 0.02), SENP6 (p = 0.04), PIAS3 (p = 0.007), SUMO-1 (p = 0.02), and SUMO-2 (p = 0.02) in striatum. Samples were analyzed in quadruplicate and normalized to mouse β-actin. Data are shown as R6/2 expression relative to WT levels set at 1 for each enzyme with ± SD (n = 4). *p < 0.05. n.s., not significant. 
(D and E) Western blot analysis of SUMO-2 in 14-week R6/2 cortex (D) and striatum (E) versus age-matched controls. SUMO-2 is upregulated in R6/2 striatum versus control (p = 0.026; n = 4). Protein is normalized to α-tubulin and quantitated using ImageJ. Note that only the 12-week time point is shown in (B) and (C). Please see 4- and 8-week time points in Figures S2A and S2B. (A) Western analysis of whole-cell lysates from HeLa cells transfected with His-SUMO-1 or SUMO-2 and/or 97Q-HTT exon 1 and treated with 5 μM MG132 for 18 hr. Lysates were separated using differential centrifugation into a detergent-soluble fraction (SOLUBLE) with 1% Triton X-100 and a detergent-insoluble fraction (INSOLUBLE) with 4% SDS. Western blot probed with anti-HTT shows full-length endogenous HTT in the SOLUBLE fraction (upper arrow) and 97Q-HTTex1 (lower arrow) (left panel). In the INSOLUBLE fraction, HTT HMW species are indicated by the bracket and asterisks (right panel). (B) MG132 and SUMO-2 cause mutant HTT to accumulate as HMW species (bracket and asterisk). Western blot showing IP with HTT antibody-crosslinked beads from the detergent-insoluble fraction probed with the anti-HTT antibody. (C) Mutant HTT (97Q-HTTex1) fibrils are detected with anti-HTT in the insoluble fraction upon treatment with MG132 or addition of exogenous SUMO-2. (A) Western blot analysis of HeLa cells overexpressing exogenous PIAS1 in the presence of mutant HTT (97Q) when separated into detergent-soluble and detergent-insoluble fractions. No difference is detected in monomeric HTT (top panel, Soluble), but HMW HTT levels increase with PIAS1 overexpression. Anti-PIAS1 antibody (Invitrogen) was used to detect PIAS1. (B) Acute knockdown of PIAS1 decreases HMW HTT species in the detergent-insoluble fraction. PIAS1 knockdown is detected in detergent-soluble and -insoluble fractions using anti-PIAS1 antibody. (C) Drosophila melanogaster expressing mutant HTTex1p (93Q) in a reduced Su(var)2-10/dPIAS genetic background exhibits statistically significantly reduced photoreceptor neuron degeneration (left panel, p = 0.033) when comparing dPIAS/+ to +/+ flies and increased overall survival (right panel, p = 0.047) when comparing dPIAS/+ to +/+ flies. Significance was measured by Student's t test. (D) HMW SUMO-2 accumulates in postmortem HD striata. Western blot analysis of the insoluble fraction from three control and three HD postmortem striata as described (Experimental Procedures).
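The qRT-PCR quantification described in the legends above (enzyme expression normalized to mouse β-actin and reported relative to WT set at 1) corresponds to a standard ΔΔCt-style calculation. The sketch below illustrates the arithmetic with hypothetical Ct values; the sample names, replicate numbers, and Ct values are invented for illustration and are not the study's data.

```python
import numpy as np

def relative_expression(ct_target, ct_actin, ct_target_wt, ct_actin_wt):
    """Fold change of a target transcript relative to the WT group, normalized to beta-actin.

    All inputs are arrays of Ct values from replicate samples (hypothetical here).
    """
    delta_ct = np.asarray(ct_target) - np.asarray(ct_actin)            # normalize to beta-actin
    delta_ct_wt = np.mean(np.asarray(ct_target_wt) - np.asarray(ct_actin_wt))
    delta_delta_ct = delta_ct - delta_ct_wt                            # relative to the WT mean
    return 2.0 ** (-delta_delta_ct)                                    # fold change, WT ~ 1

# Hypothetical quadruplicate Ct values for one SUMO enzyme in R6/2 vs WT striatum.
r6_2 = relative_expression([24.1, 24.3, 24.0, 24.2], [17.0, 17.1, 16.9, 17.0],
                           [25.0, 25.2, 24.9, 25.1], [17.1, 17.0, 17.0, 16.9])
print("R6/2 expression relative to WT:", np.round(r6_2, 2), "mean:", round(float(r6_2.mean()), 2))
```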
Analysis of automatic transmission fluid in CNG bus engine by ferrographic technique

Public transportation helps to reduce road congestion and gasoline consumption, saves money, and enhances personal opportunities. However, maintenance problems are a concern in public transportation because they can cause disruption of service. In this study, this issue was addressed in order to help predict wear so that a problem can be anticipated before it happens. The study looks into the Compressed Natural Gas (CNG) engine of a Nadi Putra bus and the inconsistent mileage range at which the Automatic Transmission Fluid (ATF) is changed. The objective of this project is to study gearbox conditions in a bus (Nadi Putra) in terms of lubrication and wear. Ferrographic analysis was carried out to determine the types of wear particles and a suitable mileage range for changing the ATF, by analyzing samples of transmission fluid taken from the bus gearbox. Optical microscopy and the Predict Chart were used to characterize the samples and assign the observed particles to wear groups. It was observed that rolling-element fatigue and cutting wear were the major findings in all oil samples tested.

Introduction
The integration of natural gas vehicles into the transportation sector is promoted by the availability of domestic natural gas reserves, improved fueling infrastructure, and state incentives. Many heavy-duty fleet applications, such as refuse trucks, transit buses, school buses, and delivery trucks, have switched from diesel to compressed natural gas (CNG) because of its attractive features. The natural gas used in natural gas vehicles (NGVs) is the same gas used in the domestic sector for cooking and heating. CNG is produced by compressing conventional natural gas to less than 1% of the volume it occupies at standard atmospheric pressure and storing it in rigid containers at a pressure of 200-248 bar (2900-3600 psi) [1]. Natural gas is characterized by soot-free combustion when used in internal combustion engines, making it a clean-burning fuel. These characteristics make natural gas more environmentally safe than diesel [2]. Natural gas is becoming one of the most important energy resources and currently accounts for 23% of world primary consumption [3]. The rise of oil prices has given CNG vehicles the opportunity to prove themselves as a cheap and clean fuel, making countries more energy sovereign by reducing dependency on oil. Transport type, distance, and means of transport are among the factors to be considered when using different fuels and alternative drives, and urban areas hold the greatest possibilities for using alternative fuels. This goes back to the advantages of CNG, whereby applying natural gas to power buses has a positive impact on eutrophication, acidification, and photochemical ozone creation potential [4] and considerably limits elemental and organic carbon and PAHs [5]. Relatively large reserves are a further advantage of natural gas. Wear is one of the common failures occurring in natural gas vehicles. It is crucial to prevent this failure, since the source of failure can be predicted by using ferrography techniques. The Ferrography Technique (FT) is a method in which particles suspended in a flow stream are separated on a glass slide through the interaction between an external magnetic field and the magnetism of the particles.
Levi and Eliaz [6] reported that a reliable procedure was developed for condition monitoring of an open-loop oil system, based mainly on analytical ferrography, in a study involving a closed-loop dynamic system. The origin, mechanism, and level of wear can be estimated by determining the number, shape, size, texture, and composition of the particles on the ferrogram. This technique was described in detail by previous researchers [7,8]. In the present study, oil analysis by FT has been used to identify the material composition of the wear debris. The investigation was conducted on a series of CNG gearbox transmissions with the purpose of correlating ferrographic results with the actual wear conditions and comparing wear evolution in gearboxes of the same type.

Methodology
In this research, the oil samples were collected from a local bus company, Nadi Putra, whose buses are shown in Figure 1(a). The oil samples were taken from the CNG bus gearbox (DIWA.3 Automatic Transmission) as shown in Figure 1. The oil sample analysis was performed using a Ferrogram Maker (FM-III), which provides a strong magnetic field used to separate the wear debris from the oil sample onto the ferrogram glass slide. The testing process started by preparing 3 ml of the lubricant fluid and 3 ml of n-heptane and mixing them on a shaker machine. The oil sample was then run on the FM-III and analyzed in detail using an optical microscope (Olympus BX51M). Both the FM-III and the optical microscope are shown in Figure 2.

Results and Discussion
The results of the ferrographic tests were analyzed using the optical microscope and the observations were captured. The results can be categorized into different groups based on mileage. Four different mileages were tested: (1) 20000 km, (2) 30000 km, (3) 40000 km, and (4) 50000 km. Figure 6 shows images of the wear particle morphology present on the oil sample slides for mileages of 20000 km, 30000 km, 40000 km, and 50000 km, respectively. The wear particle morphologies obtained using the optical microscope were then identified and compared to the Predict chart. This was done to find the most similar profile in the chart, from which the wear mechanism was approximated. Figure 3(a) shows normal rubbing wear, whilst Figure 3(b) shows a particle typical of the break-in period for components having a ground or machined surface finish, at 500X magnification. These particles are formed in decreasing quantity until the original finishing marks are covered or worn away. Figure 3(c) is an image of fine cutting wear particles at 500X magnification. The height of the largest particle is greater than the depth of focus of the optical microscope, so the bent portion above the glass substrate cannot be brought into focus simultaneously with the substrate. Figure 3(d) shows a typical distribution of particles, at 200X magnification, observed during the normal running of rolling-element bearings; these spherical particles are generated by rolling-element bearing fatigue. In Figure 3(e), scuffing wear particles at 500X magnification were identified. As can be observed from Figure 6, the wear morphologies present at 50000 km (normal rubbing wear, break-in period, cutting wear, and gear fatigue) were also present at 30000 km and 40000 km. From the observations in this study, wear particles from normal rubbing, break-in period, cutting, scuffing, severe sliding, rolling-element fatigue, and gear fatigue wear are found in the used transmission fluid of the CNG bus gearbox.
Apart from these wear particles, corrosive wear, red oxides, black oxides, copper alloy wear, etc. were not found. A summary of all the results is presented in Table 1.

Interpretations of Observed Wear Morphology
From the microscopic evaluation summarised in Table 1, seven types of wear particle morphology were observed. It can be seen that cutting wear and rolling-element fatigue wear were present in most of the results for the 20k, 30k, 40k, and 50k mileages. Cutting wear occurs as a result of one surface penetrating another, generating very thin, long wire-like particles and large stripes with a thickness of about 0.25 micrometers (μm). According to Sondhiya and Tiwari et al. [9,10], these particles are generated by two factors: misalignment of a hard component, and a soft surface being cut by a hard sharp edge. It is also suggested that the component is approaching failure if the quantity of particles around 50 μm long increases. Rolling-element fatigue wear is also known as bearing wear. These particles are created by periodic contact of the bearing surface, generating spherical fatigue elements [12]. According to Vlcek et al. [14], several factors affect rolling-element fatigue, such as material, processing, and manufacturing, along with operating conditions. Normal rubbing wear and break-in particles were present at the 20k, 40k, and 50k mileages, while severe sliding wear was found at 20k, 30k, and 40k. Rubbing wear particles are generated by normal sliding wear in engines and machines; the flat platelets generated are typically as small as 5 microns and up to 15 microns [9,11]. The break-in period is associated with changes in the engine during the early period of operation, where it is affected by high friction, high blow-by, or high oil consumption [15]. Meanwhile, severe sliding wear particles can be identified by the parallel striation pattern on their surface, which occurs due to the relative speed between the components; the particle size is generally larger than 15 microns and can reach up to 30 microns [9,12]. From Table 1, scuffing wear and gear fatigue wear were the least common wear particles across the mileages. Scuffing wear particles arise from surface contact between metals affected by high-speed or high-load rotating surfaces. These particles are generated by several causes, such as excessive frictional heat, localized welding, and the characteristics of the contacting materials [16,17]. Meanwhile, gear fatigue wear particles share common characteristics with rolling-element bearing fatigue particles [10].
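To make the size-based interpretation above concrete, the sketch below encodes the particle-size ranges quoted in this section (rubbing platelets of roughly 5-15 μm, severe sliding particles larger than 15 μm and up to about 30 μm, and a failure warning when cutting-wear particles approach 50 μm in length) as a simple classifier. The thresholds follow the text; the function itself and the example measurements are hypothetical and are not part of the published analysis.

```python
def classify_particle(length_um: float, morphology: str) -> str:
    """Rough wear-type call for a single ferrogram particle.

    morphology: 'platelet', 'striated', 'wire', or 'sphere'.
    Thresholds follow the ranges quoted in the discussion above.
    """
    if morphology == "wire":
        # Thin, curved wire-like debris indicates cutting wear.
        return "cutting wear (WARNING: near failure)" if length_um >= 50 else "cutting wear"
    if morphology == "sphere":
        return "rolling-element fatigue"
    if morphology == "striated" and length_um > 15:
        return "severe sliding wear"            # parallel striations, >15 um up to ~30 um
    if morphology == "platelet" and 5 <= length_um <= 15:
        return "normal rubbing wear"            # flat platelets, 5-15 um
    return "unclassified"

# Hypothetical measurements from one slide.
particles = [(8, "platelet"), (22, "striated"), (55, "wire"), (3, "sphere")]
for length, shape in particles:
    print(f"{length:>3} um {shape:<9} -> {classify_particle(length, shape)}")
```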
Role of maternal characteristics and epidural analgesia on caesarean section rate in groups 1 and 3 according to Robson’s classification: a cohort study in an Italian university hospital setting Objective To investigate the role of maternal characteristics and epidural analgesia (EA) on caesarean section (CS) rates in selected groups by using the Robson 10-Group Classification System (RTGCS). Design Cohort study. Setting Department of Obstetrics and Gynaecology, Fondazione Policlinico Universitario ‘A. Gemelli’, Rome, Italy. Patients A total of 12 098 deliveries in periods I (1998–1999) and II (2010–2011). Main outcome measures CS rates in groups 1 and 3 of RTGCS. Results In group 1, 1144 (20%) patients were assigned to period I and 1302 (20.4%) to period II, while in group 3, 1587 (27.8%) were assigned to period I and 1502 (23.5%) to period II. CS rates were 16.4% and 23.1% in group 1 and 12.7% and 10.9% in group 3 in periods I and II, respectively. In group 1, significant and independent contributions to CS rate were provided by maternal age (p=0.018; OR 0.95 (95% CI 0.85 to 0.97)), body mass index (BMI) (p=0.022; OR 0.89 (95% CI 0.85 to 0.91)) and EA administration (p=0.037; OR 0.59 (95% CI 0.43 to 0.77)). In group 3, maternal age (p<0.001; OR 0.93 (95% CI 0.89 to 0.96)) and BMI (p=0.023; OR 0.98 (95% CI 0.96 to 0.99)) were found to be significantly associated with CS. Conclusions RTGCS is an effective tool for analysing changes in obstetric care, allowing for the recognition of maternal age, BMI and EA administration in the strategic planning for mitigation of CS rates in selected groups. * Discussion, page 12, line 53: "...such as changes in physician behaviour, or non-medical risk factors could interfere on the final decision-making process." This is interesting, could you please specify what you mean, in what way might physician behaviour have changed and why? Perhaps a reference is needed here. * Discussion, limitations: In the short summary/bullet points at the top you mention the retrospective design to be a limitation, however, you don't mention this is the limitation-section of the Discussion. If you consider the design to be a limitation, please comment on this in the Discussion-section. GENERAL COMMENTS The present study becomes strong as it has taken into account these factors and enhanced the utility of The Robson Ten Group Classification System ( RTGCS) in the efforts to reduce c -section rate more effectively. In the view of previous caesarean section as one of the major indications of caesarean section contributing to the rise in c -section rate, all efforts must be directed to prevent primary c-sections. The researchers in the present study targeted group 1 and group 3 of RTGCS, while considering the additional factors e.g. maternal age, Body Mass Index (BMI) and Epidural Analgesia (EA)by fixing the intensions to reduce primary c-sections. To summarise, it is an excellent effort on the part of researchers to blend the RTGCS and certain maternal characters with introduction of EA. More studies of this kind in different parts of the world not only from institutions but from private sector as well need to be undertaken. Study of different clinical indications in combination with RTGCS will narrow down the focus so as to improve the efforts to reduce c-section rate. VERSION 1 -AUTHOR RESPONSE Reviewer 1 #1. Abstract, study design: The authors describe the study as a retrospective, observational study. This does not very clearly describe the study design. 
Was it a cross-sectional or cohort study? Also, the terms prospective/retrospective are not recommended by the STROBE recommendations. Please specify the study design. Thank you for your comment. It's a cohort study. We included it in the title, as also suggested by the Editorial Office. #2. Introduction, page 4, line 56: The authors state "the role of ethnicity could be crucial on CS rate". It would be interesting if you could comment on this to make it clear to the reader: in what way is ethnicity associated with CS rates? A growing body of evidence can be cited in order to clarify the association between ethnicity and mode of delivery. We commented on it, as indicated. #3. Introduction, page 5, line 10: Please provide a reference on the association between CS and EDA. Added. #4. Method, page 6, line 16: It is stated that participant informed consent was obtained. How was this done, as data during the first period were from the late 90s? In the context of a university setting, at the time of admission patients receive information about the possibility of their personal data being used for possible (retrospective) studies. They accept by signing an authorization to process personal data, according to current law in Italy. We clarified this aspect in the text. #5. Methods, page 8, line 30: "qualitative data". I cannot see that this study includes any qualitative data (?), please specify or revise. We considered 'qualitative' data to be those used for the assignment to the groups of the classification system, as listed in the Method section. In an early analysis, we evaluated them individually, while in the final report we combined them for the construction of all ten groups. We amended the text. #6. Method: The authors elaborate on the practice of EDA; however, it would be valuable to know more about the hospital setting, such as referral pattern (what percentage are referrals?), staffing etc., to evaluate the emergency CS rate presented in the two low-risk groups in this study. Have there been any other temporal changes in the case-mix of the obstetric population at this hospital between the two periods? As reported in the Method section, the hospital is a university center, with admission of complicated pregnancy cases from lower levels of health care settings, requiring emergent CS if necessary. This can explain the higher CS rate in comparison with other settings. Excluding the EDA, no additional changes were made between the two periods in the analysis. The choice of the study period was related to the need to obtain results without any potential confounders (i.e., changes in management). #7. Results: It is interesting to see the reduction in CS rate in group 3. Is it statistically significant? Could you please comment on the potential reasons for this decrease in the Discussion? As a possible explanation, we could speculate that the combination of pain control by EA administration and a slight increase in the instrumental vaginal delivery rate seems to be effective for CS reduction, avoiding CS for maternal request during labor. #8. Results, page 9, line 40: I don't see how 255 of > 1000 patients in Group 1 and 136 of > 1000 patients in Group 3 can become > 80%? Please specify or revise. Many thanks for your careful observation. We revised the text adding the appropriate information (numerator/denominator for each group and its percentage), in order to clarify the results. #9. Discussion, page 12, line 53: "...such as changes in physician behaviour, or non-medical risk factors could interfere on the final decision-making process."
This is interesting, could you please specify what you mean, in what way might physician behaviour have changed and why? Perhaps a reference is needed here. Done. #10. Discussion, limitations: In the short summary/bullet points at the top you mention the retrospective design to be a limitation; however, you don't mention this in the limitations section of the Discussion. If you consider the design to be a limitation, please comment on this in the Discussion section. Added. Reviewer 2: The present study is strengthened because it has taken these factors into account and enhanced the utility of the Robson Ten Group Classification System (RTGCS) in the effort to reduce the c-section rate more effectively. In view of previous caesarean section being one of the major indications for caesarean section, contributing to the rise in the c-section rate, all efforts must be directed to preventing primary c-sections. The researchers in the present study targeted group 1 and group 3 of the RTGCS, while considering additional factors, e.g. maternal age, Body Mass Index (BMI), and Epidural Analgesia (EA), with the intention of reducing primary c-sections. To summarise, it is an excellent effort on the part of the researchers to blend the RTGCS and certain maternal characteristics with the introduction of EA. More studies of this kind in different parts of the world, not only from institutions but from the private sector as well, need to be undertaken. Studying different clinical indications in combination with the RTGCS will narrow the focus so as to improve the efforts to reduce the c-section rate. We agree with the reviewer and we are working in this direction in our ongoing research. We are grateful for the positive comment.
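The adjusted odds ratios with 95% confidence intervals reported in the abstract above (for maternal age, BMI, and EA administration) are the typical output of a multivariable logistic regression on per-delivery data. The sketch below shows the general calculation on a small synthetic dataset; the variable names, the invented data, and the use of statsmodels are illustrative assumptions and do not reproduce the authors' actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Synthetic per-delivery data: outcome cs (1 = caesarean section), plus covariates.
df = pd.DataFrame({
    "age": rng.normal(31, 5, n),            # maternal age, years
    "bmi": rng.normal(24, 4, n),            # body mass index
    "ea":  rng.integers(0, 2, n),           # epidural analgesia (0/1)
})
logit_p = -1.0 + 0.05 * (df["age"] - 31) + 0.08 * (df["bmi"] - 24) - 0.4 * df["ea"]
df["cs"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

# Multivariable logistic regression; exponentiated coefficients are adjusted ORs.
X = sm.add_constant(df[["age", "bmi", "ea"]])
fit = sm.Logit(df["cs"], X).fit(disp=False)
or_table = pd.DataFrame({
    "OR": np.exp(fit.params),
    "2.5%": np.exp(fit.conf_int()[0]),
    "97.5%": np.exp(fit.conf_int()[1]),
    "p": fit.pvalues,
}).drop("const")
print(or_table.round(3))
```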
Combined treatment with N-acetylcysteine and gefitinib overcomes drug resistance to gefitinib in NSCLC cell line

Abstract
We aimed to explore the molecular substrate underlying EGFR-TKI resistance and investigate the effects of N-acetylcysteine (NAC) on reversing EGFR-TKI resistance. In the current research, the effects of NAC in combination with gefitinib on reversing gefitinib resistance were examined using the CCK-8 assay, combination index (CI) method, Matrigel invasion assay, wound-healing assay, flow cytometry, western blot, and quantitative real-time PCR (qRT-PCR). The CCK-8 assay showed that the NAC plus gefitinib combination overcame EGFR-TKI resistance in non-small cell lung cancer (NSCLC) cells by lowering the half maximal inhibitory concentration (IC50). CI calculations demonstrated a synergistic effect between the two drugs (CI < 1). Matrigel invasion and wound-healing assays demonstrated a decrease in the migration and invasion ability of PC-9/GR cells after NAC and gefitinib treatment. Flow cytometry displayed enhanced apoptosis in the combination group. Western blot and qRT-PCR revealed increased E-cadherin and decreased vimentin in the combination group. When PP2 was administered with gefitinib, the same effects were seen. Our findings suggest that NAC could restore the sensitivity of gefitinib-resistant NSCLC cells to gefitinib via suppressing Src activation and reversing epithelial-mesenchymal transition.

KEYWORDS: epithelial-mesenchymal transition, gefitinib resistance, N-acetylcysteine, NSCLC, Src

phenotypes and epithelial-mesenchymal transition (EMT). [11][12][13][14] EMT is a universal phenomenon in various physiological and pathological processes. It is clear that epithelial cells display characteristics of mesenchymal cells, accompanied by upregulation of vimentin and N-cadherin and downregulation of E-cadherin, during EMT. EMT also contributes to tumor invasion, proliferation, metastasis, and therapy resistance to EGFR-TKIs. 15,16 As a consequence, targeting EMT might be a potential strategy to reverse or prevent EGFR-TKI resistance. N-acetylcysteine (NAC) is an effective antioxidant that has been widely used in anticancer investigations in recent years. Our previous studies have demonstrated that NAC could overcome gefitinib resistance mediated by cigarette smoke extract (CSE). However, whether NAC plays a similar role independently of smoke exposure is unclear; in this study, we explored the combined effect of NAC with gefitinib on gefitinib-resistant cells and the underlying mechanisms.

Cell culture
PC-9 gefitinib-sensitive cells (PC-9) 17,18 and gefitinib-resistant cells (PC-9/GR) were gifts from Dr Jian Zhang at Xijing Hospital, Fourth Military Medical University, China.
It is known that exon 19 deletion is one of the hallmark EGFR-activating mutations, and the PC-9 lung cancer cell line is characterized by an exon 19 deletion. The mutation profile for PC-9/GR is exon19(E746-A750)del. 19 Cells were cultured in RPMI 1640 medium (Hyclone, USA) with 10% fetal bovine serum (PAN, USA) at 37°C in a cell incubator containing 5% CO2. In addition, PC-9/GR cells were cultured in medium containing 10 nmol/L of gefitinib to maintain resistance.

Cell growth assay
Cell proliferation was evaluated with a CCK-8 kit (Dojindo Laboratories). In brief, cells (5 × 10³ cells) were seeded into 96-well plates, cultured overnight, and treated with various concentrations of drugs for 48 hours. Then, 10 µL of CCK-8 was added to each well and cells were incubated for 2 hours. Optical density (OD) was measured at 450 nm with a microplate reader.

Combination studies
Combination studies were performed as described previously. 20,21 On the basis of the median-effect analysis of Chou and Talalay (CalcuSyn software, Biosoft: Chou, 2010), the effects of the drugs were calculated using the CI method for each experimental condition. 22,23

Western blotting analysis
The precise procedure was in accordance with our previous methods. 24 In brief, proteins were separated by SDS-PAGE and transferred to PVDF membranes. After blocking with 5% fat-free milk or BSA, the membranes were incubated with primary antibodies overnight at 4°C, followed by incubation with secondary antibodies for 2 hours at room temperature. The protein bands were then detected.

Cell invasion assay
Sixty microliters of Matrigel (Becton Dickinson) was added to the center of each chamber (Millipore). The cells were seeded in the upper chamber of the insert with or without drugs and incubated for 24 hours at 37°C with 5% CO2. The upper surface of the insert was then scraped, and the cells on the lower surface of the membrane were fixed with methanol and stained with 0.1% crystal violet solution (Becton Dickinson). Cells on the bottom of the membrane were counted using a light microscope.

Measurement of cell migration
Cells were plated into six-well plates to create a confluent monolayer of up to 80% cell confluence and then starved for 24 hours in serum-free medium, followed by scratching with a sterile 200 µL tip to manually create a wound. The cells were washed with PBS and cultured in medium supplemented with NAC, gefitinib, or a combination of both. Images were acquired with an inverted optical microscope after creating the wound.

Measurement of apoptotic rate
Apoptotic cells were determined with an Annexin V-FITC/propidium iodide (PI) kit according to the manufacturer's guidelines. The specific procedure followed our previous work. 24 Briefly, PC-9/GR cells were treated with NAC, gefitinib, or a combination of both for 48 hours. Afterward, cells were harvested and stained with Annexin V-FITC and PI according to the manufacturer's recommendations (Beyotime Institute of Biotechnology).

Statistical analysis
The data are represented as mean ± standard deviation (SD) or 95% confidence interval (CI) from three independent experiments. GraphPad Prism software (version 5.0, GraphPad Software) was used for statistical analysis. Comparisons between two independent treatment groups were analyzed by unpaired, two-tailed Student's t test. One-way analysis of variance (ANOVA) was used to analyze variance among multiple groups. Statistical significance was assumed at P < .05.
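The dose-response and combination analyses described above (CCK-8 viability, IC50 estimation, and the Chou-Talalay combination index) reduce to a small amount of arithmetic once the fractional inhibition (Fa) at each dose is known. The sketch below fits the median-effect equation for each single agent and computes CI for one combination dose pair; the numbers are hypothetical and the code is an illustration of the method, not the CalcuSyn software used in the study.

```python
import numpy as np

def median_effect_fit(doses, fa):
    """Fit the Chou median-effect equation: log(fa/fu) = m*log(D) - m*log(Dm).

    Returns (m, Dm): the slope and the median-effect dose (the dose giving fa = 0.5,
    i.e. an IC50-like quantity for growth inhibition).
    """
    doses, fa = np.asarray(doses, float), np.asarray(fa, float)
    y = np.log10(fa / (1.0 - fa))
    m, intercept = np.polyfit(np.log10(doses), y, 1)
    dm = 10 ** (-intercept / m)
    return m, dm

def dose_for_effect(fa, m, dm):
    """Dose of a single agent required to reach fractional effect fa."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

# Hypothetical single-agent dose-response data (dose units arbitrary; fa = fraction inhibited).
m1, dm1 = median_effect_fit([0.5, 1, 2, 4, 8], [0.10, 0.22, 0.45, 0.68, 0.85])   # e.g. drug 1
m2, dm2 = median_effect_fit([1, 2, 4, 8, 16],  [0.08, 0.18, 0.40, 0.63, 0.82])   # e.g. drug 2

# One combination point: doses d1 and d2 given together produced fractional effect fa_combo.
d1, d2, fa_combo = 1.0, 2.0, 0.55
ci = d1 / dose_for_effect(fa_combo, m1, dm1) + d2 / dose_for_effect(fa_combo, m2, dm2)
print(f"IC50-like Dm: drug1={dm1:.2f}, drug2={dm2:.2f}; CI at fa={fa_combo}: {ci:.2f} "
      f"({'synergy' if ci < 1 else 'additivity/antagonism'})")
```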
CalcuSyn-based analysis of NAC and gefitinib combination treatment
The constant combination ratio experiments were carried out at an equipotency ratio approximating the individual IC50s (IC50 NAC : IC50 gefitinib ≈ 2:1), which ensured that the effect of each drug in the combination was roughly equal. Figure 1A shows the dose-response curves for PC-9/GR cells exposed to NAC, gefitinib, and both. CI values of the group treated with the combination of both drugs at different fractional cell growth inhibition (Fa) levels are shown in Figure 1B. CI values of less than 1 were obtained for the combination group, demonstrating that the two drugs have a synergistic effect on growth inhibition. Then, PC-9/GR cells were treated with 5 mmol/L of NAC plus different concentrations of gefitinib. We found that the IC50 of gefitinib was 0.3986 μmol/L in the combination group, which was lower than that of gefitinib alone (P < .001; Figure 1C).

Combination of NAC and gefitinib inhibited migration and invasion of PC-9/GR cells
To investigate whether NAC (5 mmol/L) in combination with gefitinib (2 μmol/L) had an impact on the biological behavior of PC-9/GR cells, we performed migration and invasion assays. After 48 hours of treatment with NAC or gefitinib alone or in combination (NAC + gefitinib group), the number of cells passing through the Matrigel decreased in the NAC + gefitinib group compared to that in either single-agent group (Figure 2A). The cell migration assay showed that the distance of cell migration was shortest in the NAC + gefitinib group (Figure 2B). These data illustrated that NAC in combination with gefitinib could inhibit the invasion and migration of PC-9/GR cells.

FIGURE 2 CalcuSyn-based analysis of the N-acetylcysteine (NAC) and gefitinib combination. A, Cells were pretreated with NAC or gefitinib alone or in combination; the number of PC-9/GR cells passing through the Matrigel was lower in the combination group than in the other groups. B, Wound healing assays showing the distance of cell migration in the different groups at 0 h and 24 h. NAC + gefitinib vs control, *P < .001; NAC + gefitinib vs NAC, †P < .01; NAC + gefitinib vs gefitinib, ‡P < .01. Scale bars: 100 µm. NAC, 5 mmol/L; gefitinib, 2 μmol/L.

NAC in combination with gefitinib promoted apoptosis in PC-9/GR cells
Furthermore, we examined the apoptosis of PC-9/GR cells under the different treatments using flow cytometry. NAC + gefitinib caused more apoptotic cells than NAC or gefitinib alone did (P < .01; Figure 3A,B). Bax and Bcl-2 are known as pro-apoptotic and anti-apoptotic molecules, respectively. The protein level of Bcl-2 was decreased in the NAC + gefitinib group, while treatment with NAC or gefitinib alone led to higher expression of Bax (Figure 3C). A similar trend in Bcl-2 and Bax protein levels was observed in the combination group compared to the other groups. These results demonstrated that NAC in combination with gefitinib promoted apoptosis of PC-9/GR cells.

NAC in combination with gefitinib reversed EMT and inhibited Src activation in PC-9/GR cells
We next explored the underlying mechanism that led to the superior efficacy of NAC in combination with gefitinib. Western blot analysis showed that NAC in combination with gefitinib facilitated E-cadherin expression and inhibited vimentin expression in PC-9/GR cells. In addition, the protein level of p-Src was decreased in the combination group (Figure 4A). In qRT-PCR analyses, expression of E-cadherin increased and vimentin decreased in the NAC + gefitinib group (Figure 4B).
Then, we treated PC-9/GR cells with PP2 (a potent inhibitor of Src, 10 µmol/L) in combination with gefitinib, which displayed results similar to those of NAC + gefitinib (Figure 4C,D). These results indicated that NAC in combination with gefitinib could inhibit Src activation and reverse EMT.

FIGURE 3 N-acetylcysteine (NAC) in combination with gefitinib induced apoptosis of PC-9/GR cells. A and B, NAC in combination with gefitinib induced apoptosis. C, The level of Bcl-2 protein expression was low, while Bax expression was high. NAC + gefitinib vs control, *P < .001; NAC + gefitinib vs NAC, †P < .01; NAC + gefitinib vs gefitinib, ‡P < .01.

DISCUSSION
NSCLC patients with EGFR mutations initially respond well to EGFR-TKIs. However, these patients inevitably acquire drug resistance over time. Therefore, it is necessary to develop novel strategies to delay or overcome the acquired resistance to EGFR-TKIs. One interesting approach would be combination treatment with an alternative drug in addition to gefitinib. Drug combinations are widely used and have become a primary treatment modality for cancer. 26 A combination of gefitinib with metformin shows a synergistic effect and increases the sensitivity of patients with lung cancer to gefitinib. 27 In this study, we used NAC in combination with gefitinib, which could effectively overcome drug resistance to gefitinib. NAC is a precursor of glutathione and a powerful antioxidant used in anti-tumor research. It has been reported that NAC inhibits the proliferation and invasive behavior of human cancer cells in vitro, including colorectal cancer, bladder cancer, prostate cancer, tongue cancer, and lung carcinoma. [28][29][30][31] Our previous study showed that NAC could overcome gefitinib resistance induced by cigarette smoke extract. NAC has been shown to inhibit the growth of lung carcinomas by reducing cell proliferation and facilitating apoptosis in tobacco carcinogen-treated A/J mice. 32 NAC exerts an inhibitory effect on tumor growth via modulation of EGFR/AKT signaling and HBP1 expression in EGFR-overexpressing oral cancer. 33 It can be inferred that NAC has promising potential to be a novel anticancer agent.
Src is the first oncogene ever identified, and its family members have also been recognized as potential targets in cancer therapy. Activation of Src promotes numerous pathological processes, including invasion, migration, proliferation, and angiogenesis, in a variety of cancers. 41 Increased Src activity boosts the EMT process, while Src inhibition suppresses it. Moreover, Src-mediated EMT is involved in the chemotherapy resistance of cancers. 42,43 Some studies concentrate on the combination of EGFR inhibitors and Src inhibitors. A previous study has revealed that the efficacy of Src inhibitors combined with EGFR inhibitors is synergistic, and Src inhibitors could overcome gefitinib resistance in NSCLC with EMT. 44 Our previous studies have shown that Src is involved in cigarette smoke-induced EMT and EGFR-TKI resistance, and that Src inhibition sensitizes resistant cells to gefitinib. However, further investigations are needed to determine whether this combination could delay drug resistance in vivo and prolong patient progression-free survival (PFS) and overall survival (OS). The lack of an in vivo study is an obvious limitation of the current work, and we will conduct in vivo experiments in animal models to verify the cytological results in this study. We have not further investigated the upstream and downstream pathways of the apoptotic proteins, which is another limitation of this study. To prove the synergistic effect of this combination therapy for EGFR-mutant lung cancer, future studies should demonstrate it in other EGFR-mutant lung cancer cell lines. In addition to gefitinib, osimertinib has obtained positive results from the phase 3 AURA trial and phase 3 FLAURA trial. 45,46 With the approval of osimertinib in Nov 2015 (https://www.fda.gov/), it will be clinically meaningful if osimertinib has a similar effect when combined with NAC. In conclusion, we used the combination index value to evaluate the efficacy of NAC and gefitinib combination treatment. Our results demonstrated that NAC had a synergistic effect with gefitinib. These two drugs used in combination overcame the resistance of PC-9/GR cells to gefitinib by inhibiting Src activation and reversing EMT. Thus, these findings provide novel insight into overcoming gefitinib resistance and a potential strategy for NSCLC patients.

FIGURE 4 NAC in combination with gefitinib reversed EMT and inhibited Src activation in PC-9/GR cells. A and B, NAC in combination with gefitinib inhibited Src activation and reversed EMT. C and D, PP2 in combination with gefitinib inhibited Src activation and reversed EMT in PC-9/GR cells. NAC + gefitinib vs control, *P < .001; NAC + gefitinib vs NAC, †P < .01; NAC + gefitinib vs gefitinib, ‡P < .01; PP2 + gefitinib vs control, §P < .001; PP2 + gefitinib vs NAC, ||P < .01; PP2 + gefitinib vs gefitinib, ¶P < .01. PP2, 10 µmol/L.
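The group comparisons quoted in the figure legends above (e.g., NAC + gefitinib vs control, *P < .001) follow the Statistical analysis section: one-way ANOVA across multiple groups and unpaired, two-tailed Student's t tests between pairs of groups. A minimal sketch of that arithmetic on hypothetical triplicate apoptosis percentages is shown below; the numbers are invented for illustration and are not the study's data.

```python
from scipy import stats

# Hypothetical apoptotic-cell percentages from three independent experiments per group.
groups = {
    "control":         [4.8, 5.2, 5.0],
    "NAC":             [9.1, 8.7, 9.5],
    "gefitinib":       [12.3, 11.8, 12.9],
    "NAC + gefitinib": [24.6, 23.9, 25.4],
}

# One-way ANOVA across all groups, then pairwise unpaired two-tailed t tests vs the combination.
f_stat, anova_p = stats.f_oneway(*groups.values())
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {anova_p:.4g}")
for name, values in groups.items():
    if name == "NAC + gefitinib":
        continue
    t_stat, p = stats.ttest_ind(groups["NAC + gefitinib"], values)  # unpaired, two-tailed
    print(f"NAC + gefitinib vs {name}: t = {t_stat:.2f}, p = {p:.4g}")
```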
Glycosylation of H4 influenza strains with pandemic potential and susceptibilities to lung surfactant SP-D We recently reported that members of group 1 influenza A virus (IAV) containing H2, H5, H6, and H11 hemagglutinins (HAs) are resistant to lung surfactant protein D (SP-D). H3 viruses, members of group 2 IAV, have high affinity for SP-D, which depends on the presence of high-mannose glycans at glycosite N165 on the head of HA. The low affinity of SP-D for the group 1 viruses is due to the presence of complex glycans at an analogous glycosite on the head of HA, and replacement with high-mannose glycan at this site evoked strong interaction with SP-D. Thus, if members of group 1 IAV were to make the zoonotic leap to humans, the pathogenicity of such strains could be problematic since SP-D, as a first-line innate immunity factor in respiratory tissues, could be ineffective as demonstrated in vitro. Here, we extend these studies to group 2 H4 viruses that are representative of those with specificity for avian or swine sialyl receptors, i.e., those with receptor-binding sites with either Q226 and G228 for avian or recent Q226L and G228S mutations that facilitate swine receptor specificity. The latter have increased pathogenicity potential in humans due to a switch from avian sialylα2,3 to sialylα2,6 glycan receptor preference. A better understanding of the potential action of SP-D against these strains will provide important information regarding the pandemic risk of such strains. Our glycomics and in vitro analyses of four H4 HAs reveal SP-D-favorable glycosylation patterns. Therefore, susceptibilities to this first-line innate immunity defense respiratory surfactant against such H4 viruses are high and align with H3 HA glycosylation. Introduction The prevalence and subtype distribution of the low-pathogenicity avian influenza virus (LPAIV) differ across bird taxa. A crucial factor in the epidemiology of these viruses is the ability to transmit between and within different host taxa (van Dijk et al., 2018). Although LPAIV infections are often asymptomatic, these viruses can serve as progenitors for reassortment and recombination for the eventual rise of high-pathogenicity avian influenza virus (HPAIV), which can lead to significant disease burden in wild and domestic fowl and swine. The AIV may also transmit to humans. In the 1990s, HPAIV H5N1 spread through 60 countries, causing devastating outbreaks of avian influenza with over 800 cases of human infections and 450 deaths (World Health Organization, 2020). A novel LPAIV H7N9, first discovered in China in 2013, caused severe infection in humans with nearly 1,600 laboratoryconfirmed cases reported and 39% mortality as of April 2017 Liu et al., 2014). Four years after this outbreak, H7N9, with mutations shown to increase virulence in chickens (HPAIV), was recovered from two human patients in China . H1, H2, and H3 IAVs have adapted to humans, and pandemics of each are hallmarks of these zoonotic transformations including the 1918 H1N1 Spanish flu, 1957 H2N2 Asian flu, 1968 H3N2 Hong Kong flu (Capua and Alexander, 2002), and 2009 H1N1 swine flu (Shapshak et al., 2011). Other IAVs have caused sporadic cases in humans such as H5N1 and H7N9 described previously. In the laboratory setting, four mutations in H5N1 have been identified that facilitate infection of ferret and human hosts (Herfst et al., 2012;Lu et al., 2013;Zhang et al., 2013;Shi et al., 2014). 
Two of these mutations, Q226L and G228S (H3 numbering), are in the receptorbinding domain of HA, leading to enhanced binding to sialylα2,6capped receptors, which are abundant in swine (Bateman et al., 2010), ferret (Jia et al., 2014), and human respiratory tissues (Jia et al., 2014). Although no such mutations have been found in the H5N1 viruses in nature, this situation clearly shows their potential for adaptation to alternative hosts, including humans, and indeed, analogous mutations have been detected in the H4 viruses covered herein. Live poultry markets (LPMs) are considered a major source of IAV dissemination, reassortment, and interspecies transfers (Liu et al., 2003;Bi et al., 2016). The IAV H4 subtype, which infects domestic ducks, is known to circulate in LPMs in China (Kawaoka et al., 1988) (Wu et al., 2014a) and has been isolated from swine suffering from pneumonia in Canada (Karasin et al., 2000). Transmission to swine populations in southeastern China and in humans has also been reported (Ninomiya et al., 2002) (Bateman et al., 2008;Kayali et al., 2011). A recent phylogenetic analysis of 3,020 cloacal swabs from apparently healthy ducks resulted in the identification of 107 influenza strains with eight HA subtypes and six NA subtypes (Deng et al., 2013). A second phylogenetic analysis of 3,210 cloacal swabs resulted in the identification of 109 strains with 10 HA subtypes (H1, H2, H3, H4, H5, H6, H7, H9, H10, and H11) and eight NA subtypes (N1, N2, N3, N4, N6, N7, N8, and N9) for a total of 21 IAV subtype combinations. In both studies, the most abundant HA subtype was H4, and based on a comparison of all genes, the reassortment (gene exchange) between the different subtypes was widespread. The findings showed that H4 and H3 viruses have reassorted and that H4 HAs have evolved considerably at the receptor-binding site (RBS) (Deng et al., 2013). A recent phylogenetic tree of H4 viruses was analyzed by Quan et al. (2018). Among key RBS residues, Q226 and G228 (H3 numbering) are associated with an avian sialylα2,3 receptor preference. However, H4 IAV isolates from Canadian swine have been reported to contain L226 and S228, which is predicted to shift the RBS to a sialylα2,6 preference, both a swine and human tissue receptor (Wu et al., 2015). We have previously reported that this prediction is borne out (Song et al., 2017), as X-ray crystallographic analysis revealed that the Q226L and G228S mutations reduce sialylα2,3 ligand contacts, whilst increasing sialylα2,6 contacts in the RBS. Surface plasmon resonance and tissue binding analyses targeting sialylα2,3 or sialylα2,6 receptors supported the structural findings. The abilities of H4 to resort with IAV H3, which is prevalent in humans, to evolve to recognize swine and human receptors and to infect humans, albeit sporadically to date, all suggest that H4 has the potential to infect humans in a pandemic event. Are these influenza H4 viruses susceptible to lung surfactant protein D (SP-D), a primary innate defense surfactant collectin? Indeed, a key glycosylation site in the head of H3 required for interaction with SP-D co-aligns with that of H4 align as we will demonstrate herein, and this site is centered at N165 (H3 numbering). Our previous work demonstrated that this glycosylation site can be either primarily high-mannose or primarily complex N-glycan subtype groups according to the IAV group, at least in part, and that subtype is the key to the SP-D interaction. 
IAV HAs can be classified into two broad groups based on phylogenetic analysis. Group 1 contains H1, H2, H5, H6, H8, H9, H11, H12, H13, H16, H17, and H18, while group 2 contains H3, H4, H7, H10, H14, and H15 (Wu et al., 2014b; Allen and Ross, 2018). We recently reported the susceptibilities of group 1 LPAIV to lung SP-D. H2, H5, H6, and H11 whole virus and recombinant HAs were investigated (Parsons et al., 2020). Glycoproteomics analysis of these HAs revealed that they all carried complex glycans at the glycosylation site corresponding to H3 N165 ("N165"). H3 HAs from group 2, on the other hand, contain almost exclusively high-mannose N-glycans at the N165 site in the globular head region, which is required for SP-D recognition of H3 HA and subsequent elimination from the lung (Hartshorn et al., 1994; Hartshorn, 2010). In our work, the "N165" glycosite of H2, H5, H6, and H11 HAs was located more forward on its respective resident beta strand compared to that of H3 viruses. This situation results in fewer intra- and inter-subunit contacts than the H3 N165 resident N-glycan core. This orientation of the glycosite predicts more solvent exposure in the group 1 LPAIV strains studied, as well as higher mobility and more exposure to ER and Golgi glycosylation machinery. The predicted outcome is more processed glycans at that site, yielding not high-mannose glycans but complex ones, which are not receptors for SP-D. This prediction was confirmed by our glycoproteomics analysis (Parsons et al., 2020). The analyzed group 1 LPAIV HAs had almost completely complex glycan at "N165." Experiments with recombinant versions of SP-D also demonstrated that, compared to HA of H3 strains (group 2), HAs of all group 1 LPAIV strains tested (H2, H5, H6, and H11) were not ligands for SP-D (Parsons et al., 2020). Constructs yielding group 1 HAs with only high-mannose glycan at "N165" imparted SP-D sensitivity, demonstrating that the SP-D interaction can be conferred by high-mannose glycan at "N165" in group 1 IAV. The prediction would be that if the LPAIV infected humans through gain-of-function adaptation, the resulting IAV would potentially be highly pathogenic, as it would not be acted upon efficiently by SP-D in the lung. As SP-D is a primary innate immunity factor (Reading et al., 2007), this situation could be detrimental. Indeed, HPAIV H5 infections in humans have occurred and demonstrate high mortality (Govorkova et al., 2005; de Jong et al., 2006; Chen et al., 2007), and those H3 strains that have lost key high-mannose glycans in the head of HA become more pathogenic in model systems (Hartshorn et al., 2008). Our glycoproteomics work with the group 1 LPAIV and group 2 H3 HAs revealed a high correlation between "N165" location, glycan subtype, susceptibility to surfactant SP-D, and the presence or absence of key residues in the 220 loop, especially W222 or its alternative R222 (H3 numbering). Upon analysis of representative H4 sequences (Supplementary Figures S1, S2) containing the avian Q226/G228 and swine L226/S228 receptor region amino acid patterns (H3 numbering as per the work of Bateman et al. (2008), Wu et al. (2015), and Song et al. (2017)), which dictate avian and swine receptor preferences, respectively (Parsons et al., 2020), we noted that, in all cases, the H4 HA N165 placement aligned with that of H3 HA. Unlike H2, H5, H6, and H11, the corollary glycosite "N165" in H4 is recessed back in its resident beta strand like H3, and is thus less solvent exposed.
However, some H4 strains, such as the duck strain studied here, have L222, whilst the swine and teal H4 IAVs, also studied here, contain W222. Therefore, as the position on the beta strand and the presence or absence of key amino acid residues influence the glycosylation in this region, the question is: what is the glycan subtype in the H4 HAs? In addition to the demonstrated shift in receptor preference dictated by the evolution of the Q226L and G228S mutations, interactions of the N165 glycan with key residues, including either L222 or W222, may dictate the glycan subtype and, therefore, sensitivity to SP-D. The status of RBS preference for sialylα2,6 glycans is considered a strong marker for pathogenicity. Here, we investigate the status of site N165 as a second possible pathogenicity marker based on predicted interactions with SP-D, and we test the susceptibility in vitro with SP-D constructs.
Chemicals and reagents
HyperSep C18 and porous graphite carbon (PGC) cartridges with 100-mg bed weight were purchased from Thermo Fisher Scientific Inc. (Waltham, MA). TSKgel Amide-80 particles were purchased from Tosoh Bioscience LLC (Montgomeryville, PA). Sequencing-grade modified trypsin was purchased from Promega Corp. (Madison, WI). Peptide N-glycosidase F (PNGase F) was purchased from New England BioLabs Inc. (Ipswich, MA). Iodomethane, dimethyl sulfoxide (DMSO), sodium hydroxide beads, and other chemicals were purchased from Sigma-Aldrich (St. Louis, MO, USA); solvents were of high-performance liquid chromatography (HPLC) grade or higher. All other reagents were American Chemical Society (ACS) grade or higher.
Whole-virus glycopeptide preparation
Of the total mass of each of the four egg-grown whole-virus samples, 2 mg was processed. Upon reconstitution, enough dry urea to equal 8 M in the existing volume was added, resulting in concentrations of about 6 M. Dithiothreitol (DTT) was added to reach 5 mM, and the samples were incubated for 3 h at 37°C. After cooling, iodoacetamide was added to a concentration of 15 mM, and the samples were stored in the dark at room temperature for 30 min. The reaction was quenched by increasing the concentration of DTT to a total of 30 mM. Using a 10-kDa MWCO membrane, the ~3 mL samples were dialyzed three times into 500 mL of 50 mM ammonium bicarbonate, pH 8.0 (ammonium bicarbonate (ABC) buffer). The samples were concentrated using 10-kDa MWCO spin filters. The final amount of the modified protein was approximately 100 μg per sample, as measured by Bearden's assay. Trypsin was added at an enzyme:protein ratio of 1:35 (w/w), and the samples were incubated at 37°C overnight. After collecting LC/MS data on the trypsin-cleaved, HILIC-purified sample, ABC buffer (50 mM ammonium bicarbonate, pH 8.0) was added to the remaining 12 μL of each sample to reach a concentration of 50 mM, and chymotrypsin was added at an enzyme:protein ratio of 1:20. The sample was incubated at 37°C overnight, and data were collected the next day.
N-glycan release
About 20 μg of each glycopeptide preparation was dried and resuspended in 50 μL of ¹⁸O water, or normal water (sw15 only), with 50 mM ammonium bicarbonate, pH 8.0. The samples were heated for 10 minutes at 95°C to inactivate trypsin. Glycans were then released with 10 µUnits/μL of PNGase F overnight at 37°C.
Purification of deglycosylated peptides and free glycans
Deglycosylated peptides were captured using C18 cartridges preconditioned with 1 mL of ethanol and then 1 mL of water.
Glycans were eluted with 3 mL of deionized water, and peptides were subsequently eluted with 60% acetonitrile (ACN)/0.1% trifluoroacetic acid (TFA), dried in a SpeedVac concentrator, and resuspended in 15 μL of water for nanoLC-MSE analysis.
Enrichment of glycopeptides
Intact glycopeptides were enriched from the trypsin cleavage mixture with TSKgel Amide-80 hydrophilic interaction liquid chromatography (HILIC) resin as described previously (An and Cipollo, 2011). In brief, 200 μg of resin (400 μL of wet resin) in a 1 mL Supelco fritted column was washed with 1 mL of 0.1% TFA in water, followed by equilibration with 1 mL of 80% ACN with 0.1% TFA. Trypsin-treated samples were diluted with pure ACN and 0.1% TFA to a final ACN concentration of 75% to avoid precipitation. The samples were loaded, and the run-through was re-applied to maximize capture. The columns were washed three times with 1 mL of 80% ACN/0.1% TFA, then sequentially eluted with 1 mL of 60% ACN/0.1% TFA and 1 mL of 40% ACN/0.1% TFA. The eluents were combined, dried by vacuum centrifugation, and then resuspended in 25 μL of MS-grade water for analysis by nanoLC-MSE.
Permethylation of free N-glycans
The samples were permethylated following the protocol of Ciucanu and Kerek (1984) with modifications as described previously (Parsons et al., 2017).
MALDI-TOF analysis of permethylated N-glycans
Permethylated glycans were spotted in triplicate on a hydrophobic-surface 384-circle μFocus plate (Hudson Surface Technology) as follows: each spot was pre-treated with 1 μL of DHB matrix solution (20 mg/mL 2,5-dihydroxybenzoic acid in 50% ACN/50% water with 1 mM sodium acetate) and air-dried. The permethylated glycans were resuspended in 50% ACN and mixed 1:1 (v/v) with the DHB solution on each spot and dried. Spotted surfaces were recrystallized at 50°C upon the addition of 1 μL of 100% ethanol. A total of 4,000 shots were collected and summed from each spot using a Bruker AutoFlex MALDI-ToF/ToF. A mass ladder of permethylated maltooligosaccharides was used as an external calibrant. Data were calibrated and smoothed, and the baseline was subtracted, using FlexAnalysis 3.4 (Bruker Daltonics); spectra were automatically assigned using AssignMALDI (Parsons and Cipollo, 2023). AssignMALDI uses a glycan library compiled from public glycan databases.
NanoLC-MSE analysis of glycopeptides and peptides
Approximately 2-4 μg of the prepared glycopeptide or deglycosylated peptide sample was injected three separate times onto a C18 column (BEH nanocolumn, 100 μm i.d. × 100 mm, 1.7 μm particles; Waters Corporation). Parameters were as described previously (Parsons et al., 2017). Initial calibration of the Waters SYNAPT G2 HDMS system (Waters Corp., Milford, MA) was performed in MS2 mode using Glu-fibrinopeptide B in 50% ACN/0.1% TFA.
Data analysis for peptide and glycopeptide information
NanoLC-MSE data were processed using BiopharmaLynx 1.3x (Waters) and GLYMPS (in-house software; Parsons et al., 2017; An et al., 2019; Parsons et al., 2019; Parsons et al., 2020). Settings for trypsin-digested peptides were one missed cleavage, fixed cysteine carbamidomethylation, variable methionine oxidation, and variable N-glycan modifications. After initial processing by BiopharmaLynx to assign fragment ions to the correct precursor ions, GLYMPS was used to automatically assign the spectra with a building-block glycan database as described previously (Parsons et al., 2017).
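As a concrete illustration of these search settings, the sketch below collects them into a single configuration object and shows how a ppm tolerance translates into an m/z acceptance window. GLYMPS and BiopharmaLynx are not driven by this code; the parameter names and the helper function are hypothetical stand-ins for the settings described above.

```python
# Hypothetical summary of the glycopeptide search settings described in the text.
# Parameter names are illustrative only; they are not the actual GLYMPS/BiopharmaLynx API.

search_settings = {
    "enzyme": "trypsin",
    "max_missed_cleavages": 1,
    "fixed_modifications": ["carbamidomethyl (C)"],
    "variable_modifications": ["oxidation (M)", "N-glycan (N, building-block library)"],
    "mass_tolerance_ppm": 30.0,   # tolerance used for glycopeptide assignment (see below)
}

def ppm_window(mz: float, ppm: float) -> tuple[float, float]:
    """Return the (low, high) m/z window corresponding to a ppm tolerance."""
    delta = mz * ppm / 1e6
    return mz - delta, mz + delta

if __name__ == "__main__":
    lo, hi = ppm_window(1200.5678, search_settings["mass_tolerance_ppm"])
    print(f"Accept precursor matches between {lo:.4f} and {hi:.4f} m/z")
```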
Mass accuracy was set in GLYMPS to 30 ppm, and assignments were based on (1) the presence of a core fragment (peptide, peptide + HexNAc, peptide + HexNAc2, peptide + dHex1HexNAc2, or peptide + Hex1HexNAc2), (2) the presence of three or more assigned glycopeptide fragments, (3) the presence of three or more assigned peptide fragments (Liu et al., 2014), (4) the presence of the assignment in at least two out of three spectra, and (5) the existence of the glycan in the GlycoSuiteDB. For the labeled SP-D preparation, MALDI-TOF analysis (Bruker Autoflex™ Speed) in the linear positive mode, using sinapinic acid (10 mg/mL in 50% acetonitrile) as the matrix, verified that at least one, and up to five, lysines were labeled per monomer when the reaction was carried out with a label concentration 20 times that of the protein. The protein was mixed with 0.45 mL of phosphate-buffered saline (PBS) and concentrated three times using a 0.5 mL, 10-kDa MWCO centrifugal filter (Amicon) to exchange the buffer, concentrate the protein, and remove excess label. Binding of different influenza viral strains to biotinylated SP-D was analyzed at 25°C using a ProteOn surface plasmon resonance biosensor (Bio-Rad Labs). In some cases, the viruses were first digested using endoglycosidase F1. In brief, 0.03 U of Endo F1 and 10 μL of 250 mM sodium acetate, pH 4.5, were added to approximately 25 μg of antigen with 33.5 μL of deionized water. The reaction mixtures were incubated at 37°C for 1 h and then dialyzed against PBS. Biotinylated SP-D was coupled to an NLC sensor chip at 350 resonance units (RUs) in the test flow cells. Three-fold serial dilutions (30-, 90-, and 270-fold) of freshly prepared influenza virus samples in PBS containing 10 μM neuraminidase inhibitors (oseltamivir and zanamivir) were injected at a flow rate of 50 μL/min (120-s contact time), followed by dissociation for 600 s. The flow was directed over a mock surface to which no protein was bound, followed by the SP-D-coupled surface. Responses from the SP-D surface were corrected for the response from the mock surface and for responses from a separate, buffer-only injection. Kinetic data analyses were performed to calculate the apparent affinity constant for the interaction between the influenza virus and SP-D using Bio-Rad ProteOn Manager software (version 2.0.1). Viral particle counts were all between 1 × 10⁸ and 2 × 10⁹ vp/mL.
Protein sequence alignment analysis
Previously, we compared seasonal H3 HAs, from group 2, to H2, H5, H6, and H11 HAs from group 1 IAV (Parsons et al., 2020). As described above, we discovered that the local placement, contacts, and orientation of glycosite "N165" had a dramatic effect on the N-glycan subtype present at the glycosite. Here, we compared the H3, H4, and other group 2 HA sequences (Supplementary Figures S1, S2). We found that the analogous "N165" glycosite of H4 (N162 in H4 numbering) aligns with the H3 position rather than with group 1 strains such as H5. This was the case for all four H4 HAs analyzed in this study (Supplementary Figure S2) and is also the case for the consensus sequence derived from all 1,117 non-redundant H4 sequences in the flu database (Supplementary Figure S1). As previously reported, the H3 HA N165 glycan interacts strongly with the neighboring subunit 220 loop. Multiple contacts between the inner glycan core and W222 were observed.
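Contacts of this kind can be enumerated directly from a deposited structure. The sketch below uses Biopython to count heavy-atom pairs within a distance cutoff between the N165-linked glycan and the neighbouring subunit's 220 loop; the file name, chain identifiers and the 4 Å cutoff are assumptions for illustration, not the criteria behind Supplementary Table S2.

```python
# Sketch: count heavy-atom contacts between an N165-linked glycan and the
# neighbouring subunit's 220 loop in an HA structure. File name, chain IDs and
# the 4 Å cutoff are illustrative assumptions.
from Bio.PDB import PDBParser, NeighborSearch

GLYCAN_RESNAMES = {"NAG", "BMA", "MAN"}   # GlcNAc, beta-mannose, mannose
LOOP_RANGE = range(219, 229)              # 220 loop, H3 numbering (approximate span)

parser = PDBParser(QUIET=True)
model = parser.get_structure("HA", "ha_trimer.pdb")[0]   # hypothetical PDB file

# Glycan atoms carried by chain A (the subunit bearing N165).
glycan_atoms = [a for res in model["A"] if res.get_resname() in GLYCAN_RESNAMES
                for a in res]
# 220-loop protein atoms of the neighbouring subunit (chain B).
loop_atoms = [a for res in model["B"]
              if res.id[1] in LOOP_RANGE and res.get_resname() not in GLYCAN_RESNAMES
              for a in res]

ns = NeighborSearch(loop_atoms)
contacts = {(g.get_parent().get_resname(), g.get_name(),
             p.get_parent().get_resname(), p.get_parent().id[1], p.get_name())
            for g in glycan_atoms for p in ns.search(g.coord, 4.0)}

for c in sorted(contacts):
    print(c)
print(f"{len(contacts)} glycan/220-loop atom pairs within 4.0 Å")
```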
We note that, hereafter, H3 numbering is used, to be consistent with previous publications describing the recent changes in the H4 RBS (Shi et al., 2016; Song et al., 2017). The two teal and the swine H4 strains studied here contain W222. However, the duck strain contains L222. Not unexpectedly, the swine sequence also contains L226 and S228, which have been shown to shift the RBS to a sialylα2,6 preference, while all the avian sequences have Q226 and G228, consistent with a sialylα2,3 preference. Overall, the prediction would be that H4 HA with W222 would have a glycosylation subtype similar to H3 HA (high-mannose at "N165"), whereas those with L222 may or may not, depending on the intra- and intermolecular connectivities at the site. Strains used in this study are shown in Table 1.
Overall glycosylation
PNGase F-released glycans from whole viruses grown in eggs were permethylated and analyzed by MALDI-TOF MS. The overall glycoform distributions for the four HAs are shown in Figure 1. The figure shows the predominance of high-mannose glycans. Figure 2 shows a detailed comparison of glycoforms grouped by the number of N-acetylhexosamines (HexNAc2, HexNAc3, or HexNAc4+), which is reflective of complexity. Those with HexNAc2 are associated with high-mannose glycans, those with three HexNAcs are associated with hybrid or short complex ones, and those with more than three HexNAcs are associated with complex glycans. These associations are well documented in the literature and accepted as such (Kornfeld and Kornfeld, 1985; Schachter, 2000; Parsons et al., 2019; Parsons et al., 2020; Shajahan et al., 2020). Although high-mannose glycans dominated the profiles, both hybrid and complex glycans were also present. It should be noted that the overall glycosylation of the whole-viral samples grown in eggs here represents all glycosylation present in the virus and not just that of HA. Glycosylation on other proteins such as neuraminidase will affect the overall abundance, although hemagglutinin is the most abundant glycoprotein present in the virus. Typically, HA glycans represent more than 85% of the released glycans, as HA is approximately 70% of the viral protein and is highly glycosylated.
Occupancy
NetNGlyc is a publicly available online software application, published in 2004, that predicts glycosylation sites in human proteins (Gupta et al., 2004). It was used in this study to identify glycosylation sequons and their potential for glycosylation. Five sites are predicted to be glycosylated in the four strains studied (Table 2). Sites N2 and N165 of the mature sequence are predicted to have the highest potential to be glycosylated, sites N18 and N481 are predicted to have low-to-intermediate potential, and site N294 is predicted to have the least. (In Table 2, a higher number of asterisks indicates a higher potential to be glycosylated.)
FIGURE 2. Detailed compositions from MALDI-TOF MS analysis of PNGase F-released glycans from H4N6 strains. Glycoforms are grouped by the number of N-acetylhexosamines (HexNAc2, HexNAc3, or HexNAc4+), which is reflective of complexity: the first group is primarily high-mannose, the second hybrid, and the last complex glycans.
Site-specific glycosylation was assessed on HILIC-enriched glycopeptides subsequently treated with chymotrypsin. The glycan heterogeneity and relative intensity at individual glycosites are shown in Figure 3. Only trace abundances of glycopeptides containing N2 or N18 were found in the Teal10, Teal83, or SW15 samples.
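The grouping by HexNAc count used in Figures 2 and 3 can be expressed as a small helper, sketched below. This is a deliberately coarse heuristic over composition strings such as "Hex8HexNAc2"; it is not a structural assignment and is not part of the GLYMPS pipeline.

```python
# Minimal sketch of the HexNAc-based grouping used to summarise glycoforms
# (HexNAc2 ~ high-mannose, HexNAc3 ~ hybrid/short complex, HexNAc4+ ~ complex).
import re

def classify_glycan(composition: str) -> str:
    match = re.search(r"HexNAc(\d+)", composition)
    if not match:
        raise ValueError(f"No HexNAc count found in {composition!r}")
    hexnac = int(match.group(1))
    if hexnac <= 2:
        return "high-mannose"
    if hexnac == 3:
        return "hybrid"
    return "complex"

for comp in ["Hex8HexNAc2", "Hex7HexNAc2", "Hex5HexNAc3", "Hex5HexNAc4dHex1"]:
    print(comp, "->", classify_glycan(comp))
```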
While the signal intensity from the duck sequence at the N2 site was weak, multiple glycosyl and peptide fragments were found, consistent with high-mannose glycans. This is likely because the glycopeptide was rather long and hydrophilic, resulting in lower ionization efficiency. The percent abundances of the N165, N294, and N481 glycopeptide assignments found in all samples, and a comparison between samples, are shown in Figure 4 and listed in Supplementary Table S1. The range of glycans, their abundance, and their glycan family subtype distributions at each site were similar across the egg-derived HAs. The N165 site in all four samples (duck, teal83, teal10, and sw15) was populated with only high-mannose forms, the majority being Hex8HexNAc2 in all samples. Representative spectra are shown in Supplementary Figures S4-S7. N294 glycosylation was mostly high-mannose in the two teal samples but a combination of high-mannose and HexNAc3-4-containing glycoforms in the duck and swine (SW15) virus samples. The N481 glycan was mostly hybrid and complex across all four samples, with very similar abundances. A full list of assignments is given in Supplementary Tables S1A-D. The most abundant glycoform at each site in each sample is shown in Figure 4, along with its location on the hemagglutinin monomer. The samples contained predominantly high-mannose glycans at the head (site N165) and mid (site N294) locations.
FIGURE 3. Glycosylation profile comparisons at each glycosylation site. Individual compositions at N165, N294, and N481 are shown for each of the four H4N6 strains examined. N165 is essentially only high-mannose, whilst N294 may contain primarily high-mannose or up to a mixture of high-mannose, hybrid, and complex types, depending on the strain. N481 contains primarily hybrid and complex N-glycans.
FIGURE 4. Rendering of the H4 monomer with the most abundant glycan detected at each site shown, and the distribution of glycan subtypes shown with a colored bar, where green represents high-mannose, blue hybrid, and yellow complex glycans.
Molecular comparison of H3 and H4 HA crystal structures
Based on our glycoproteomic mass spectrometry analysis, glycosite N165 is occupied exclusively by high-mannose glycans. Figure 5 shows representative crystal structure projections of the intramolecular contacts observed in and around glycosite N165 for H3 HA from A/Hong Kong/1/1968 (Figure 5A), H4 HA from A/duck/Czechoslovakia/1956 (Figure 5B), and H4 HA from A/swine/Missouri/A01727926/2015 (Figure 5C). A full list of the molecular contacts is given in Supplementary Table S2. PDB structures of the teal sequences are not available, but they are about 96% identical to the duck sequence and about 98% identical to the swine sequence, and both teal sequences have W222, similar to the swine sequence (see Supplementary Figure S2 for a sequence alignment). As previously reported, H3 HA N165 is more N-terminally located on the beta strand, in a less solvent-exposed region, than the corollary site of the group 1 viruses from our previous study (H2, H5, H6, and H11 HAs), where the site is more C-terminal and solvent-exposed (Parsons et al., 2017; Parsons et al., 2020). The location of the H4 HA N165 glycosite is more similar to that of the H3 HAs.
As seen in the H3 HA representation in Figure 5A, the inner core of the high-mannose glycan forms extensive contacts with the neighboring subunit 220-loop residues S219, P221, and W222. Most of these are between the second GlcNAc of the glycan and P221 and W222, with W222 aligning in a planar arrangement with the chitobiose (GlcNAc2) rings. Overall, 20 contacts were counted. Figure 5B shows the structure of the duck HA studied here. The H4 HA N165 glycosite is located in the analogous position on the beta sheet compared to H3 HA. However, the neighboring subunit 220 loop has leucine instead of tryptophan at position 222, and no contacts between this amino acid and the N165 glycan core are observed. There are also no contacts to P221. Intra-subunit contacts are observed between the aglycone-most GlcNAc (closest to the glycosidic linkage) and S186, T187, and S219. The rest of the glycan cannot be seen, suggesting that there is little to no contact between the rings of the chitobiose unit and the 220 loop, unlike in the H3 structure. Figure 5C shows the structure of the swine HA. N165 is again placed in the same position as that of H3 HA. In this case, W222 is present. However, unlike the H3 configuration, the chitobiose unit of the N165 glycan is not planarly oriented with W222, although several contacts are made between the O4 oxygen of the aglycone GlcNAc and the NE1 nitrogen and the CE2 and CZ2 carbons of W222. Contact is also made between the GlcNAc C1 and the S219 CB carbon. Supplementary Table S2 lists the contacts shown in Figure 5 and described herein. In summary, the H4 HAs all contained exclusively high-mannose glycans at N165, and examination of the crystal structure data reveals contacts and an orientation of the H4 HA N165 glycosite that limit solvent exposure relative to the group 1 HAs previously examined, similar to the group 2 H3 HA shown here. The H4 N165 structural orientation likely limits exposure of the site to the glycosylation processing machinery, thus limiting the glycosylation subtypes to high-mannose glycan. Interestingly, the high-mannose glycans at site N165 on the duck H4 HA, which has L222 and fewer contacts with the 220 loop (Figure 5), included more Hex7HexNAc2 than in the viruses with W222, which had predominantly Hex8HexNAc2 at N165 (Figure 3). Hex7HexNAc2 glycoforms are further along the glycan processing pathway than Hex8HexNAc2 glycoforms. It is tempting to speculate that the additional processing occurs because of the apparently less structured N165 glycan in the duck H4 HA. The SP-D binding results are summarized in Table 3. H4 and H3 viruses had highly similar dissociation constants, consistent with similar binding properties to SP-D. All H4 viruses examined are ligands for recombinant SP-D and are likely to be efficiently acted upon by endogenous SP-D. We extended our analysis to assess whether selectively reducing the high-mannose glycans would reduce SP-D binding. Our previous unpublished and published studies revealed that the conditions required for PNGase F digestion of intact, non-denatured HA resulted in the loss of detectable potency as tested using the single radial immunodiffusion (SRID) test. Furthermore, CD analysis revealed a change in the detected secondary structure after PNGase F digestion. Additionally, even if the enzyme had worked, the release would have been indiscriminate with respect to N-glycan subtype. However, digestion with Endo F1, F2, and F3 preserved CD patterns and SRID-detected potency. We used endoglycosidase F1, which exclusively targets high-mannose and hybrid glycans (Maley et al., 1989).
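The SP-D binding measurements that follow rely on the double-referencing described in the methods: the virus response on the SP-D surface is corrected by subtracting the mock-surface response and a buffer-only injection. The sketch below illustrates that arithmetic with synthetic numbers; it is not data from this study and is not the ProteOn Manager kinetic analysis.

```python
# Sketch of double-referencing applied to SPR sensorgrams: subtract the mock
# (uncoupled) surface and a buffer-only injection from the SP-D surface response.
# Arrays below are synthetic placeholders, not data from this study.
import numpy as np

time = np.linspace(0, 720, 721)                  # s: 120 s association + 600 s dissociation
spd_surface = 80 * (1 - np.exp(-time / 60.0))    # toy virus response on SP-D surface (RU)
mock_surface = 5 * (1 - np.exp(-time / 60.0))    # bulk/non-specific response (RU)
buffer_blank = np.full_like(time, 1.5)           # systematic offset from buffer injection (RU)

corrected = spd_surface - mock_surface - buffer_blank
print(f"Maximum double-referenced response: {corrected.max():.1f} RU")
```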
SP-D binding characteristics of avian and human IAV HAs by surface plasmon resonance
Although complete digestion cannot be expected given the mild, native-preserving conditions used and the recessed position of site N165, we did detect a reduction in SP-D binding comparable to that of H3, which is known to bind SP-D in a manner dependent on high-mannose glycosylation at N165 (Hartshorn et al., 1996; Hartshorn et al., 2008; Crouch et al., 2011b; Parsons et al., 2020) (Supplementary Table S3).
Discussion
The domestic duck is positioned at the boundary between wild aquatic and terrestrial fowl. This situation is known to play an important role in the ecology of the IAV (Li et al., 2004; Huang et al., 2010; Huang et al., 2012; Kim et al., 2012). H4 influenza was initially isolated in the former Czechoslovakia in 1956 and is now known to circulate widely in domestic fowl in Asia, Europe, and North America. This LPAIV has been known to reassort with other influenza subtypes, and there is evidence, for instance, that H4 viruses have reassorted with H3 and H11 IAV (Teng et al., 2012; Deng et al., 2013). Transfer of H4 LPAIV from an avian host to domestic swine has been reported (Karasin et al., 2000). Reassorted H4 strains isolated from domestic fowl in live poultry markets were shown to infect mice without prior adaptation (Wu et al., 2015). Furthermore, H4 IAV has crossed the species barrier to infect humans. This has been seen in Lebanese poultry workers, as revealed through serologic antibody detection by microneutralization and RBC hemagglutination assays (Kayali et al., 2011). In 2000, H4 HAs from swine isolates were shown to harbor mutations at amino acid residues 226 and 228, specifically Q226L and G228S (Karasin et al., 2000). The G228S mutation alone imparted dual-receptor specificity such that both sialylα2,3 and sialylα2,6 receptors were preferred, aligning receptor specificity with an abundant swine host ligand. Addition of the Q226L mutation refined the specificity further toward sialylα2,6 receptors, and thus toward the human sialyl ligand, which is predominantly of this linkage type (Bateman et al., 2008). As swine are an intermediary host between avian and human species, this situation is a cause for concern for human infection. The current work investigated a possible further pathogenic characteristic, i.e., the potential susceptibility of the LPAIV strains to lung SP-D, a primary innate immune system lectin prevalent in respiratory tissue and known to be a key factor in reducing influenza burden in the lung (Hartshorn et al., 1994; Watson et al., 2020). A key factor in the interaction between SP-D and influenza virus is a specific glycosylation site on the globular head of HA. In the H3 viruses, this site is located at N165 and nearly exclusively harbors a high-mannose glycan. Extensive evidence in the literature shows that the high-mannose glycan interacts with specific regions in SP-D, whereby the Manα1,3-linked arm residues are key (Crouch et al., 2009; Crouch et al., 2011b). Previously, we investigated representatives of the avian LPAIV from H2, H5, H6, and H11 viruses.
Sequence alignments indicated that, compared to H3 IAV, these LPAIV strains contained a "N165" glycosite that was four residues further toward the C-terminus and further down its resident beta sheet, making it more solvent-accessible (Parsons et al., 2017; Parsons et al., 2020), and glycoproteomics analysis revealed that all the LPAIV strains tested contained complex glycans at the glycosite and did not bind SP-D significantly compared to H3 HAs. Replacement of the LPAIV glycan at "N165" in H6 recombinant HA with a high-mannose glycan, via growth in HEK293 cells in the presence of swainsonine, a potent mannosidase inhibitor, evoked an increase in SP-D preference, thus demonstrating that the high-mannose glycan, and not strictly the position in the HA tertiary structure, was responsible for the interaction with SP-D in those group 1 AIVs. Essentially, the position of the glycosite in group 1 and group 2 HAs appears to dictate access by the ER and Golgi processing enzymes, either allowing elaboration of high-mannose glycans into complex ones (group 1) or preventing such processing through poor access to the processing machinery (group 2). Sequence alignment of the group 2 LPAIV H4 HAs studied here with H3 HAs produced a different pattern from the group 1 HAs. Clearly, the Q226L/G228S mutations present a possible pandemic risk. Again, our question was whether such pandemic strains would be susceptible to SP-D. Susceptibility to SP-D would decrease the potential pathogenicity, whereas lower susceptibility to the surfactant would increase potential pathogenicity. Sequence analysis and inspection of available X-ray crystal structures revealed that the position of the H4 HA glycosite N165 is highly similar to that of H3, sitting high on its resident beta sheet in essentially the same position as in H3 HAs (Supplementary Figure S1). However, key residues involved in contacts between the 220 loop and the glycan core chitobiose (GlcNAcβ1-4GlcNAc-) moiety differed. A major interaction observed in the H3 structure involved multiple contacts with W222, P221, and S219. The duck H4 HA had leucine instead of tryptophan at position 222, although S219 was present. Consequently, the orientation of the duck H4 glycan core differed from that of H3. Again, we note that the duck strain contains Q226 and G228, and thus the avian receptor type. The swine H4 HA, containing L226 and S228, and thus the human receptor type, contained the H3-like S219 and W222 but did not form close contacts with P221 and was not planar to W222 as in the H3 HA. Neither the swine nor the duck X-ray structure exhibits the more complete glycoform observed in the H3 structure, hinting at more mobility in the H4 duck and swine HAs versus H3 HAs. Although no X-ray crystal structures were available for the teal10 and teal83 HAs, they are highly similar to duck in the N165 glycosite region, except for the presence of W222 in place of L222. Despite the differences in the crystal structures, the HA glycosite N165 of all of the H4 HAs examined by glycoproteomics analysis herein contained exclusively high-mannose glycans. Therefore, if such viruses were to infect humans, they would likely be acted upon significantly by SP-D. The SP-D surface plasmon resonance studies performed here demonstrated an SP-D preference similar to that of a seasonal H3N2 HA and much higher than that of H6N2, which contains a complex glycan at N165.
There is clear evidence that SP-D is a key factor in removing influenza virus from respiratory tissues. It is well documented that the affinity of SP-D for influenza HA is related to key N-glycosites in the globular head region of HA (LeVine et al., 2001; Hartshorn et al., 2002; Hartshorn et al., 2008; Tecle et al., 2008; White et al., 2010) and that increasing the number of high-mannose glycosites on the head enhances this interaction (Parsons et al., 2020). This has been previously shown for H3 and H1 viruses. In studies of H3 HA, it has been shown that the absence of the key N165 glycosite enhances pathogenicity in mice (Hawgood et al., 2004). A study that tested historically relevant seasonal influenza-like strains with increasing high-mannose glycans demonstrated decreased pathology in murine lung after exposure; exposure of SP-D−/− mice to these strains restored susceptibility to infection and increased pathology in respiratory tissues (Wanzeck et al., 2011; An et al., 2015). At the molecular level, based on crystallographic studies, specific mannose residues interact with the SP-D-binding site, the most strongly interacting being the Manα1,2Man disaccharides that are present in the larger high-mannose glycoforms (Crouch et al., 2009), and the majority of the high-mannose glycans detected at N165 in all four H4 viruses studied here contained these residues. Docking experiments using both H3 and H1 viruses and human wild-type and lower-affinity mutant constructs of SP-D have refined our understanding of how glycosite high-mannose glycans interact with the SP-D carbohydrate recognition domain (Crouch et al., 2011b). There is a range of clinical implications related to SP-D activities, as revealed in group 2 IAV strains lacking N165, certain co-morbidity states, murine models, and SP-D polymorphisms. SP-D polymorphisms have been associated with respiratory infection risk. Those that result in reduced formation of higher-molecular-weight multimers have reduced anti-IAV activities and have been shown to increase the risk of respiratory infection in children with the associated haplotypes (Thomas et al., 2009). Complications of diabetes, including the primary effects of high glucose levels, have been shown to impair SP-D binding in a murine "metabolic syndrome" model, which correlates with the increased risk of respiratory infection in diabetes mellitus (Reading et al., 1998). IAV strains lacking high-mannose glycan at N165, or at other high-mannose sites in the globular head, have been shown to be less susceptible to SP-D activity and to cause higher morbidity and mortality in murine models (Hartley et al., 1997; Reading et al., 2009). H5 IAVs, which do not contain high-mannose glycan on the globular head region, are associated with high morbidity and mortality in human outbreaks (Lee et al., 2005; Chen et al., 2006), and a contributing factor may be the inability of SP-D to interact with these strains. Also, previous publications showed that deficiency in SP-D through genetic knockout leads to more severe infections with H3 IAV in murine studies (Vigerust et al., 2007). Recombinant multimeric SP-D has been proposed as a therapeutic intervention (White et al., 2001; Orgeig et al., 2010) and could be a useful agent against pandemic and seasonal IAVs that are predicted to contain high-mannose glycans on the HA head region. In conclusion, the H4 viruses have a demonstrated propensity for gain-of-function mutations at key residues in the receptor-binding domain.
Adaptation to the swine intermediary host through G228S, and toward human receptor-binding patterns through the combined Q226L and G228S mutations, has been documented. However, glycosylation site N165 in the H4 AIVs has a strong propensity to contain high-mannose glycans, and these AIVs are therefore susceptible to SP-D activity. This mosaic of RBS changes and head glycosite characteristics may predict key features of infection should these HAs appear in humans. That is, human infection may occur based on receptor-binding specificity, but SP-D could be effective in removing these AIVs from respiratory tissue.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
Galactic chemical evolution in hierarchical formation models - I. Early-type galaxies in the local Universe
We study the metallicities and abundance ratios of early-type galaxies in cosmological semi-analytic models (SAMs) within the hierarchical galaxy formation paradigm. To achieve this we implemented a detailed galactic chemical evolution (GCE) model and can now predict abundances of individual elements for the galaxies in the semi-analytic simulations. This is the first time a SAM with feedback from Active Galactic Nuclei (AGN) has included a chemical evolution prescription that relaxes the instantaneous recycling approximation. We find that the new models are able to reproduce the observed mass-metallicity (M*-[Z/H]) relation and, for the first time in a SAM, we reproduce the observed positive slope of the mass-abundance ratio (M*-[α/Fe]) relation. Our results indicate that in order to simultaneously match these observations of early-type galaxies, the use of both a very mildly top-heavy IMF (i.e., with a slope of x = 1.15 as opposed to a standard x = 1.3) and a lower fraction of binaries that explode as Type Ia supernovae appears to be required. We also examine the rate of supernova explosions in the simulated galaxies. In early-type (non-star-forming) galaxies, our predictions are also consistent with the observed SNe rates. However, in star-forming galaxies, a higher fraction of SN Ia binaries than in our preferred model is required to match the data. If, however, we deviate from the classical model and introduce a population of SNe Ia with very short delay times, our models simultaneously produce a good match to the observed metallicities, abundance ratios and SN rates.
INTRODUCTION
The chemical properties and abundance ratios of galaxies provide important constraints on their formation histories. Galactic chemical evolution has been modelled in detail in the monolithic collapse scenario (e.g., Matteucci & Greggio 1986; François et al. 2004; Romano et al. 2005; Pipino & Matteucci 2004, 2006). These models have successfully described the abundance distributions in our Galaxy and other spiral discs, as well as the trends of metallicity and abundance ratios observed in early-type galaxies. In the last three decades, however, the paradigm of hierarchical assembly in a Cold Dark Matter (CDM) cosmology has revised the picture of how structure in the Universe forms and evolves. In this scenario, galaxies form when gas radiatively cools and condenses inside dark matter haloes, which themselves follow dissipationless gravitational collapse (White & Rees 1978; White & Frenk 1991). The CDM picture has been successful at predicting many observed properties of galaxies, though many potential problems and open questions remain. It is therefore interesting to see whether chemical evolution models, when implemented within this modern cosmological context, are able to correctly predict the observed chemical properties of galaxies. The semi-analytic approach provides a cosmological framework in which to study galaxy formation and chemical evolution in different environments, by following the merger history of dark matter haloes and the relevant physical processes such as gas cooling, star formation and feedback (e.g. White & Frenk 1991; Kauffmann et al. 1993; Cole et al. 1994, 2000; Hatton et al. 2003; Somerville & Primack 1999; Somerville et al.
2001).A major challenge for models of galaxy formation within the CDM picture arises from the mismatch between the shape of the mass function of the dark matter haloes and that of the baryonic condensations that we call galaxies (White & Frenk 1991;Kauffmann et al. 1993;Somerville & Primack 1999;Benson et al. 2003).The CDM theory predicts a steeper slope for low-mass halos, and a more gradual drop-off in the abundance of high-mass halos than is seen in luminous galaxies, implying that the formation of stars must be inefficient in both low-mass and high-mass haloes (Moster et al. 2009).However, the inclusion of physically motivated, if still ad hoc, feedback processes in the semi-analytic models can cure these discrepancies.The faint end of the luminosity function can be matched with a combination of supernova feedback and suppression of gas cooling in low mass haloes as a result of a photo-ionising background.At the bright end, heating by giant radio jets powered by accreting black holes has become a favored mechanism for preventing over-cooling and quenching star formation in massive halos (Croton et al. 2006;Bower et al. 2006;Somerville et al. 2008).This latest generation of semi-analytic models ('SAMs') is successful at reproducing many properties of galaxies at the present and at high redshift, for example, the luminosity and stellar mass function of galaxies, color-magnitude or star formation rate vs. stellar mass distributions, relative numbers of early and late-type galaxies, gas fractions and size distributions of spiral galaxies, and the global star formation history (e.g.Croton et al. 2006;Bower et al. 2006;Cattaneo et al. 2006;De Lucia et al. 2006;Somerville et al. 2008;Kimm et al. 2009;Fontanot et al. 2009, to name just a few). The modelling of chemical enrichment of the galaxies and intergalactic (and intracluster) gas, however, has not been thoroughly developed in semi-analytic models, and to date most SAMs have only used the instantaneous recycling approximation (in essence only considering enrichment by type II supernovae) and trace only the total metal content.There are, however, a few models that have included a more refined treatment of the chemical enrichment.Thomas (1999) and Thomas & Kauffmann (1999) were the first to include enrichment by type Ia supernovae (SNe Ia) in models with cosmologically motivated star formation histories.However, rather than implementing the chemical evolution self-consistently within a semi-analytic model, they made use of star formation histories from the SAM and assumed a closed-box model (no gas inflows or outflows) for the chemical evolution.They calculated the evolution of [Fe/H] and [Mg/Fe] and found a decreasing trend of [Mg/Fe] with increasing galaxy luminosity, in stark disagreement with observations (e.g.Worthey et al. 1992;Trager et al. 2000a,b;Thomas et al. 2005). The first semi-analytic model to self-consistently track a variety of elements due to enrichment by SNe Ia and type II supernovae (SNe II) was that of Nagashima et al. (2005a,b).Among other things, they adopted a bimodal IMF described by a standard IMF for normal quiescent star formation in discs and an extremely flat 'top-heavy' IMF during mergerdriven starbursts.This model was motivated by the difficulty that semi-analytic models with a standard IMF experienced in reproducing the observed population of very luminous sub-mm galaxies at high redshift (Baugh et al. 
2005).However, the notion that early-type galaxies form their stars with an IMF flatter than standard is not new and has been proposed many times in the past as a plausible explanation for the abundance patterns in early-type galaxies and in the ICM of galaxy clusters (e.g., Worthey, Faber & Gonzalez 1992;Matteucci & Gibson 1995;Gibson & Matteucci 1997;Thomas, Greggio & Bender 1999).The predictions of the Nagashima et al. model were in good agreement with the abundances of the intracluster medium (ICM) of galaxy clusters, matching the trend of individual elements (O, Fe, Mg, Si) and abundance ratios with ICM temperature.However, the same model failed to reproduce the trend of [α/Fe] in early-type galaxies, where they found that the abundance ratio decreases with increasing galactic velocity dispersion, again in clear contradiction with observations.Very recently, Pipino et al. (2008) have coupled galactic chemical evolution to the GalICS semi-analytic model (Hatton et al. 2003), and obtained results similar to those of Nagashima et al. (2005b). In a simple closed-box picture, it is well-known that galaxies with short star formation timescales are expected to have enhanced [α/Fe] ratios (because their enrichment is dominated by α-rich Type II SNe), while galaxies with extended star formation histories tend to have lower [α/Fe] (e.g.Worthey et al. 1992;Thomas et al. 1999;Thomas 1999;Thomas & Kauffmann 1999;Trager et al. 2000b), because of the additional Fe contributed by delayed Type Ia enrichment.Therefore a possible interpretation of the difficulties that CDM-based galaxy formation models have experienced in reproducing the positive trend between mass, luminosity, or velocity dispersion and abundance ratio is related to the issue of so-called "downsizing".This refers to the variety of observational evidence that high-mass galaxies formed their stars early and over short timescales, while low-mass galaxies have more extended star formation histories (see Fontanot et al. 2009, for a summary).Before the inclusion of AGN feedback or some other mechanism that quenches star formation in massive halos, CDM-based galaxy formation models predicted the opposite trend (massive galaxies continued to accrete gas and form stars until the present day, leading to extended star formation histories).It has been demonstrated that including radio-mode AGN feedback in semianalytic models leads to a "downsizing" trend for star formation that is at least qualitatively in better agreement with observations (De Lucia et al. 2006;Somerville et al. 2008;Trager & Somerville 2009;Fontanot et al. 2009).Therefore we expect that the new models might do better at reproducing the trend of [α/Fe] with mass as the more massive galaxies will have shorter star formation timescales. Clearly, observations of chemical abundances and abundance ratios in various phases (stellar, ISM, ICM) offer the opportunity to obtain strong constraints on galaxy formation histories and the physics that shapes them.However, in order to take advantage of these observations, it is necessary to implement detailed modeling of chemical evolution into a full modern SAM that includes the relevant physical processes (e.g.triggered star formation and morphological transformation of galaxies via mergers, the growth of supermassive black holes, and AGN feedback).In this work we incorporate detailed chemical evolution into the semi-analytic galaxy formation model of Somerville et al. 
(2008), taking into account enrichment by SNe Ia, SNe II and long-lived stars, and abandoning the instantaneous recycling approximation by considering the finite lifetimes of stars of all masses. The delay in the metal enrichment by SNe Ia is calculated self-consistently according to the lifetimes of the progenitor stars. This is, to our knowledge, the first time that detailed chemical evolution has been included in a semi-analytic model with AGN feedback (both radio-mode heating and AGN-driven winds). The base model includes gas inflows due to radiative cooling of gas and outflows due to supernova- and AGN-driven winds. We compute the abundances of many α and Fe-peak elements for early-type galaxies of different masses, exploring different IMF slopes and values for the fraction of binaries that yield an SN Ia event, and compare these with observations of abundances and abundance ratios for a sample of local early-type galaxies. We also calculate SNe rates using both the classical Greggio & Renzini (1983) approach for Type Ia SNe and the more recent Delay-Time-Distribution (DTD) formalism (Greggio 2005). Another improvement in the present work is our use of re-calibrated estimates for chemical abundances obtained from line-strengths in early-type galaxies (see Appendix B for details). The outline of the paper is as follows. In Section 2 we give an overview of the main ingredients of the semi-analytic model. In Section 3 we describe in detail the adopted treatment for the chemical evolution. In Section 4 we present our predictions and compare them with observations. In Section 5 we summarise our findings and present our conclusions. Two appendices describe the detailed implementation of the chemical evolution model and the data used in this paper.
THE SEMI-ANALYTIC MODEL
In this section we summarise the basic ingredients of the SAM used to model the formation and evolution of galaxies. These include the growth of structure of the dark matter component in a hierarchical clustering framework, radiative cooling of gas, star formation, supernova feedback, AGN feedback, galaxy merging within dark matter haloes, metal enrichment of the ISM and ICM, and the evolution of stellar populations. The reader is referred to Somerville & Primack (1999), Somerville et al. (2001) and especially Somerville et al. (2008, hereafter S08) for a comprehensive and detailed description of the different prescriptions used in this semi-analytic model. In what follows we briefly sketch the modelling of the most important physical processes.
Dark matter merger trees and galaxy merging
The merging histories (or merger trees) of dark matter haloes are constructed based on the Extended Press-Schechter formalism using the method described in Somerville & Kolatt (1999), with improvements described in S08. Each branch in the tree represents a merger event, and, in order to make the process finite, the trees are followed down to a minimum progenitor mass of 10¹⁰ M⊙. Whenever dark matter haloes merge, the central galaxy of the largest progenitor becomes the new central galaxy, and all others become 'satellites'. Satellite galaxies may eventually merge with the central galaxy due to dynamical friction. To model the timescale of the merger process we use a variant of the Chandrasekhar formula from Boylan-Kolchin et al. (2008). Tidal stripping and destruction of satellites are also included as described in S08.
Gas cooling, star formation and supernova feedback
Before the Universe is reionised, each halo contains a mass of hot gas equal to the universal baryon fraction times the virial mass of the halo. After reionisation, the photo-ionising background can suppress the collapse of gas into low-mass haloes. We use the results of Gnedin (2000) and Kravtsov et al. (2004) to model the fraction of baryons that can collapse into haloes of a given mass after reionisation. When a dark matter halo collapses, or merges with a larger halo, the gas within it is shock-heated to the virial temperature of the halo, and gradually radiates and cools at a rate given by the cooling function. To calculate this function we use the metallicity-dependent radiative cooling curves of Sutherland & Dopita (1993). A detailed description of how the cooling process is modelled can be found in S08. The rate at which gas can cool is expressed in terms of m_hot, the mass of the hot halo gas; r_vir, the virial radius of the dark matter halo; and r_cool, the radius within which all of the gas can cool in a time t_cool, which itself depends on density, metallicity and temperature. In our models, we assume that the cold gas is accreted only by the central galaxy of the halo, but in reality satellite galaxies should also receive some measure of new cold gas. This aspect of the modelling should be improved (cf. Pipino et al. 2008), and for the present study we restrict our analysis to only the central galaxy of each halo, except when otherwise stated. When the gas cools we assume that it settles into a rotationally supported disc. The radial sizes of the discs are calculated according to the results described in Somerville et al. (2008), and agree well with observed disc sizes to z ∼ 2. We model the star formation rate in quiescent discs with a recipe based on the empirical Schmidt-Kennicutt law (Kennicutt 1989), in which ṁ⋆ is the star formation rate, Σ_0 ≡ m_cold/(2π r_gas²) is the average surface density of the cold gas, r_gas is the scale-length of the gaseous disc (assumed to be an exponential disc with its scale-length proportional to that of the stellar disc), r_crit is the radius at which the gas reaches the critical surface density threshold for star formation (Σ_crit), and A_K and N_K are the normalisation and slope of the SFR law. We adopt the values A_K = 8.35 × 10⁻⁵, N_K = 1.4 and Σ_crit = 6 M⊙ pc⁻², as in S08. Galaxy mergers in the SAM trigger enhanced episodes of star formation. The burst is modelled by two parameters, the time-scale and the efficiency of the burst. The time-scale is a function of the virial velocity of the progenitor galaxies, the equation of state of the gas, the cold gas fraction in the discs, and the redshift (Robertson et al. 2006). The efficiency, which is defined as the fraction of the cold gas reservoir (of both galaxies) that is turned into stars during the burst, is assumed to be a power-law function of the mass ratio of the merging galaxies, and the exponent of the power-law depends on the galaxy morphology (Cox et al. 2008). The collisional starburst occurs in addition to any ongoing 'normal' quiescent star formation, which continues uninterrupted through the merger but is usually insignificant in comparison to the burst. Any new stars formed during the burst are always placed in the bulge component of the resulting galaxy.
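As an illustration of how a recipe of this kind can be evaluated, the sketch below applies a Kennicutt-type power law of slope N_K above a threshold Σ_crit to an exponential gas disc and integrates it out to r_crit. It is a schematic of the structure of the recipe only, with the unit bookkeeping left loose; it is not the exact implementation used in S08 or in this work.

```python
# Schematic sketch of a Kennicutt-type quiescent star-formation recipe: an
# exponential cold-gas disc, a power law of slope N_K applied where the surface
# density exceeds Sigma_crit, integrated out to r_crit. Numbers and units are
# illustrative only.
import numpy as np

A_K, N_K, SIGMA_CRIT = 8.35e-5, 1.4, 6.0          # normalisation, slope, threshold

def quiescent_sfr(m_cold, r_gas, n_bins=2000):
    """Integrate A_K * Sigma(r)**N_K over annuli where Sigma(r) > Sigma_crit."""
    sigma0 = m_cold / (2.0 * np.pi * r_gas**2)     # central surface density, Sigma_0
    if sigma0 <= SIGMA_CRIT:
        return 0.0                                  # whole disc below the threshold
    r_crit = r_gas * np.log(sigma0 / SIGMA_CRIT)    # Sigma(r_crit) = Sigma_crit
    r = np.linspace(0.0, r_crit, n_bins)
    dr = r[1] - r[0]
    sigma = sigma0 * np.exp(-r / r_gas)             # exponential gas disc
    sfr_surface_density = A_K * sigma**N_K
    return float(np.sum(2.0 * np.pi * r * sfr_surface_density) * dr)

# Illustrative call: cold gas mass and disc scale length in arbitrary consistent units.
print(quiescent_sfr(m_cold=5.0e9, r_gas=4.0e3))
```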
As supernovae occur, they inject energy into the ISM and reheat the cold gas, which is then expelled from the disc and incorporated into the hot halo gas, where it can cool again. The rate of reheating by SNe is given by
ṁ_rh = ε₀^SN (200 km s⁻¹ / V_disc)^α_rh ṁ⋆ ,
where ε₀^SN and α_rh are free parameters. The circular velocity of the disc, V_disc, is taken to be equal to the maximum rotational velocity of the dark matter halo. Some fraction of the reheated gas can also be ejected from the halo entirely into the diffuse Intergalactic Medium. This fraction is described by
f_eject(V_vir) = [1 + (V_vir/V_eject)^α_eject]⁻¹ ,
where α_eject = 6 and V_eject is a free parameter in the range ≃ 100-150 km/s. This ejected gas is allowed to re-collapse into the halo at later times and once again becomes available for cooling.
Formation of Spheroids
In most semi-analytic models each merger is classified as 'major' or 'minor' depending on whether the ratio of the smaller to the larger galaxy's baryonic mass is greater than or less than the parameter f_ellip ∼ 0.25, respectively. The usual assumption is then that, in a major merger, the bulge and disc stars of both progenitor galaxies, as well as the stars formed in the merger-driven starburst (see below), are transferred to the bulge component of the resulting galaxy. In a minor merger, all the pre-existing stars of the smaller galaxy end up in the disc of the post-merger galaxy, and all the newly formed stars are placed in the bulge. We follow a similar practice here, but instead of using a sharp threshold to define major or minor mergers, we use a more gradual transition function. In detail, when two galaxies with bulge masses B1 and B2, and disc masses D1 and D2, merge, the resulting galaxy has a bulge mass B_new = B1 + B2 + f_sph (D1 + D2) and a disc mass D_new = (1 − f_sph)(D1 + D2). The value f_sph is a continuous function of the total mass ratio (baryons and dark matter) in the central parts of the galaxy (see S08).
Black Hole Growth and AGN Feedback
The models of S08 also track the growth of super-massive black holes and the energy they release. Each top-level DM halo is seeded with a ∼100 M⊙ black hole, and these black holes are able to grow via two different accretion modes.
The first accretion mode is fuelled by cold gas that is driven into the nucleus of the galaxy by mergers. This mode is radiatively efficient, and the accretion rates are close to the Eddington limit. Because this accretion mode is associated with optically bright classical quasars and AGN, it is referred to as 'bright mode' or 'quasar mode' accretion. The second mode is fuelled by hot gas in a quasi-hydrostatic halo, and the accretion rate is modelled via the Bondi-Hoyle approximation. Accretion rates in this mode are significantly sub-Eddington (∼10⁻⁴ to 10⁻³ times the Eddington rate) and the accretion is assumed to be radiatively inefficient. This mode is, however, associated with the production of giant radio jets, and is therefore referred to as the 'radio mode'. Energy released during 'bright mode' activity can couple with the cold gas in the galaxy via radiation pressure, driving galactic-scale winds that can eject cold gas from the galaxy. The mass outflow rate due to the AGN-driven wind is expressed in terms of ε_wind, the effective coupling efficiency; V_esc, the escape velocity of the galaxy; and ṁ_acc, the accretion rate of mass onto the black hole. The radio jets produced by 'radio mode' activity are assumed to inject thermal energy into the hot halo gas, partly or completely offsetting the cooling flow. This process is responsible for quenching the star formation in massive galaxies (which contain massive black holes) and solves the 'overcooling problem' that plagued CDM-based galaxy formation models for many years.
Stellar Population Synthesis and Dust
In order to compare the luminosities and colours of the galaxies in the simulations with real observations, we convolve the star formation and chemical enrichment history of each galaxy with the multi-metallicity simple stellar population (SSP) models of Bruzual & Charlot (2003). We use the models based on the Padova1994 (Bertelli et al. 1994) isochrones with a Chabrier (2001) IMF. We also model the effects of dust extinction. Based on the model of Charlot & Fall (2000), we consider extinction due to two components, one due to the diffuse dust in the disc and another associated with the dense 'birth clouds' surrounding young star-forming regions. The V-band, face-on extinction optical depth of the diffuse dust is expressed in terms of τ_dust,0, a free parameter; Z_cold, the metallicity of the cold gas; m_cold, the mass of the cold gas in the disc; and r_gas, the radius of the cold gas disc. To compute the actual extinction we assign each galaxy a random inclination and use a standard 'slab' model. Additionally, stars younger than 10⁷ yr are enshrouded in a cloud of dust with optical depth τ_BC,V = μ_BC τ_V,0, where μ_BC = 3. Finally, to extend the extinction correction to other wavebands, we assume a Galactic attenuation curve (Cardelli et al. 1989) for the diffuse dust component and a power-law extinction curve A_λ ∝ (λ/5500 Å)^n, with n = 0.7, for the birth clouds.
Cosmological and Galaxy Formation Parameters
We adopt a flat ΛCDM cosmology with Ω₀ = 0.2383, Ω_Λ = 0.7617, h ≡ H₀/(100 km s⁻¹ Mpc⁻¹) = 0.732, σ₈ = 0.761, and a cosmic baryon fraction of f_b = 0.1746, following the results of Spergel et al. (2007). We adopt these parameters for consistency with the published models of S08, but find that we obtain nearly identical results with the updated values of the cosmological parameters from Komatsu et al. (2009).
We leave the values of the free parameters associated with the galaxy formation models fixed to the fiducial values given in S08. These values were chosen by requiring that the models reproduced key observations of nearby galaxies, such as the z ∼ 0 stellar mass function, and gas fractions and star formation rates as a function of stellar mass. These models have also been shown to produce reasonable agreement with observed local galaxy colour distributions in Kimm et al. (2009), and with observed stellar mass functions and star formation rates at high redshift (0 < z < 4; Fontanot et al. 2009). In § 4.1 we check that our new models, with the updated treatment of chemical evolution modelling, still reproduce the key observational quantities with the same values of the free parameters.
GALACTIC CHEMICAL EVOLUTION
In S08, the production of metals was tracked using a simple approach commonly adopted in semi-analytic models (see e.g. Somerville & Primack 1999; Cole et al. 2000; De Lucia et al. 2004; Kang et al. 2005). In a given time-step, when we create a parcel of new stars dm⋆, we also create a mass of metals dM_Z = y dm⋆, which we assume to be instantaneously mixed with the cold gas in the disc. The yield y is assumed to be constant, and is treated as a free parameter. We track the mean metallicity of the cold gas Z_cold, and when we create a new parcel of stars they are assumed to have the same metallicity as the mean metallicity of the cold gas in that time-step. Supernova feedback ejects metals from the disc, along with cold gas. These metals are either mixed with the hot gas in the halo, or ejected from the halo into the 'diffuse' Intergalactic Medium (IGM), in the same proportion as the reheated cold gas. The ejected metals in the 'diffuse gas' reservoir are also re-accreted into the halo in the same manner as the gas. In the present study, we discard the instantaneous recycling approximation and allow the ISM to be enriched by the products of type Ia and type II supernovae on their own timescales. Consequently, we now track individual elements, and not just the total metal content. The integrated ejecta of each element is not a free parameter, but instead is calculated according to theoretical yields and the star-formation histories provided by the SAM. In the next subsection we describe the implementation of the new chemical evolution model in detail.
Basic equations of the GCE

For the purposes of tracing the enrichment of the ISM, we still model our galaxies as a single zone with instantaneous mixing of gas. We assume that newly produced metals are deposited into the cold gas, and may subsequently be ejected from the galaxy and mixed with the hot halo gas (or ejected from the halo altogether) according to the feedback model described above. The metallicity of each new batch of stars equals that of the cold gas at the moment of formation. In this context, the evolution of the abundance of metals in the cold gas is given by

Ġ_Z(t) = −ψ(t) Z(t) + e_Z(t) + Ġ_Z,inf(t) − Ġ_Z,out(t),   (7)

where G(t) is the total mass of gas and Z(t) is the mass-weighted metal abundance, G_Z(t) = G(t)Z(t) is the mass of gas in the form of metals, ψ(t) is the star formation rate and ψ(t)Z(t) represents the rate at which metals are depleted from the ISM by star formation, e_Z(t) is the rate of ejecta of enriched material by dying stars (integrated over stellar mass), and the last two terms represent the infall of cooled halo gas into the galaxy and the outflow of reheated gas from the galaxy. Here we refer generally to 'metals' for simplicity, but in fact we apply this equation to each individual element by considering the abundance Z_i of a given element i instead of the total metallicity Z. For comprehensive reviews of Eq. 7, we direct the reader to Tinsley (1980) and Pagel (1997). The modelling of the star formation rate, the inflow rate (cooling flows) and the outflow rate (supernovae and AGN driven galactic winds) has already been sketched in the previous section. The prescriptions given there set the infall and outflow terms of Eq. 7 through the corresponding gas flow rates weighted by Z_c and Z_h, the abundances of the cold ISM gas and the hot halo gas, respectively.

In most SAMs previous to this work, chemical evolution was traced in a very simple manner by assuming a constant 'effective yield', or mean mass of metals produced per mass of stars, and the value of this effective yield was treated as a free parameter. In the models presented here, we have implemented detailed calculations for the production of heavy elements and the chemical enrichment of the ISM (the second term in Eq. 7) in a similar fashion to the models of Matteucci & Gibson (1995), Timmes et al. (1995) and Pipino & Matteucci (2004). In this framework, not only do we trace the evolution of the total metallicity, but we also track the distinct elements. At present we can follow the evolution of the abundances of 19 different elements, but here we will only discuss α-elements and Fe. By α-elements we mean the composite abundance of N, Na, Ne, Mg, Si and S.
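As a concrete illustration of how Eq. 7 can be advanced in time for a single element, consider the following sketch. The rate arrays would be supplied by the rest of the SAM; the explicit Euler step and the neglect of stellar mass return in the gas budget are simplifications made purely for illustration, not features of the model described here.

```python
import numpy as np

def evolve_cold_metals(t_grid, psi, e_Z, mdot_inf, Z_hot, mdot_out, G0, GZ0):
    """Explicit-Euler integration of
        dG_Z/dt = -psi*Z + e_Z + Z_hot*mdot_inf - Z*mdot_out,
    with all rates given as arrays sampled on t_grid (Gyr).
    Returns the final metal mass and the metallicity history Z(t)."""
    G, GZ = G0, GZ0
    Z_hist = np.zeros_like(t_grid)
    for k in range(len(t_grid) - 1):
        dt = t_grid[k + 1] - t_grid[k]
        Z = GZ / G if G > 0 else 0.0
        Z_hist[k] = Z
        dGZ = (-psi[k] * Z + e_Z[k] + Z_hot[k] * mdot_inf[k] - Z * mdot_out[k]) * dt
        dG = (-psi[k] + mdot_inf[k] - mdot_out[k]) * dt  # gas budget (stellar
        GZ = max(GZ + dGZ, 0.0)                          # mass return neglected
        G = max(G + dG, 0.0)                             # in this sketch)
    Z_hist[-1] = GZ / G if G > 0 else 0.0
    return GZ, Z_hist
```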
At any given time, the rate at which an element i is restored into the interstellar medium is calculated as a sum of integrals over stars in different mass ranges (Eq. 9), where M_L and M_U are the lower and upper masses of stars formed, ψ(t) is the star formation rate as before, φ(M) is the initial mass function (IMF), f(μ) is the distribution function for the mass of the secondary star in a binary pair (μ = M_2/M_B), τ_M is the lifetime of a star of mass M, and Q_mi(t) represents the fractional mass of an element i restored by a star of mass M in the form of both newly synthesised and unprocessed material. Although not explicitly dependent on time, the quantities Q_mi depend on metallicity, which of course evolves in time. Each of the integrals in Eq. 9 represents the contribution to the enrichment by stars in a different mass range. The first integral indicates the contribution of single stars with masses between M_L = 0.8 M_⊙ (the minimum mass which can restore gas to the ISM within a Hubble time) and M_Bm = 3 M_⊙ (the minimum mass of a binary system which can give rise to a SN Ia event). These stars eject their chemical by-products through stellar winds and end their lives as white dwarfs. The second integral refers to the contribution from type Ia SNe, assuming that these events originate from C-O white dwarfs in binary systems exploding by C-deflagration after reaching the Chandrasekhar mass. This implies a maximum primary mass of 8 M_⊙, and therefore a maximum binary mass of M_BM = 16 M_⊙. The parameter A represents the fraction of binary systems with total mass in the appropriate range that actually give rise to a SN Ia event. In essence, A is a free parameter. Chemical evolution models of the Milky Way constrain the value of this parameter to around A ∼ 0.04 − 0.05 by ensuring compatibility with the observed present-day rates of SNe I and SNe II in our galaxy (François et al. 2004). However, this value results in an unacceptably high abundance of Fe and Fe-peak elements in our models. Therefore, we allow this parameter to take different values (0.015, 0.02, 0.03, 0.05) and constrain it a posteriori by comparison with abundance ratios and SNe rates (see also the discussion in de Plaa et al. 2007). The distribution function of the secondary mass fraction, f(μ), is assumed to follow a power law in μ with exponent γ = 2. A complete description of all the quantities involved in the computation of the SNe I rate can be found in Greggio & Renzini (1983) and Matteucci & Greggio (1986). Note that in this scheme, it is the mass of the secondary star (M_2) that sets the clock for the explosion. This implies a specific delay time distribution (DTD) for the explosions that may not represent reality (see § 4.2). The third integral represents the mass restored by stars in the mass range 3−16 M_⊙ which are either single, or, if binaries, do not produce a SN Ia event. These stars end their lives as white dwarfs (M < 8 M_⊙) or as SNe II (M > 8 M_⊙). Finally, the last term represents the contribution of short-lived massive stars (M > 16 M_⊙) that explode as SNe II. The fact that we take into account the lifetimes of the stars implicitly involves a time delay for the different enrichment modes (AGB stellar winds, SNe I and SNe II) since the integrands in Eq. 9 are by definition zero whenever t < τ_M.

Before applying the GCE to the semi-analytic model, we tested the chemical evolution code separately. We ran simulations using only the chemical evolution algorithm and compared the results with simple models with analytic solutions, i.e., a closed box with either constant and continuous star formation or a single initial burst of star formation. We also compared our results with the output from other well-tested models, namely that of Fenner & Gibson (2003). Only after achieving satisfactory agreement in these tests did we proceed to implement the chemical evolution into the SAM.

Ingredients of the GCE

In the previous section, we introduced several fundamental quantities that determine the chemical enrichment of a galaxy's cold gas reservoir and described them in a qualitative manner. In what follows, we will point the reader to the different studies that quantify the ingredients of this model, and the values adopted for the simulations presented in the next section. We discuss the implementation in Appendix A.
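As a bridge between the formalism and the ingredients listed below, a small quadrature sketch shows how one of the single-star terms of Eq. 9 could be evaluated numerically once an IMF φ(M), a lifetime relation τ(M) and yields Q_mi(M) are supplied; the three functions here are placeholders for the specific choices described next, and the binary SN Ia term is omitted for brevity.

```python
import numpy as np

def single_star_ejecta_rate(t, psi, tau_of_M, Q_mi, phi, M_lo, M_hi, n_steps=200):
    """Rate at which element i is returned at time t by single stars in [M_lo, M_hi]:
    an integral over M of  psi(t - tau(M)) * Q_mi(M) * phi(M)  dM,
    with the integrand set to zero whenever t < tau(M) (the star has not died yet)."""
    masses = np.linspace(M_lo, M_hi, n_steps)
    integrand = np.zeros_like(masses)
    for j, M in enumerate(masses):
        t_birth = t - tau_of_M(M)
        if t_birth >= 0.0:                 # only stars that have already died contribute
            integrand[j] = psi(t_birth) * Q_mi(M) * phi(M)
    return np.trapz(integrand, masses)
```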
Initial Mass Function

The stellar initial mass function that we use is based on the parameterization of Chabrier (2003),

φ(m) = A exp[−(log m − log m_c)² / (2σ²)]   for m ≤ 1 M_⊙,
φ(m) = B m^(−x)                              for m > 1 M_⊙,   (11)

where in the standard Chabrier IMF, x = 1.3, σ = 0.69, m_c = 0.079 M_⊙, and the proportionality constants take the values A = 0.9098 and B = 0.2539 after normalisation in the mass interval 0.1 − 40 M_⊙. This IMF differs somewhat from the standard power laws of Salpeter (1955) and Kroupa et al. (1993) often used in the literature. The reason for this choice is consistency with the stellar population synthesis models that are used to predict magnitudes and colours in the simulations, since those models use the Chabrier IMF (Bruzual & Charlot 2003). Note that this expression is for the IMF by mass. We show below that the results with this IMF were not entirely satisfactory. We therefore explored different slopes (x = 1.2, 1.1, 1.0) and upper mass limits (M_U = 60, 100, 120 M_⊙). The values of the normalisation constants A and B for different values of x and M_U are computed by requiring continuity of the two branches at 1 M_⊙ together with the overall normalisation of the IMF over the adopted mass interval (0.1 M_⊙ − M_U). We note that these alternate values for the slope are within the observational uncertainties (Chabrier 2003), and certainly do not represent a radical departure from the observed local IMF.

Stellar lifetimes

The adopted relation between the evolutionary lifetimes and the stellar mass is that of the Padova tracks for solar metallicity (Bertelli et al. 1994). In principle the lifetimes depend not only on mass but also on metallicity. Nevertheless the difference in stellar ages for different metal abundances is smaller than the age binning in the grid that stores the star formation history of the galaxies in our simulations (see Appendix A).

Stellar yields

Stellar yields are the amount of material that a star can produce and eject into the ISM in the form of a given element, and are clearly one of the most important ingredients in any chemical evolution model. These yields are the quantities Q_mi in the equations above. In this work we adopt different nucleosynthesis prescriptions for stars in the different mass ranges.

Low and intermediate mass stars (0.8 < M/M_⊙ < 8) produce He, C, N, and heavy s-process elements, which they eject during the formation of a planetary nebula. The yields that we adopt are from Karakas & Lattanzio (2007, hereafter KL07).

Massive stars (M > 8 M_⊙) produce mainly α-elements (O, Na, Ne, Mg, Si, S, Ca), some Fe-peak elements, light s-process elements and r-process elements. They explode as core-collapse Type II SNe. We adopt the yields from Woosley & Weaver (1995, hereafter WW95). Note that the upper mass limit in this study is 40 M_⊙.

Type Ia SNe are assumed to be C-O white dwarfs in binary systems, exploding by C-deflagration after reaching the Chandrasekhar mass via accretion of material from the companion star. They mainly produce Fe and Fe-peak elements. The yields we adopt are from Nomoto et al. (1997, hereafter N97), model W7. When calculating the contribution of SNe Ia, we assume that the primary star also enriches the medium as a normal AGB star prior to the SN event.

Except for the SN Ia yields, which are given only for solar metallicity, we use metallicity-dependent yields, namely those tabulated for Z = 0.0002, 0.004, 0.02, interpolating when necessary but never extrapolating. Whenever the metallicity falls below or above the limiting values, we use the yields corresponding to the minimum or maximum Z respectively.
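The metallicity handling of the yield tables can be written down in a couple of lines; in the sketch below the tabulated metallicities are those quoted above, while the choice of linear interpolation between grid points is an assumption made only for illustration.

```python
import numpy as np

# metallicities at which the adopted yield tables are provided
Z_GRID = np.array([0.0002, 0.004, 0.02])

def interpolate_yield(Z, yield_table):
    """Yield Q_mi at metallicity Z: interpolated between the tabulated values and
    clamped to the minimum/maximum tabulated Z (never extrapolated).
    'yield_table' holds the yield of one element for one stellar mass at Z_GRID."""
    Z_clamped = np.clip(Z, Z_GRID[0], Z_GRID[-1])
    return np.interp(Z_clamped, Z_GRID, yield_table)
```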
Note that unlike in some recent studies of galactic chemical evolution (François et al. 2004; Pipino & Matteucci 2004, 2006; Nagashima et al. 2005a,b) we do not alter the yields in any way in our standard model. We want to see if we can fit the data with as few degrees of freedom as possible.

Delay Time Distribution formulation for SNe Ia

As mentioned before, and shown in § 4.2, the SN Ia model described in the previous subsection does not seem to be the best representation of this phenomenon. In order to test other models, we implemented the delay-time-distribution (DTD) formalism developed by Greggio (2005). In this scenario, the SN Ia rate is described by

R_Ia(t) = k_α ∫ from τ_i to min(t, τ_x) of A(t − τ) ψ(t − τ) DTD(τ) dτ,

where ψ(t) is the star formation rate, τ_i is the minimum delay time for the SN Ia events, which we assume to be equal to the lifetime of an 8 M_⊙ star, τ_x is the maximum delay, equal to the lifetime of a 0.8 M_⊙ star, and k_α is the number of stars per unit mass in a stellar generation, obtained by integrating the IMF over the full mass range M_L − M_U. Finally, A(t − τ) is the fraction of binary systems which give rise to Type Ia SNe and may, in principle, evolve in time, but here we will assume it to be constant. It should be noted that in this case, A(t − τ) is the fraction relative to the full mass range defined by the IMF (M_L − M_U) and not only the mass range 3 − 16 M_⊙ as before. To ease the comparison with the previous model we will define A(t − τ) = A f_3−16, where f_3−16 is the fraction of stars in the 3 − 16 M_⊙ mass range (defined by the IMF) and A has the usual meaning. This formulation allows for different SNe Ia models depending on the DTD(τ) used. In particular we have chosen the distribution favoured by Mannucci et al. (2006), and parametrized it in a similar way to Matteucci et al. (2006), with the delay time τ expressed in Gyr and τ_0 = 0.0851 Gyr; the distribution function is normalised so that it integrates to unity over the allowed range of delay times. In Figure 1 we show the behaviour of the implemented DTD. Note that for a single burst of star formation, about half of all SN Ia explosions occur within the first 100 Myr.

RESULTS

In this section we present the first results of our model and compare them with stellar population studies of a variety of early-type galaxies in the local universe. For this purpose, we ran simulations for a grid of dark matter haloes of different masses, ranging from 10^11 to 10^13 M_⊙, using both the original (instantaneous recycling) and new (full GCE) versions of the semi-analytic code. As mentioned before, we limit our analysis to the central galaxies of each DM halo. We selected early-type galaxies from our simulations according to the ratio of their bulge-to-total luminosity in the B-band; namely, we consider a galaxy to be an early-type when this ratio is larger than 0.4047 (Simien & de Vaucouleurs 1986). This selection encompasses both elliptical and S0 galaxies. Unless otherwise noted, the model results presented in this section are always for central early-type galaxies. We begin with a comparison of the results of our new model with those of the S08 SAM. We then proceed to compare the predictions of the new model with observations. For the purpose of this comparison, we will only show model galaxies with masses above 10^9 M_⊙ since the formation history for galaxies below this mass cannot be accurately resolved given the mass resolution of the dark matter trees (see §2.1).
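Before turning to the results, note that numerically the DTD formulation above amounts to a discrete convolution of the star formation history with a normalised delay-time distribution. The sketch below illustrates this with a schematic bimodal DTD (a prompt component plus an extended tail) standing in for the Mannucci et al. (2006) distribution, whose exact parametrization is not reproduced here; the constants A and k_α are treated as fixed numbers, as in the text.

```python
import numpy as np

def sn_ia_rate(t_grid, sfr, dtd, A=0.03, k_alpha=1.0, tau_i=0.04, tau_x=13.0):
    """SN Ia rate R(t) ~ k_alpha * A * sum over tau of sfr(t - tau) * DTD(tau) * dtau,
    restricted to delays between tau_i and tau_x (Gyr); t_grid assumed uniform."""
    dt = t_grid[1] - t_grid[0]
    taus = np.arange(tau_i, tau_x, dt)
    dtd_vals = np.array([dtd(tau) for tau in taus])
    dtd_vals /= np.sum(dtd_vals) * dt                # normalise the DTD to unity
    rate = np.zeros_like(t_grid)
    for k, t in enumerate(t_grid):
        for tau, d in zip(taus, dtd_vals):
            t_birth = t - tau
            if t_birth >= t_grid[0]:
                rate[k] += np.interp(t_birth, t_grid, sfr) * d * dt
    return k_alpha * A * rate

def bimodal_dtd(tau, tau_0=0.0851):
    """Schematic prompt-plus-tail DTD, for illustration only (not the published form)."""
    prompt = np.exp(-0.5 * ((np.log10(tau) - np.log10(tau_0)) / 0.2) ** 2)
    tail = 0.05 * np.exp(-tau / 3.0)
    return prompt + tail
```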
Impact of the new GCE modelling in galaxy observables The new chemical evolution modelling affects the physics in the SAM in at least three ways: changing the metallicity of the hot gas changes the cooling rates; changing the metallicity of the cold gas changes the amount of dust, and therefore observed colours and magnitudes; and the metallicities themselves and their evolution change as well.The original version of the S08 SAM did very well at reproducing several key properties of galaxies and it is important to verify that this is still the case after implementing the detailed chemical evolution model.For this purpose we will compare simulations with three different 'flavors' for the models: the original SAM from S08, a SAM+GCE with standard parameters (i.e.x = 1.3 and A = 0.05), and a SAM+GCE with parameters that best fit (best-fitting) the abundance ratios and metallicities of observed galaxies (x = 1.1 and A = 0.015, see § 4.2). A fundamental feature that any good galaxy formation model must reproduce is the luminosity function or stellar mass function of galaxies.Given the distribution function of dark matter halos and sub-halos predicted by CDM, the relationship between stellar mass and halo mass implies a specific stellar mass function.The required stellar mass to halo mass relationship, in the form of the fraction of baryons in the halo that are converted to stars in a galaxy, has been derived by Wang et al. (2006) and Moster et al. (2009).In Figure 2 we show the stellar mass fraction as a function of halo mass for the SAM with and without GCE and compared with the empirical relation obtained by Moster et al. (2009).The agreement between all 'flavors' of the models (S08, standard parameters, best-fitting) and the observations is excellent.This implies that the stellar mass function in our new models will be nearly identical to that presented in S08, which was shown to agree well with observations. In Figure 3, we show the average star formation histories (SFH) of our model early-type galaxies, for galaxies with different present-day stellar masses.As pointed out in previous studies (see, e.g., De Lucia et al. 2006;S08;Trager & Somerville 2009, hereafter TS09), SAMs with AGN feedback do qualitatively reproduce a downsizinglike trend (i.e., the higher mass galaxies have shorter SF timescales and less extended SFH).This is still true in our new models. Another very important observational quantity to reproduce is the colour-magnitude diagram (CMD) of galaxies.In Figure 4 we show magnitudes and colours of earlytype galaxies in the SAM with and without GCE.Here there is one caveat regarding the calculation of the galaxy luminosities.The stellar population models that we use to predict the colours and magnitudes (Bruzual & Charlot 2003) use a fixed standard Chabrier IMF, while we allow the slope of this IMF to change when calculating the chemical evolution.Although this is not self-consistent, such a minor change in the IMF parameters should not significantly affect the predicted colours or magnitudes since the earlytype galaxies studied here are dominated by old populations for which high-mass stars are of little importance (C.Conroy, private communication).We divide the CMD into red and blue regions using the magnitude-dependent cut of Baldry et al. 
(2004).The galaxies form two clear groups: the majority in a bright red sequence and a few in a fainter blue cloud.The "original" and "best fit" models agree quite well with the observed CMD, while the luminous galaxies in the "standard model" are slightly too blue. In the previous version of the SAM, the fraction of cold gas relative to stars in galactic discs at the present time was used to calibrate the models by comparison with the observational estimates of Bell et al. (2003) for morphologically late-type galaxies (see Figure 5 in S08).This property is well reproduced by all test cases after implementing the GCE.Here we also compare the gas fraction of our galaxies, both early-type and discs, with those from Kannappan (2004) as a function of u − r colour, as shown in Figure 5. The agreement in the slope, scatter and zero-point of the relation is quite good for all models, especially when the new model of galactic chemical evolution is included.The first evidence for the mass-metallicity relation can be seen when looking at the metallicity distribution function of galaxies of different masses, which we show in Figure 6 for our three test cases.All of the models agree qualitatively, showing an increasing mean of the distribution as the mass range increases.At a given mass, however, the distributions for the best-fitting model are shifted to higher metallicities since galaxies in this simulation are more metal rich (see below). Summarising, we have seen that including a detailed chemical evolution model in the SAM has a minor effect on the predicted formation histories and present-day properties of galaxies, and therefore does not require a re-calibration of the free parameters of the model.In the next subsection we will investigate the predicted metallicities and abundances, which are indeed affected by the GCE. Chemistry of Early-type Galaxies The main effect of the new treatment of chemical evolution is reflected in the metallicity and abundance ratios of the galaxies.Most SAMs reproduce fairly well the massmetallicity relation of galaxies (with effective yield treated as a free parameter), but to date they have been unsuccessful in fitting the slope of the mass-[α/Fe] relation (e.g., Nagashima et al. 2005b;Pipino et al. 2008).This is the main challenge that we address in this study.The galaxy sample used for comparison is that described in Trager et al. (2000a).This sample has been reanalysed using the updated stellar population synthesis method presented in Trager et al. (2008), which is sensitive to age, metallicity and abundance ratios.In the current study, we use models based on the Bruzual & Charlot (2003) models with index variations due to abundance ratios taken from Lee et al. (2009).Inferred stellar population parameters are tabulated in Appendix B. When making the comparisons, we use the stellar mass of the simulated galaxies and the inferred dynamical mass of the observed ones.This should not introduce a significant bias since the dynamical mass is a good tracer of the stellar mass within one effective radius for most early-type galaxies (Cappellari et al. 2006).The abundances presented here were normalised to the solar values from Grevesse et al. (1996).We have also computed the present epoch SNe Ia and II rates for our galaxies and compared them with the results of Mannucci et al. (2005) and Sullivan et al. (2006). 
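For reference, the bracket notation used throughout ([Z/H], [α/Fe]) is simply a logarithmic abundance ratio normalised to the Sun. A minimal helper is sketched below; the example solar value is a placeholder, not one of the Grevesse et al. (1996) numbers, and both ratios must be expressed in the same units (mass ratios here) for the normalisation to cancel correctly.

```python
import numpy as np

def bracket_ratio(m_X, m_Y, solar_X_over_Y):
    """[X/Y] = log10(m_X / m_Y) - log10(X/Y)_solar, with m_X and m_Y the masses
    (or mass fractions) of the two species locked in the stellar population."""
    return np.log10(m_X / m_Y) - np.log10(solar_X_over_Y)

# example: a mass-weighted [alpha/Fe] for one model galaxy, where m_alpha is the
# summed stellar mass in N, Na, Ne, Mg, Si and S and m_Fe that in iron;
# the solar ratio of 5.0 is purely a placeholder value
# alpha_fe = bracket_ratio(m_alpha, m_Fe, solar_X_over_Y=5.0)
```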
Dependence on IMF slope and SN Ia fraction In Figures 7 and 8 we show the relation between total metallicity ([Z/H]) and stellar mass (M⋆).From these figures, we see that in the "old" SAMs, galaxies tended to be too metalpoor compared to the observations (as seen in TS09), and implementing the detailed chemical evolution with the standard IMF improves the results only slightly.We explore the parameter space of the GCE equations to see if it is possible to improve the agreement with the observations.Specifically, the parameters allowed to vary are the fraction of binaries that give rise to SNe Ia (A in Eq. 9), the slope of the IMF above 1 M ⊙ (x in Eq. 11) and the upper mass limit of the IMF (MU in Eq. 9).However, we will only show the results for MU = 40 M ⊙ for the following reason.Given that our chosen SN II yields (WW95) are only tabulated up to that value, we are forced to assume that the yields relative to the initial mass remain constant and equal to those of a 40 M ⊙ star for stars above this value when running simulations with higher upper mass limits.This proves to be an unreliable assumption, as reflected for example in the nonmonotonic behaviour of the [α/Fe] ratio as a function of the stellar mass of our galaxies, in serious disagreement with observations.Other properties, such as the SNe Ia rate, are not significantly affected by this parameter, as expected. We also note that, even though we have varied the slope of the IMF, the supernova feedback efficiency remains the same because the prescription for the SN feedback energetics and the SN chemical enrichment are decoupled; the SN feedback efficiency is set manually as a free parameter (see Eq. 3), independent of the IMF.In this sense, the model is not entirely self-consistent.However, this choice makes it easier to interpret the effect of changing the IMF in the models. We now compare the predictions for the observed galaxy sample, the original SAM, and the new SAM with GCE with a standard Chabrier IMF, a shallower IMF (x = 1.1) and two choices of the parameter A (0.015 and 0.05).The use of a shallower IMF results in an upwards shift in the zero-point of the mass-metallicity relation, bringing it into much better agreement with the observations.This increase in metallicity with a shallower IMF is expected since more massive stars are produced and therefore the gas is enriched more efficiently.The metallicities of the model galaxies show almost no dependence on the parameter A, which is not surprising since this parameter mainly controls the ratio of type Ia to type II supernovae, affecting abundance ratios but not the overall metallicity (in essence the production of αelements and Fe-peak elements compensate for one another). A primary advantage of our new model is that we can now calculate abundance ratios for our galaxies, which gives us yet another property to compare with observations and set further constraints on the models.This is shown in Figures 9 and 10, where we plot the [α/Fe] ratio against stellar mass for the same parameter choices as before.For the simulated galaxies, we consider the abundance of α-elements to be the composite abundance of N, Na, Ne, Mg, Si, and S (cf.Trager et al. 
2000a). If we assume a value of the parameter A ∼ 0.05 commonly used in the literature, the abundance ratios are far too low. The overall values of [α/Fe] can be raised by decreasing the value of A (i.e. producing fewer SNe Ia and consequently less iron). However, even with this decreased A and the standard IMF, we see that the relation is too flat or even has a slightly negative slope, and for the galaxies at the high mass end the [α/Fe] ratio is insufficiently enhanced. On the other hand, a shallower IMF, combined with a lower value of A, produces model galaxies in far better agreement with observations. The flatter IMF increases the slope of the relation while a low A brings up the zero-point. Moreover, it is interesting to note that lowering A also increases the slope (at any fixed IMF). This small steepening of the [α/Fe]-mass relation is due to the metallicity dependence of the yields, since the higher the initial metallicity, the higher the [α/Fe] in the yields. A lower fraction of SNe Ia (and consequently more Type II) implies a slightly faster overall enrichment, and therefore galaxies spend more time forming stars in a regime of enhanced [α/Fe] (higher metallicity yields). In summary, we had expected that the inclusion of AGN feedback in the semi-analytic models might solve the problems that previous studies have encountered in trying to reproduce the observed trend between mass and [α/Fe] ratio, because the quenching due to AGN does lead to more massive galaxies having shorter formation timescales in the models. However, apparently the effect of 'downsizing' on the trend of [α/Fe] is very small and a flatter IMF is required to achieve agreement between the models and the observations. This does not undermine the potential importance of AGN feedback, which appears to be a promising mechanism for solving many of the other problems experienced by earlier generations of models (such as the overcooling problem). In any case, it is encouraging that with minor variations (within the observational uncertainties) in the chemical evolution parameters, we can for the first time obtain very good agreement with the observed mass-[α/Fe] relation in a semi-analytic model. One concern is that the comparison between our models and the observations is not strictly rigorous since we are showing stellar mass-weighted abundances for the models, while stellar population studies derive abundances from line-strength indices from integrated spectra, which are themselves light-weighted quantities. However, TS09 have shown that the SSP-equivalent (absorption-line-weighted) metallicity correlates very well with its mass-weighted and light-weighted counterparts. In future work, nevertheless, we will synthesise line strengths for the galaxies in our simulations and calculate abundance ratios in the same way as is done in the observational data. One additional worry is the effect of limited aperture size on the comparison, as early-type galaxies are well known to have significant line-strength gradients. These gradients imply, however, mild metallicity gradients but no abundance ratio gradients whatsoever (e.g., Davies et al. 1993; Mehlert et al. 2003; Sánchez-Blázquez et al. 2007; TS09). Therefore we are confident that trends in [α/Fe] with mass are trustworthy. We note here (as described in Appendix B) that the data plotted in the figures have been constructed to appear as if the galaxies were at the distance of the Coma cluster and observed through fibre apertures of diameter 2.″7 (see Trager et al. 2008).
While this does not eliminate gradient effects on the inferred metallicities, it reduces their magnitude to an offset of roughly −0.1 dex (TS09).

Supernova rates provide further independent constraints on our models. We calculate the predicted supernova rates for our model galaxies and compare them with those derived by Mannucci et al. (2005) and Sullivan et al. (2006). In Figures 11 and 12 we show the rates for Type Ia SNe (rate per unit stellar mass), and in Figures 13 and 14 we show the rates for Type II SNe. Here we show all the model galaxies regardless of morphology, since the early-type galaxies populate only the lower SSFR side of these diagrams, and both comparison samples include galaxies of all morphological types. The galaxies in these samples are from a mixture of field and small cluster environments, but so are our model galaxies.

The slope of the IMF has little effect on the predicted SN Ia rates; only the A parameter has a significant influence on the results. From Figures 11 and 12, we see that no combination of IMF slope and fraction of binaries that yield SNe Ia can fit the observations over the whole range of SSFR. However, it is interesting to notice that the SNe Ia rates of star-forming galaxies (SSFR > 10^−10.5) are very well matched by models with a high value of A, while a low value of A is a better match for passive galaxies (SSFR < 10^−10.5). This behaviour, which is seen regardless of the slope of the IMF, is almost certainly due to the chosen supernova Ia model. In the Greggio & Renzini (1983) formalism which we use, the delay time distribution (DTD) of the SN Ia explosions is given by a convolution of the distribution of secondary masses (in binary systems) and the lifetime of the secondary star. On the other hand, from the same observational data, Mannucci et al. (2006) derived a DTD with two components: a prompt peak and a later plateau, each encompassing half of the SNe Ia. Other authors have also reached similar conclusions about the delay-time distribution of Type Ia explosions (e.g. Scannapieco & Bildsten 2005; Dahlen et al. 2008). This bimodal DTD effectively enhances the production of Type Ia supernovae in star-forming galaxies, exactly where a higher fraction of SNe Ia is needed in our models. We therefore expect that using a two-population DTD with a significant prompt component will alleviate the differences between our model predictions and the observed Type Ia rates, and we show that this is the case below.

The Type II SN rates, on the other hand, show very good agreement with the observations over the whole range of SSFR. In this case, all variations of x and A give the same qualitative results, although the models with the 'standard' parameter values are slightly better. Nevertheless, as we have shown above, not all combinations of IMF slopes and values of A (the SN Ia producing fraction of binaries) produce model galaxies that agree with observed metallicities and abundance ratios. It is only for those models with a shallower IMF (x ∼ 1.1) and a lower SNe Ia fraction among binaries (A ∼ 0.01-0.02) that we can reasonably reproduce the full set of observations.
To summarise, after implementing detailed chemical evolution in the semi-analytic model of S08, we can now reproduce the mass-metallicity and mass-[α/Fe] relations for local early-type galaxies, provided we use a slightly flatter Chabrier IMF and a low fraction of binaries giving rise to SNe Ia. The predicted rates of Type Ia SNe show a strong dependence on the fraction of binaries that yield such an event (our parameter A) but not on the slope of the IMF, with the rates in star-forming galaxies better matched by a high value of A, while those in passive galaxies are better matched with a low value of A. However, for a standard IMF and a high value of A, the galaxies are a bit too metal-poor and, most importantly, the abundance ratios are extremely low, in severe disagreement with observations.

Bimodal Delay Time Distribution for Type Ia Supernovae

In the previous section, we speculated that a bimodal distribution with a prompt population of SN Ia explosions would give a better match to the observed supernova rates. Given the analytical nature of the DTD formulation (Greggio 2005), it is fairly straightforward to implement and test this hypothesis. We have implemented the DTD proposed by Mannucci et al. (2006). Figure 15 shows the four observational constraints used to test the models (metallicity vs. stellar mass, [α/Fe] vs. stellar mass, Type Ia SNR vs. SSFR, and Type II SNR vs. SSFR). Clearly the new double-peaked SN Ia DTD model gives a better match to the Type Ia SN rates, while maintaining the good agreement in the other galactic properties. However, for this model, the best-fitting parameters for the IMF slope and the SN Ia binary fraction are slightly different; specifically, x = 1.15 and A = 0.03. This value for the IMF slope is in even better agreement with some recent studies (Baldry & Glazebrook 2003; Wilkins et al. 2008) than our previous 'best' value of x = 1.1. Finally, in Figure 16 we show the abundance ratios of some individual elements for the galaxies in our best-fitting models using the classic SNe Ia recipe (x = 1.1 and A = 0.015) and the bimodal DTD (x = 1.15 and A = 0.03). With the exception of C and N, which are slightly higher for the latter, all the elements follow the same trends in both models. In particular Mg, even though it is under-abundant with respect to other α-elements, is the one element that best follows the observed trend of [α/Fe]. This leads us to believe that the abundances derived from the stellar population analysis may be predominantly driven by Mg, and may not necessarily reflect all of the α-elements. There is also an excess of Ni in the Fe-peak group and a decreasing trend of [C/Fe] with increasing galactic mass which is apparently not observed (Sánchez-Blázquez et al. 2003; Graves & Schiavon 2008). This should not be considered a flaw of the model, since the abundances of individual elements are very sensitive to the chosen yields. We expect that future line-strength observations, interpreted with next-generation stellar population models (such as Schiavon 2007; Lee et al. 2009), will provide an interesting test of our models, including the set of assumed yields (KL07 + WW95 + N97).

Dependence on other model parameters

We have also explored whether we could match the observations investigated above by varying the galaxy formation parameters of the SAM instead of the IMF and binary fraction parameters. For this purpose, we ran several simulations in which we modified the star formation efficiency (A_K in Eq. 2), the SN feedback efficiency (ǫ_SN,0 in Eq.
3) and the virial velocity below which ejection of reheated gas from the halo into the diffuse intergalactic medium becomes important (Veject in Eq. 4).When the star formation efficiency was increased by a constant factor, a few more massive galaxies were produced and the slope of the mass-[α/Fe] relation increased slightly but the slope and zero point of the mass-metallicity relation did not change.A higher SF efficiency implies that the cold gas is consumed more rapidly and therefore the timescale for star formation is shorter, which is why the mass-[α/Fe] relation was affected. If AK was allowed to increase with increasing galactic baryonic mass, a considerable number of very massive galaxies were produced and the slope of the mass-metallicity relation increased mildly (but not the zero point).On the other hand, if the factor decreased with increasing galactic mass, the trends remained the same but no galaxies above 10 11 M ⊙ were produced.This excess or lack of high mass galaxies arises because star formation in the biggest systems is either boosted or suppressed by this mass-dependent variation of the SF efficiency. The effects of reducing the SN feedback efficiency and the Veject parameter were roughly the same.In both cases the slope and zero point of the mass-metallicity relation increased slightly and the central galaxies of the DM haloes were on average more massive, but the mass-[α/Fe] relation did not change and an unrealistically large number of lowmass satellite galaxies was also produced.This excess of lowmass satellites is due to the fact that small galaxies retain their gas more efficiently when these parameters that control the SN feedback are decreased. Overall, the effect of changing these other parameters is not as strong as flattening the IMF, and also destroys the agreement with other well-calibrated observations, such as the luminosity function, the metallicity distribution function, and the cold gas fraction.We therefore conclude that our results are robust to the values of these free parameters. As a final test, we modified some of the yields.Specifically, we decreased the Fe yield of SNe II by half and increased the Mg yield by a factor of four.Reducing the Fe yield had very little effect, indicating that the bulk of the Fe comes from SNe Ia, as expected.Changing the Mg yield raised the zero point of the relations, as expected.However, neither of these changes affected the slope.Pipino et al. (2008) reached a similar conclusion about the yields and other parameters when exploring the parameter space in their models. DISCUSSION AND CONCLUSIONS We have implemented detailed galactic chemical evolution in a semi-analytic model, and use the resulting model to study the metal enrichment of early type galaxies in the local universe.The base SAM is that presented in Somerville et al. (2008).We take into account the effects of galaxy mergers, inflow of cold gas, and SN and AGN driven outflows, as well as the production of metals by SN Ia, SN II and AGB stars.Unlike most previous SAMs we discard the instantaneous recycling approximation by properly accounting for the finite lifetimes of the stars and also make use of metallicity dependent yields. 
We run our SAM+GCE simulations in a grid of dark matter haloes ranging over present-day masses of 10 11 to 10 13 M ⊙ .We allow the slope of the IMF and the fraction of binaries that produce a SN Ia event to vary, and compare our results with the observed trends of metallicity and abundance ratio ([α/Fe]) against stellar mass of the galaxies, as well as the supernova rate (both type Ia and II) as a function of specific star formation rate.Only the models with a shallow IMF (x = 1.1) and a low fraction of SN Ia from binaries (A ∼ 0.015) match all four observations of early-type galaxies simultaneously.A slightly flatter than standard IMF is necessary in order to produce more massive stars, which enrich the interstellar medium more efficiently, making the galaxies in our simulations become more metal rich and improving the agreement with the data.The production of more massive stars, along with the fact that the star formation histories are more extended in time as the galaxy mass decreases, helps to achieve the correct trend of increasing [α/Fe] with increasing galaxy stellar mass.However, it is also necessary to invoke a low fraction of SNe Ia to raise the zero-point of this relation.We also predict abundance patterns for a variety of elements for early-type galaxies at z = 0 in our fiducial model.These predictions will be interesting to compare with future observations.From studying the SNe Ia rates, we find evidence supporting a 'two-population' distribution for the type Ia explosions, since galaxies with high specific star formation rates are better matched by models with a high fraction of binaries that explode as SN Ia (A) while those with low SSFR require a low value of A. We tested whether the use of a more realistic (bimodal) delay-time distribution of type Ia supernovae would, in fact, improve the results.After implementing the DTD formulation for SNe Ia and using a bimodal distribution with a prompt peak and an extended plateau, we found very good agreement with the SN rates, while still matching the trends of [Z/H] and [α/Fe] with stellar mass, although the best values for the slope of the IMF changed slightly and the fraction of SNeIa binaries needed to be doubled.Our favored model is now one with a Chabrierlike IMF with a slope of x = 1.15, a SNe Ia binary fraction of A = 0.03 (relative to the 3 − 16 M ⊙ range, A ∼ 0.0014 relative to the full range of masses defining the IMF), and a bimodal delay-time-distribution for Type Ia SN events as proposed by Mannucci et al. (2006). We have also studied the effects of varying the galaxy formation parameters in the SAM, but found that we were unable to reproduce the observations in this way.We there-fore conclude that our results are robust to the values of the free parameters in the SAM.This is not the first time that a GCE model has been applied within a SAM.Nagashima et al. 
(2005a,b) have also constructed such a model, and their 'superwind' model resembles our model.They first obtained fairly good agreement with observations of ICM abundances in galaxy clusters, however the same models failed to reproduced the trend of increasing [α/Fe] with increasing galactic mass.One of the main differences between their models and ours is in fact the IMF.They use a Kennicutt IMF (x = 1.5) for quiescent star formation and a flat IMF (x = 0.0) for stars formed in bursts.This flat IMF is rather extreme, while the proposed modification in our models is small, in fact within the observational uncertainties.Namely, we require the same 'shallow' IMF (x = 1.15) for all modes of star formation.Another model that reproduces the observed scaling of abundance ratio with galaxy mass is that of Pipino & Matteucci (2004, 2006).However they consider a very different scenario for galaxy formation, the monolithic collapse scenario, and allow for galaxy mergers only in the form of a second infalling episode.More recently, Pipino et al. (2008) also coupled GCE to a SAM, but they also failed to match the mass-[α/Fe] relation.They claim that flattening the IMF can not solve this problem, in contradiction with our findings.It is worth mentioning that none of these models include AGN feedback; only the GalICS model used by Pipino et al. (2008) has some form of halo quenching simply by shutting down the flow of cold gas onto galaxies with masses larger than 10 11 M ⊙ .However, contrary to our expectations, we find that SF quenching by AGN is not a key factor in our success at reproducing the mass-[α/Fe] relation, even though the AGN feedback in our models leads to shorter formation times for the more massive galaxies.We find that a slight flattening of the IMF is essential to achieve agreement between the model and the observations.AGN feedback, nonetheless, is likely to play an important role in reproducing other galaxy observations, such as the stellar mass or luminosity function and color bimodality. Our best-fitting IMF, nonetheless, is consistent with observations; the slope is within the observational uncertainty of Chabrier (2003) and agrees remarkably well with the results of Baldry & Glazebrook (2003), who found that ultraviolet to near-infrared galaxy luminosity densities require an IMF with a slope of 1.15 ± 0.2.The same slope was found by Wilkins et al. (2008) when trying to reconcile the redshift evolution of the observed stellar mass density with the cosmic SFH using a constant and universal IMF.The agreement, however, holds only at low redshift.In a forthcoming paper, Wilkins et al. (in preparation) propose an evolving IMF as a plausible solution.Such an IMF should be strongly top-heavy at high redshift (Hopkins, private communication).van Dokkum (2008) also suggests an evolving Chabrier-like IMF based on comparing the evolution of the M/L ratios of early-type galaxies to their colour evolution, but in this case the change is in the characteristic mass (mc in Eq. 11) rather than the slope, making the IMF "bottomlight" at high redshift.Such evolving IMFs could, in principle, work in favour of the trends of [α/Fe] with stellar mass and SNR with SSFR since they produce either more SN II or fewer SN Ia progenitors at earlier times when massive early-type galaxies create most of their stars.This scenario remains to be tested, and moreover the issue of an evolving IMF is open to considerable debate given the large uncertainties on its constraints.On a different note, Meurer et al. 
(2009) have claimed evidence for an IMF that depends on galactic surface brightness (or surface density) as a plausible explanation for an observed variation in the Hα/FUV flux ratio. However, they invoke variations that are an order of magnitude larger than the deviation of our best-fitting slope from the standard value. Finally, very recently, Calura & Menci (2009) have claimed that a constant IMF cannot account for the trends of [Z/H] and [α/Fe] with velocity dispersion in elliptical galaxies and have proposed an IMF with a slope depending on the SFR (x = 1.35 for low star-forming systems and x = 1 for high star-forming systems) in order to explain them. However, their chemical evolution is not coupled to a semi-analytic model, but is computed a posteriori with SFHs extracted from the SAM of Menci et al. (2008), and therefore the flows of enriched gas from the galaxies into the halos and back again are not tracked, unlike in our model.

In future work we will apply this model to other questions such as the abundances of different components of spiral galaxies like the Milky Way (e.g. the disk, bulge, and stellar halo), abundances in clusters vs. the field, abundances in the intra-cluster gas, and the evolution of metals over cosmic time.

Table A1. Structure of the grid where the star formation and abundance histories are stored. All quantities are in Gyr.

Age range        Size of bins
0.00 - 0.12      0.01
0.12 - 0.40      0.02
0.40 - 1.12      0.04
1.12 - 2.72      0.08
2.72 - 11.68     0.32
11.68 -          1.28

The initial size of the bins is 0.01 Gyr because it is the maximum possible value of the time-step in our simulations.

Figure 2. The fraction of baryons in the form of stars as a function of halo mass for model galaxies in the SAM+GCE with the best-fitting parameters (x = 1.1 & A = 0.015, black squares), with standard parameters (x = 1.3 & A = 0.05, green squares), and in the original SAM without GCE (red squares). The blue lines mark the empirical relation, and 1-σ uncertainties, derived by Moster et al. (2009).

Figure 3. Smoothed average star-formation histories for model early-type galaxies in (from bottom to top) the SAM+GCE with the best-fitting parameters (x = 1.1 & A = 0.015), with standard parameters (x = 1.3 & A = 0.05), and in the original SAM without GCE, binned by the stellar mass of the galaxy at z = 0.

Figure 5. The gas fraction for model galaxies in the SAM+GCE with the best-fitting parameters (x = 1.1 & A = 0.015, black symbols), with standard parameters (x = 1.3 & A = 0.05, green symbols), and in the original SAM without GCE (red symbols) as a function of u − r colour. Stars and triangles depict early-type and disc galaxies, respectively. The thick blue line marks the median of the sample from Kannappan (2004), and the thin lines mark the 1-σ deviation.

Figure 6. Average metallicity distributions for model early-type galaxies in (from bottom to top) the SAM+GCE with the best-fitting parameters (x = 1.1 & A = 0.015), with standard parameters (x = 1.3 & A = 0.05), and in the SAM without GCE, binned by stellar mass.

Figure 7. The relationship between metallicity and stellar mass for the galaxies in our simulations and in the observational sample of T00. Symbols - pink squares: original SAM; red crosses: SAM+GCE with x = 1.3 and A = 0.015; blue crosses: SAM+GCE with x = 1.3 and A = 0.05; black stars with error bars: galaxies from Trager et al.
(2000a), reanalysed as described in the text. Note the poor agreement of the model galaxies with the observations in all cases.

Figure 9. Relation between the [α/Fe] ratio and stellar mass. Symbols - red crosses: SAM+GCE with x = 1.3 and A = 0.015; blue crosses: SAM+GCE with x = 1.3 and A = 0.05; black stars with error bars: galaxies from Trager et al. (2000a), reanalysed as discussed in the text. Note again the poor agreement of the model galaxies with the observations.

Figures 11 and 12. Supernova rates compared with Mannucci et al. (2005) and Sullivan et al. (2006) as a function of specific SFR. Here we show all model galaxies, regardless of morphology. Red stars are SAM+GCE with x = 1.3 and A = 0.015; blue stars SAM+GCE with x = 1.3 and A = 0.05; black squares are observations from Sullivan et al. (2006) and black crosses from Mannucci et al. (2005). The conversion of galaxy type into specific SFR for the Mannucci et al. data points is the same as in Sullivan et al. (2006).

Table B1. Velocity dispersions, ages, metallicities, enhancement ratios and dynamical masses of the sample galaxies.
VARIATIONAL DENOISING OF DIFFUSION WEIGHTED MRI In this paper, we present a novel variational formulation for restoring high angular resolution diffusion imaging (HARDI) data. The restoration formulation involves smoothing signal measurements over the spherical domain and across the 3D image lattice. The regularization across the lattice is achieved using a total variation (TV) norm based scheme, while the finite element method (FEM) was employed to smooth the data on the sphere at each lattice point using first and second order smoothness constraints. Examples are presented to show the performance of the HARDI data restoration scheme and its effect on fiber direction computation on synthetic data, as well as on real data sets collected from excised rat brain and spinal cord. 1. Introduction. Observing the directional dependence of water diffusion in the nervous system can allow us to infer structural information about the surrounding tissue. Axonal membranes and myelin sheath present a barrier to molecules diffusing in directions perpendicular to the white matter fiber bundles whereas in directions parallel to the fibers, the diffusion process is less restricted [10]. This results in anisotropic diffusion that can be observed using magnetic resonance (MR) measurements by the utilization of magnetic field gradients [47]. In general, the acquired MR signal depends on the strength and the direction of these diffusion sensitizing gradients. Repeated measurements of water diffusion in tissue with varying gradient directions provide a means to quantify the level of anisotropy as well as to determine the local fiber orientation within the tissue. In a series of publications, Basser and colleagues [6,7,8] have formulated an imaging modality called "diffusion tensor MRI (DT-MRI or DTI)" that employs a second order, positive definite, symmetric diffusion tensor to represent the local tissue structure. They have proposed several rotationally invariant scalar indices that quantify different aspects of water diffusion observed in tissue, similar to different "stains" used in histological studies [4]. Under the hypothesis that the preferred orientations of water diffusion will coincide with the fiber directions, one can determine the directionality of neuronal fiber bundles. This fact has been exploited to generate fiber-tract maps that yield information on structural connections in human [8,34,38,22] as well as rat brains [42,63,55,40,39] and spinal cords [54]. Despite its apparent success, DT-MRI has significant shortcomings when the tissue of interest has a complicated geometry. This is due to the relatively simple tensor model that assumes a unidirectional -if not isotropic-local structure. In the case of orientational heterogeneity, DT-MRI technique is likely to yield incorrect fiber directions, and artificially low anisotropy values. This is due to the Gaussian model implicit in DTI that allows only one preferred direction for water diffusion. In order to overcome these difficulties several approaches have been taken. Qspace imaging, a technique commonly used to examine porous structures [13], has been suggested as a possible solution [59]. However this scheme requires strong gradient strengths and long acquisition times [5], or significant reduction in the resolution of the images. Q-space imaging requires many images to be acquired since the space of diffusion encoding gradients is sampled on a 3D lattice. As a more viable alternative Tuch et al. 
have proposed to do the acquisition such that the diffusion sensitizing gradients sample the surface of a sphere [53,52]. In this high angular resolution diffusion imaging (HARDI) method, one does not have to be restricted to the tensor model and instead, it is possible to calculate diffusion coefficients along many directions. This method does not require more powerful hardware systems than those required by DT-MRI. Several groups have already performed HARDI acquisitions in clinical settings and have reported 43 to 126 different diffusion weighted images acquired in 20 to 40 minutes of total scanning time [30,52,33], indicating the feasibility of the high angular resolution scheme as a clinical diagnostic tool. In Figure 1, we present a matrix of simulated voxels showing renderings of DTI-based estimates of orientation and HARDI-based orientation estimates computed using the scheme we present in Section 2.1. The orientation heterogeneity is evident from the HARDI-based renderings at each voxel since HARDI measurements can resolve multiple dominant directions of molecular diffusion in a voxel, a lacking feature of DTI. Since the HARDI data acquisition is very nascent, not many techniques of processing the HARDI data have been reported in the literature. In the following section we will review the recently reported techniques of HARDI data denoising, which may be done prior to further analysis or visualization.

1.1. Review. We will first briefly describe the physics of acquisition and then point to various recent restoration techniques followed by methods for computing anisotropy measures from HARDI. This will be followed by an overview of our method.

1.1.1. Physics of Diffusion MR and HARDI Acquisition. The random process of diffusion of water molecules is described by the diffusion displacement PDF p_t(r). This is the probability that a given molecule has a diffusion displacement of r after time t. The relation between the measured MR signal and the diffusion displacement PDF is given by [13]

p_t(r) = ∫ [S(q)/S_0] exp(−i q · r) dq,   (1)

where S(q) is the MR signal when a diffusion gradient pulse of strength G and duration δ is applied, yielding the wave vector q = γδG, where γ is the gyromagnetic ratio for protons, and S_0 is the image acquired with no diffusion encoding gradient applied. The above formula indicates that water displacement probabilities are simply the Fourier transform of S(q)/S_0. It is the orientational modes of p_t(r) that are taken to be the underlying fiber directions. The HARDI processing proceeds by acquiring diffusion weighted images with many diffusion encoding gradient directions, effectively sampling a spherical shell of the q-space (the space of diffusion encoding gradients) as described by Tuch [51]. It is desired that this sampling minimize the average angle between gradient directions so that the diffusion signal may be accurately reconstructed. The gradient direction for each image has been chosen to correspond to the vertices of an icosahedron which has been repeatedly subdivided. Our data sets include diffusion-weighted images acquired with the application of diffusion gradients along 81 or 46 directions in addition to one image with no diffusion weighting. Since the process of diffusion is known to have antipodal symmetry [30], we need to sample only one of the hemispheres in q-space.

1.1.2. Restoration. Processing of HARDI data sets has received increased attention lately and a few researchers have reported their results in the literature.
The use of spherical harmonic expansions have been quite popular in this context since the HARDI data primarily consists of scalar signal measurements on a sphere located at each lattice point on a 3D image grid. Tuch et al. [53,52] developed the HARDI acquisition and processing and later Frank [30] showed that it is possible to use the spherical harmonics expansion of the HARDI data to characterize the local geometry of the diffusivity profiles. Although elimination of odd-ordered terms and the truncation of the Laplace series provide some level of smoothing, there is no discussion of smoothing the data across the lattice points. Chen et al. [20] find a regularized spherical harmonic expansion by solving a constrained minimization problem. However the expansion is a truncated spherical harmonic expansion of order four, restricting the level of complexity that can be modeled using this approach. In [33], Jansons and Alexander described a new statistic, persistent angular structure, which was computed from the samples of a 3D function. In this case, the function described displacement of water molecules in each direction. The goal in their work was to resolve voxels containing one or more fibers. However, there was no discussion on how to restore the noisy HARDI data prior to resolution of the fiber paths. More recently, Descoteaux et al., [23,24], proposed an analytical solution to the reconstruction of the diffusion orientation distribution function (ODF). They model the signal using a spherical harmonic function of order eight and fit this model to the noisy data using a regularization constraint involving the Laplace-Beltrami operator for smoothing the HARDI data over the sphere of directions at each voxel. Their analytic form for the ODF reconstruction requires a numerical solution to a linear system and they do not consider regularization across the 3D lattice which can be important in order to obtain a piecewise smooth representation of the given HARDI data. Wiest-Daessle et al. [62,61] described several variants of non-local mean denoising applied to diffusion MRI. The approach which is applicable to HARDI involved considering the dataset as a vector-valued image, however this approach does not respect the directional relationship among the images. Assemlal et al. also employ only spatial regularization approaches to robustly determine the diffusion ODF [1] and PDF [2] fields. Savadjiev et al. [46] formulate a novel spatial regularization in terms of the underlying 3D curves which represent neuronal fibers. In contrast to HARDI denoising, DT-MRI denoising has been more popular and numerous techniques exist in literature. For sampling of the techniques used to denoise DT-MRI, we refer the reader to [50,17,60,57,58,19,27,9,28,3,32]. Most of these works use a linearized Stejskal-Tanner equation [47] describing the MR signal decay with the exception of Wang et al., [57,58]. Using the Stejskal-Tanner equation as is, is quite important in preserving the accuracy of the restored data and this was shown in the experiments in [58]. Another important constraint in the DTI restoration is the positive definiteness of the tensors, in this context, work in [18] introduced an elegant differential geometric framework to achieve the solution. The work in [57,58] and [60] chose alternative methods to impose the positive definiteness of the restored tensor fields namely, a linear algebraic and a PDE-based method respectively. 
Approaches to filtering based on the Riemannian geometry of the manifold of symmetric positive-definite matrices have been reported [31,14].

1.2. Overview of our modeling scheme. In this section we present a novel and effective variational formulation that will directly estimate a smooth signal S(θ, φ) and the probability distribution of the water molecule displacement over all directions p(θ, φ), given the noisy measurement

(2)    Ŝ(θ, φ) = S_0 e^{-b D(θ, φ)} + η(θ, φ),

where Ŝ is the signal measurement taken on a sphere of constant gradient magnitude over all (θ, φ), b is the diffusion weighting factor, D(θ, φ) is the apparent diffusivity as a function of the direction expressed by the polar and azimuthal angles on the sphere, and η(θ, φ) is Rician noise. The noise is due to additive Gaussian noise corrupting the complex-valued k-space measurements. However, for high signal-to-noise ratios we may consider η to be Gaussian distributed. A variational formulation for denoising using a data constraint based on the Rician likelihood was given by Basu et al. [9]. However, this leads to a highly nonlinear evolution equation since it involves the ratio of two Bessel functions. A modification to the non-local means algorithm which can handle Rician noise was presented by Descoteaux et al. [25]. However, neither of these approaches addresses smoothing over the spherical domain. In contrast, Clarke et al. [21] propose a robust method for estimating fiber orientation distributions in the presence of Rician noise, but they do not consider smoothness constraints over the voxel lattice. In this work we will assume a high SNR so that the Gaussian additive noise is a good assumption. Since we are performing high-field ex-vivo experiments, we can acquire many images and use averaging to increase the SNR so that this assumption is valid. The variational principle involves smoothing S values over the sphere and across the 3D image lattice. The key factor that complicates this problem is that the domain of the data at each voxel in the lattice is a sphere. One may use the level-set techniques developed by Tang et al. [48] to achieve this smoothing. However, when data sets are large, it becomes computationally impractical to apply the level-set technique at each voxel independently to restore these scalar-valued measurements on the sphere. Alternative approaches to solving variational problems over nonplanar domains have been described in recent literature. Cecil et al. [15] propose several numerical approaches to dealing with discontinuous derivatives due to periodic boundaries encountered when solving problems on S¹ and S². Liu et al. [37] proceed by finding a conformal mapping from the surface to the plane, then solving the problem in the 2D parameter space. Bogdanova et al. [12] presented explicit formulations of differential operators on parametric surfaces in terms of the Riemannian metric. Since our input data are sparsely distributed over a triangulated sphere (gradient directions are computed by subdividing an icosahedron), we simply use the spherical triangles as our computational domain. We arrive at a computationally efficient solution to this problem by using the finite element method (FEM) on the sphere and choosing local basis functions for the data restoration. Unlike the reported work on spherical harmonic basis expansion of the diffusivity function on the sphere [29,43,20], the FEM basis functions have local support and are more stable to perturbations due to noise in the data.
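As a concrete illustration of the measurement model in Eq. (2), the following sketch simulates one voxel's noisy HARDI signal. The diffusivity profile, b-value and noise level are illustrative assumptions only; the point is that the magnitude of a complex signal with Gaussian noise in each channel is Rician, and at high SNR this is well approximated by additive Gaussian noise on S.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_hardi_signal(D, b=1500.0, S0=1.0, sigma=0.02):
    """Simulate S_hat = S0*exp(-b*D) + eta for one voxel (hypothetical values).

    D     : apparent diffusivities (mm^2/s) along the sampled directions
    sigma : standard deviation of the Gaussian noise in each complex channel
    Returns the Rician-distributed magnitude signal.
    """
    s = S0 * np.exp(-b * D)
    real = s + sigma * rng.standard_normal(s.shape)   # noisy real channel
    imag = sigma * rng.standard_normal(s.shape)       # noisy imaginary channel
    return np.hypot(real, imag)                       # Rician magnitude

# Example: a single fiber population, diffusivity varying with angle to the fiber
theta = np.linspace(0.0, np.pi, 46)
D = 1.2e-3 * np.cos(theta) ** 2 + 0.3e-3 * np.sin(theta) ** 2
print(noisy_hardi_signal(D)[:5])
```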
From the denoised data we will compute a probability, p t (θ, φ), of molecular diffusion over a sphere of directions. The rest of this paper is organized as follows: Section (2) contains a variational formulation of the HARDI denoising problem including smoothing the scalar signal over a sphere of directions at each 3D lattice point and across lattice points, computation of probability of water molecular diffusion over the sphere of directions and several measures of anisotropy computed from the field of probability densities. In section (3), we present several experimental results depicting the performance of our algorithms on synthetically generated and real data sets. Finally, we conclude in section (4). Appendices A and B contain the details of the finite element basis used and the element as well as the global equations. 2. Formulation of the HARDI restoration. Normally, the diffusion weighted images are quite noisy especially when acquired using large field gradients. One can reduce some amount of noise by signal averaging for each gradient direction used. However, this by itself does not preserve the details in the data. We now present a variational formulation for effective denoising of the HARDI data. 2.1. Variational smoothing. We propose a membrane-spline deformation energy minimization for smoothing the measured imageŜ(x, θ, φ). The variational principle for estimating a smooth S(x, θ, φ) is given by where Ω is the domain of the image lattice and S 2 is the sphere on which the signal measurements are specified at each voxel. The first term of Equation (3) is a data fidelity term which makes the solution to be close to the given data. The degree of data fidelity can be controlled by the input parameter µ. The second term is a regularization constraint enforcing smoothness of the data over the spherical domain at each voxel. The minimizer of this energy term is a membrane spline over the sphere which is in Sobolev space H 1 (S 2 ) [49]. The third term is another regularization term which causes the solution to be piecewise smooth over the spatial domain (the 3D voxel lattice). The minimizer of this TV norm is in the space BV (R 3 ), functions of bounded variation [35]. g(x) inhibits smoothing across discontinuities in S over the lattice. More on this in section (2.3). The choice of membrane spline smoothness over S 2 is motivated by the partial volume effect in MRI. The signal at each voxel is the average over a volume much larger than a single axonal fiber. Within this volume there may be fibers of varying orientation and regions of isotropy. Though the diffusivity function may be nearly discontinuous over S 2 at a point near a fiber bundle, it is highly unlikely for the volume average to be so. For this reason, we do not use TV norm minimization over the spherical domain. 2.2. Finite element method based smoothing of S(θ, φ). We will consider a deformation energy functional which is a weighted combination of the thin-plate spline energy and the membrane spline energy, which is commonly used in computer vision literature for smoothing scalar-valued data in ℜ 3 (see McInerney et al., [41], Lai et al., [36]). In our case, the data at each voxel is an image on the sphere, S(θ, φ), so the problem is inherently 2 dimensional. The diffusion-encoding gradient directions are taken as the vertices of a subdivided icosahedron, to achieve a nearly uniform sampling of gradient directions over the sphere. 
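Since the gradient directions are the vertices of a repeatedly subdivided icosahedron, it may help to see one way such a direction set can be generated. The sketch below is a stand-in under stated assumptions: it recovers the triangular faces with a convex hull rather than an explicit face list, and it keeps the upper hemisphere to reflect antipodal symmetry; the 46- and 81-direction sets used in this paper come from such subdivisions restricted to a hemisphere.

```python
import numpy as np
from scipy.spatial import ConvexHull

def subdivided_icosahedron_directions(n_subdiv=1):
    """Gradient directions from a repeatedly subdivided icosahedron projected
    onto the unit sphere (a sketch, not the exact direction sets of the paper)."""
    phi = (1.0 + np.sqrt(5.0)) / 2.0
    v = []
    for s1 in (-1.0, 1.0):
        for s2 in (-phi, phi):
            v += [(0.0, s1, s2), (s1, s2, 0.0), (s2, 0.0, s1)]
    v = np.array(v)
    v /= np.linalg.norm(v, axis=1, keepdims=True)            # 12 icosahedron vertices

    for _ in range(n_subdiv):
        faces = ConvexHull(v).simplices                       # triangular faces of the current mesh
        mids = np.concatenate([(v[faces[:, i]] + v[faces[:, j]]) / 2.0
                               for i, j in ((0, 1), (1, 2), (0, 2))])
        mids /= np.linalg.norm(mids, axis=1, keepdims=True)   # project edge midpoints to the sphere
        v = np.unique(np.round(np.concatenate([v, mids]), 10), axis=0)

    # antipodal symmetry: keep directions with z >= 0 (equatorial points retained)
    return v[v[:, 2] >= -1e-9]

dirs = subdivided_icosahedron_directions(1)
print(len(dirs), "directions on the upper hemisphere")
```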
We map this piecewise planar approximation of the sphere to the global FEM coordinate system (u, v) by setting (u = θ, v = φ) for each gradient direction. This domain is triangulated so each face of the subdivided icosahedron will have a corresponding triangle in the (u, v) domain. A periodic boundary condition is imposed so that S(2π, v) = S(0, v). The area element in the (u, v) domain is du dv = sin φdθdφ. A similar mapping was used by McInerney & Terzopoulos [41] and Vemuri and Guo [56] for finite elements over a spherical domain. Note that, after mapping, the data can be seen as a height field over the (u, v) plane. The smoothness of the height function, z(u, v), will be enforced by the smoothing functional The weight on the membrane term is α and the weight on the thin-plate term is β. Once we have computed a smooth z(u, v), the result will then be mapped back to the image on the sphere, S(θ, φ). The data energy due to virtual work of the data forces, f , and virtual displacement, z, is By the principal of virtual work, the spline system is in equilibrium when the total work done by all forces is zero for all virtual displacements. The restoration of S(θ, φ) at each voxel is formulated as the energy minimization defining the equilibrium condition of the system. We use polynomial shape functions, N i , as a basis for the unknown smooth approximation, z, of the data over the u, v plane. We may write z as where N is a (1 × n) row vector, and q is a column vector of nodal variables. The domain, Ω, is partitioned into triangular elements, Ω j , each with their own local shape functions. The shape functions, in terms of local (barycentric) coordinates are given in Appendix A. For each element j, we have, In the rest of this section we will derive linear equations for the element potential energies, E j p , and data energies, E j d , in terms of the coefficients q j . Finally, we will assemble a global linear system, and solve for q. This will allow us to evaluate z(u, v) using Equation 7. The global potential energy is the sum of the energies of each finite element, where the local potential energy function for each element is given by The element strain vector (given by Dhatt and Touzot [26]) is which may be rewritten as Since q j is constant over each element we can derive the element stiffness matrix, K, in terms of D and B giving us the element strain energy as, We will model the data constraint as springs pulling z(u, v) toward the measured values z 0 (u, v), as illustrated in Figure(2). The force at each point will obey f = k(z − z 0 ), where k is the spring constant. For small displacements the spring constant, k = µ 2 where µ is the data constraint coefficient from Equation (3). The element deformation energy due to virtual displacement z(u, v) is given by We can split the deformation energy into two terms : We may now balance the deformation energy and data energy by solving the following linear system: The global linear system for smoothing the entire mesh may be obtained by appropriately summing the local element matrices, as detailed in Appendix B. The global system is symmetric, and has a sparse banded structure with 18 nonzero diagonal bands. Since the global matrix is positive-definite, an efficient solution to q is obtained via Cholesky factorization. 2.3. Spatial smoothing of S(x). We are now ready to describe the smoothing of the data across the 3D lattice. There are many existing methods that one can apply to this problem as discussed earlier. 
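Returning briefly to the per-voxel FEM system of Section 2.2 before describing the spatial smoothing, the final step there is the solution of the sparse, symmetric positive-definite global system by Cholesky factorization. The following sketch uses a small dense surrogate so it is self-contained; the size, the random "stiffness" matrix and the way the data springs enter are assumptions, not the actual assembled quintic-element matrices.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

rng = np.random.default_rng(2)
n_dof = 30                                   # pretend total number of nodal variables

A = rng.standard_normal((n_dof, n_dof))
K = A @ A.T + n_dof * np.eye(n_dof)          # surrogate SPD "stiffness" matrix
k_spring = 1.0                               # data-constraint spring weight (assumed)
K_data = k_spring * np.eye(n_dof)            # springs pulling q toward the measurements
f = k_spring * rng.standard_normal(n_dof)    # data forces (random stand-in)

c, low = cho_factor(K + K_data)              # Cholesky factorization of the SPD system
q = cho_solve((c, low), f)                   # nodal variables of the smoothing spline
print("residual norm:", np.linalg.norm((K + K_data) @ q - f))
```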
Smoothing the raw vector-valued data, S(x), is posed as a variational principle involving a first order smoothness constraint on the solution to the smoothing problem. Note that the data at each voxel are m measurements of S over a sphere of directions and can be assembled into a vector after the smoothing on the spherical coordinate domain has been accomplished. We propose a weighted TV-norm minimization for smoothing this vector-valued image S. This smoothing scheme reduces the effect of inter-region blurring, a drawback Gaussian convolution and isotropic diffusion suffer. Our method is a modified version of the work in Blomgren et. al., [11]. The novelty here lies in the choice of the weighting i.e., the coupling term between the channels. The variational principle for estimating a smooth S(x) is given by where, Ω is the image domain, µ is a regularization factor and m is the number of images. The first term here is the regularization constraint on the solution to have a certain degree of smoothness. The second term in the variational principle makes the solution faithful to the data in the L 2 sense. We have used the coupling function g(x) = 1/(1 + ||∇GA(x)|| 2 ) for smoothing HARDI, where GA is the generalized anisotropy index defined inÖzarslan et al., [45] and is computed from the variance of normalized diffusivity. For a more detailed discussion on GA, we refer the reader to [45]. This selection criterion preserves edges in anisotropy while smoothing the rest of the data. This anisotropy measure is chosen since it can be computed without explicitly computing the ODF, and it is our goal to smooth the data prior to ODF computation. An image of the coupling term for a typical slice is shown in Figure (3). Here we have used a different TV-norm than the one used by Blomgren and Chan [11]. The TV n,m norm is an L 2 norm of the vector of TV n,1 norms ( Ω ∇S i (x) 2 dx) for each channel. We use the L 1 norm instead, which is known to have better discontinuity preservation properties. The gradient descent form of the above minimization is given by The use of a modified TV-norm in equation (20) results in a looser coupling between channels than when using the TV n,m norm. This reduces the numerical complexity of Equation (21) and makes solution for large data sets feasible. The gradient descent of the vector-valued image smoothing using the T V n,mnorm TV n,m (S(x)) = m i=1 [TV n,1 (S i )] 2 presented in [11] is given by, Note that the TV n,m norm appears in the gradient descent solution of the vectorvalued minimization problem. Considering that our data sets consist of up to 82 images, corresponding to (magnetic field) gradient directions, calculating the TV n,m norm by numerically integrating over the 3-dimensional data set at each step of an iterative process would be prohibitively expensive. In contrast using our modified TV-norm described earlier leads to a more efficient solution. We are now ready to present the numerical solution to equation (21). Fixed-Point Lagged-Diffusivity. Since the m Equations(21) are coupled only through the function g, we can drop the subscript on S with no ambiguity (later the subscript will refer to spatial coordinates.) In this section we will discuss the numerical solution for each channel, S, of the vector-valued image S. Equation (21) is nonlinear due to the presence of ∇S i in the denominator of the first term. We linearize Equation (21) by using the method of "lagged-diffusivity" presented by Chan and Mulet [16]. 
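Before the fixed-point solver is detailed below, two pieces of this spatial smoothing step lend themselves to a short sketch: the edge-stopping coupling term g(x) built from the generalized anisotropy, and the modified (L1-coupled) TV norm. Array shapes and the use of central differences are assumptions of this sketch.

```python
import numpy as np

def coupling_g(GA):
    """Edge-stopping weight g(x) = 1 / (1 + ||grad GA(x)||^2) computed from a
    generalized-anisotropy volume (central differences; a sketch)."""
    gx, gy, gz = np.gradient(GA)
    return 1.0 / (1.0 + gx ** 2 + gy ** 2 + gz ** 2)

def modified_tv(S):
    """L1 coupling of per-channel TV norms, sum_i TV_{n,1}(S_i), used here in
    place of the root-sum-square TV_{n,m} norm of Blomgren & Chan."""
    total = 0.0
    for i in range(S.shape[-1]):                 # loop over diffusion-weighted channels
        gx, gy, gz = np.gradient(S[..., i])
        total += np.sqrt(gx ** 2 + gy ** 2 + gz ** 2).sum()
    return total

rng = np.random.default_rng(3)
GA = rng.random((8, 8, 8))
S = rng.random((8, 8, 8, 5))
print(coupling_g(GA).mean(), modified_tv(S))
```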
By considering ∇S to be a constant for each iteration, and using the value from the previous iteration we can instead solve Here the superscript denotes iteration number. Equation (23) can be recast in the form We now discretize the above equation in the following. Discretized Equations. To write Equation (24) as a linear system (AS t+1 = f t ), we discretize the Laplacian and gradient terms. Using central differences for the Laplacian we have We define the standard central differences to be We can rewrite Equation (24) in discrete form using the definitions in Equation (26) −S x−1,y,z − S x,y−1,z − S x,y,z−1 This results in a sparse linear system. The matrix of coefficients of S t+1 has 7 nonzero bands, and is given by The matrix in Equation (28) is symmetric and diagonally dominant. We employ the conjugate gradient descent to solve this system. The solution of Equation (28) represents one fixed-point iteration. This iteration is continued until |S t −S t+1 | < c, where c is a small prespecified tolerance. (1) can now be evaluated by computing the quantity S(q)/S 0 and performing the FFT. If the signal, S, is assumed to decay mono-exponentially from the origin of q-space (where S(0) = S 0 ), one can interpolate the signal values for arbitrary q. It is then possible to extrapolate (using the monoexponential decay model) from the spherical coordinate locations to grid points in cartesian space and then perform the FFT on this extrapolated data. The result is a probability of water molecule displacement over a small time constant. Since the quantity of interest is primarily the direction of water displacement, one can integrate out the radial component of p t (r) to get p t (θ, φ). This is commonly referred to as the diffusion orientation distribution function or simply ODF. Computing the ODF with this method is computationally expensive since it requires a 3D FFT at each voxel, and then a numerical integration for each direction. For the sake of efficiency, we will compute a probability profile (not the ODF), which will make processing large datasets feasible. This probability profile, written as p t (r, θ, φ) quantifies the probability that a water molecule diffuses through a sphere of fixed radius, r. A more detailed treatment byÖzarslan et al. Computing probabilities. The probability in Equation can be found in [44]. This scheme provides a fast way to calculate the orientation profiles. In our implementation we have evaluated the series given in [44] up to l = 6 terms since the reconstructed surfaces have very simple shapes which can be accurately represented using a truncated spherical harmonics series, and r 0 was set to 17.5µm. An alternative approach is to use the Funk-Radon Transform proposed by Tuch [51], however this introduces smoothing due to a spherical convolution step which would make evaluation of our denoising algorithm more difficult. To enhance the visual impact of the probability profiles we apply a sharpening transform to the distribution by subtracting a uniform distribution (sphere) from each profile, as shown in Figure (4). The radius of the sphere is the minimum of the probability over all directions. By performing this operation the direction of maximum probability becomes more apparent. 3. Experimental results. The denoising and rendering techniques described in the previous section were first applied to a synthetic HARDI dataset. This dataset was generated using the technique described byÖzarslan et al. in [45]. 
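Stepping back to the numerical scheme just described, the fixed-point lagged-diffusivity iteration can be sketched as follows (the synthetic-data experiment resumes below). This is a matrix-free stand-in for the banded system of Eq. (28): it uses numpy's central differences rather than the exact seven-band stencil, and boundary handling is left to numpy's defaults, so it should be read as an illustration rather than the paper's solver.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(4)
shape = (16, 16, 16)
S = rng.random(shape)                  # one noisy diffusion-weighted channel
g = np.ones(shape)                     # coupling term (set to 1 for simplicity)
mu, eps, n_outer = 4.0, 1e-3, 5        # data weight, gradient regularization, fixed-point steps

def grad_mag(u):
    gx, gy, gz = np.gradient(u)
    return np.sqrt(gx**2 + gy**2 + gz**2 + eps**2)   # regularized |grad S|

u = S.copy()
for _ in range(n_outer):
    w = g / grad_mag(u)                # lagged diffusivity, frozen during the linear solve

    def apply_A(x_flat):
        x = x_flat.reshape(shape)
        gx, gy, gz = np.gradient(x)
        div = (np.gradient(w * gx, axis=0) + np.gradient(w * gy, axis=1)
               + np.gradient(w * gz, axis=2))        # div(w grad x)
        return (mu * x - div).ravel()

    A = LinearOperator((u.size, u.size), matvec=apply_A, dtype=float)
    u_flat, info = cg(A, (mu * S).ravel(), x0=u.ravel(), maxiter=50)
    u = u_flat.reshape(shape)

print("mean absolute change after smoothing:", np.abs(u - S).mean())
```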
The dataset was designed to depict a region of curving fibers, a region of straight fibers, and a crossing between the two. A total of 81 acquisition directions are simulated with b = 1500 s/mm². A small sample of the probability surfaces p(θ, φ) computed from the synthetic data, taken from near the crossing region, is shown in Figure (5a). The real-valued synthetic data was corrupted with Gaussian noise of zero mean and variance σ² = 0.005. p(θ, φ) surfaces computed from the noisy data (without any denoising) are shown in Figure (5b). The same voxels are shown after smoothing over the spherical manifold at each voxel independently in Figure (5c), after smoothing across the voxel lattice only in Figure (5d), and after applying both smoothing steps in Figure (5e). The surfaces in Figure (5e) depict better smoothing than those in either of Figures (5c) or (5d), visually indicating that one needs to perform smoothing on the sphere and across the lattice and not just one or the other. From Figure (5b), it can be seen that the noise has a large influence on the smoothness of the distribution. As expected from the variational formulation, the spikes of noise present in the raw data have been smoothed while preserving the overall shape of the S profile. This smoothness is evident in the computed probability profiles as well.

Table 1. Error between ground-truth probabilities p̂ and probabilities computed from restored synthetic data when SNR = 14.

Table 2. Error between ground-truth probabilities p̂ and probabilities computed from restored synthetic data when SNR = 5.

A quantitative evaluation can be obtained by comparing the distributions computed from the smoothed data with the ground-truth by using the square root of the J-divergence (symmetrized KL-divergence) as a measure. This distance is defined as

(29)    d(p, q) = √(J(p, q)),    where    J(p, q) = ½ ∫ (p − q) ln(p/q) dω.

In Table (1) we compare the distances, d(p̂, p), between the densities p̂ computed from the original synthetic data and the densities p computed from the unrestored data, the data restored using only the FEM method, the data restored using only the TV-norm minimization, and the data restored using both techniques. For each technique, the mean distance, µ(d(p̂, p)), between the densities in corresponding voxels and the standard deviation, σ(d(p̂, p)), is presented. As evident from Table (1), the TV restoration has superior performance over the FEM technique in terms of the mean error. The combination of techniques has a lower mean error and standard deviation of the error than either the L²-norm based or the TV-norm based restoration. Note also that the error achieved by applying smoothing over the sphere prior to smoothing over the voxel lattice is lower than when the order is reversed. Since TV-norm minimization can be seen as a nonlinear diffusion process, performing the denoising in this order propagates smoothed intensities within homogeneous regions. In subsequent experiments we perform the denoising in this order. The denoising algorithm was applied to a dataset consisting of one non-diffusion weighted image and 46 diffusion weighted images of a rat spinal cord. Our data were acquired using a 14.1 Tesla (600 MHz) Bruker Avance Imaging spectrometer system with a diffusion weighted spin echo pulse sequence. Imaging parameters were: TR = 1400 ms, TE = 25 ms, Delta = 17.5 ms, delta = 1.5 ms, b_high = 1500 s/mm², b_low = 0 s/mm²; 28 averages were measured at a diffusion gradient strength of 0 mT/m for b = 0 s/mm², and 7 averages were measured at a diffusion gradient strength of 733 mT/m for each of the 46 diffusion weighting-gradient directions. The 46 directions were derived from the tessellation of a hemisphere.
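Referring back to the quantitative comparison above (the description of the real spinal-cord acquisition continues below), the error measure of Eq. (29) is simple to compute; in this sketch the integral over directions is replaced by a discrete sum over the sampled orientations, an assumption of the illustration.

```python
import numpy as np

def sqrt_j_divergence(p, q, eps=1e-12):
    """Square root of the symmetrized KL (J-) divergence between two discrete
    probability profiles sampled over the same set of directions (cf. Eq. 29)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    j = 0.5 * np.sum((p - q) * np.log(p / q))      # symmetrized KL divergence
    return np.sqrt(j)

# Example: ground-truth profile vs. a noisy estimate on 81 directions
rng = np.random.default_rng(5)
p_true = np.abs(np.sin(np.linspace(0.0, np.pi, 81))) + 0.1
p_noisy = np.clip(p_true + 0.05 * rng.standard_normal(81), 1e-6, None)
print(sqrt_j_divergence(p_true, p_noisy))
```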
The image field of view was 4.3×4.3×12mm 3 , acquisition matrix was 72×72×40. The approximate SNR for the S 0 and diffusion weighted images were 58 and 50 respectively. The parameter values used for the restoration were µ = 0.97, α = 0.02, β = 0.0, k = 1.0. Axial slices before and after denoising are shown for the non-diffusion weighted image in Figure (6) and one diffusion weighted image in Figure (7). The ringing artifacts visible near the sample boundary in Figure (6) have been noticeably decreased. Note that the edges in the image have been well preserved. Figures (8) and (9) show restored probability profiles from rat brain and spinal cord datasets. The brain data were acquired using a 17.6 Tesla (750 MHz) Bruker Avance Imaging spectrometer system with a diffusion weighted spin echo pulse sequence. Imaging parameters were : TR = 2000 ms, TE = 28 ms, Delta = 17.8 ms, delta = 2.2 ms, b high = 1500s/mm 2 , b low = 0s/mm 2 , 6 averages for each of the 81 diffusion weighting-gradient directions. The 81 directions were derived from the tessellation of a hemisphere. The image field of view was 150 × 150 × 300µm 3 , acquisition matrix was 100×100×60. The approximate SNR for the S 0 and diffusion weighted images were 206 and 177 respectively. Figure (8b) shows a detail from the rat hippocampus. The piecewise smoothing behavior of the algorithm is evident within the anisotropic hippocampus region. This region has been smoothed independently of the more isotropic surrounding regions. The spherical smoothing term has also suppressed some peaks of the distribution which were probably due to noise in the acquired data. Figure (8c) shows a detail from the rat corpus callosum. The data dependent coupling term in the restoration algorithm has permitted intraregion smoothing within the corpus callosum while preventing interregion smoothing. Note that the fiber directions within the corpus callosum have been well preserved. Figure (9) shows details from the rat spinal cord dataset. Here the noise reduction can be seen to enhance the coherence of structures in the inner core of grey matter. The data were processed by a MATLAB implementation of the algorithm running on a system with Intel Quad Core QX6700 2.66 GHz CPU and 4 GB RAM. The computation times for the finite element smoothing over the sphere depends on the number of diffusion-encoding gradient directions in the image acquisition. For the spinal cord data with 46 directions the time was 0.018 seconds per voxel, and for the brain dataset with 81 directions the time was 0.038 seconds per voxel. The computation time for the TV-norm minimization problem for each diffusion weighted image depends on the size of the acquisition matrix. For the spinal cord the resolution was 72 × 72 × 40 and the computation time per image was 28.3 seconds. For the brain dataset the resolution was 100 × 100 × 60 and computation required 82.4 seconds per image. 4. Conclusion. In this paper, we presented a new variational formulation for restoring HARDI data and an FEM technique for implementing the restoration. Our formulation of the HARDI restoration involves two types of smoothness constraints. The first is smoothness over the spherical domain of acquisition directions, and the second is smoothness between neighboring voxels in the Cartesian domain. The smoothing technique is capable of preserving discontinuities in the data. This was demonstrated on synthetic and real anatomical data. 
By using J-divergence as a measure of distance between distribution, we were able to show quantitatively that the combination of restoration techniques performs better than either technique alone. Appendix A. Local element coordinates. We now present the coordinate system for the local elements. For local elements, triangular elements are used with a barycentric coordinate system (γ, ξ, η). Each coordinate is in the range [0, 1] and γ + ξ + η = 1 for points on the triangle. The global coordinates, (u, v), can be computed from the local coordinates by The Jacobian, J, of the transformation between coordinate systems is defined by Integrals over the (u, v) domain to be converted to integrals over the local (ξ, η) domain by Using the Gauss-Radau quadrature rules given in [26], we can approximate the integral in Equation (33) by the summation where η i,j = r i (1 − ξ j ), wj j = a j (1 − ξ j ), ξ j , and wi i are given in Table 3. Derivatives over (u, v) can be written in terms of local coordinates by applying the chain rule: The partial derivatives of ξ and η with respect to u and v can be computed by inverting the Jacobian The inverse of J is given by We use the fifth order element shape functions given by Dhatt and Touzot [26]. This element guarantees C 1 (surface normal) continuity across triangles. The quintic basis functions are given by The quintic shape functions have nodal variables which can be written in terms of local or global coordinates as, The local and global nodal variables are related to each other by Appendix B. Global matrices. We wish to construct global matrices so that the energy balance over the entire FEM mesh is given by the linear system where K is a (6n × 6n) matrix since we have 6 variables per node. We will consider the simple case of 2 elements. Expanding the element Equation (19) in terms of nodal variables for element 0 we get where each q j l is a (6 × 1) column vector of nodal variables. We expand each K j to be (6n × 6n) by inserting rows and columns of zeros corresponding to each node of the mesh. Also expand f j to (6n × 1). The global K and q are obtained by summing the expanded matrices from each element in the mesh. For our 2 element example we have
Performances of an expanding insect under elevated CO 2 and snow cover in the Alps Abstract: Variations of phenology and distribution have been recently highlighted in numerous insect species and attributed to climate change, particularly the increase of temperature and atmospheric CO2. Both have been shown to have direct and indirect effects on insect species of various ecosystems, though the responses are often species-specific. The pine processionary moth, Thaumetopoea pityocampa (Lepidoptera, Notodontidae) is an important pest of conifers in the Mediterranean region, and has been recently shown to expand its altitudinal range in the Alps, including the mountain pine Pinus mugo as a novel host. We had the opportunity to transplant colonies of the pine processionary moth to a high elevation site well outside of the current range of the insect (Stillberg, Davos, Switzerland, 2180 m), where trees of the mountain pine have been grown for five years under ambient and elevated CO2 concentrations (ca. 570 ppm). The aim of the study was to evaluate the response of first instar larvae to extreme conditions of temperature and to an altered performance induced by the change of host metabolism under elevated CO2. Larval mortality and relative growth rate did not differ between host trees grown in ambient or elevated CO2. As extended snow cover may be an important mortality factor of larval colonies on the dwarf trees of mountain pine, we tested the survival of colonies transplanted at two extreme sites of Eastern Alps. The snow cover extended over more than one month proved to be an important mortality factor of larval colonies on mountain pine. We concluded that the first instar larvae of the pine processionary moth are not concerned by unusually low temperature and CO2 increase whereas they can be later strongly affected by snow accumulation. The decrease of snow cover observed in the last decades, however, may reduce such a risk. Introduction Numerous studies have shown that it is possible to detect the effects of a changing climate on ecological systems (McCarty 2001, Parmesan 2006), taking into account all components of food webs (Samways 2007).However, the study of climate change effects on ecosystems is complex because of many potentially interacting factors, e.g., species, genotype.A principal driver of cli mate change is the increase in atmospheric CO2 concentration.It has increased by 31% since the pre-industrial time, from 280 ppm to more than 370 ppm (Karl & Trenberth 2003) and is expected to increase in the next 100 years to 500-900 ppm, depending on models of projection (Solomon et al. 2007).This change has a significant indirect effect on herbivorous insect by altering quantity and quality of food supply.The survival and performance of insect could be affected by the variation in tissue C/N ratio, water con tent, leaf toughness and carbon-based de fence compounds occurring in their host plant (Hunter 2001, Schädler et al. 2007).However, the responses of different plant species show enormous interspecific vari ation, as do herbivores feeding on them (Harrington et al. 2001).Recent studies have shown that herbivorous insects grown in el evated CO2 may respond with a lower devel opment and higher mortality (Coviella et al. 2002, Knepp et al. 2007); other studies showed no variation of performance but a higher fecundity in female and increasing of lipid concentration in males (Goverde et al. 
2002), and again no effect on growth rate, larval instar duration and pupal weight (Ka rowe 2007).However, Zvereva & Kozlov (2006) pointed out in their review that in sect's fitness is reduced under elevated CO2, although the negative effects can be mitig ated by a corresponding temperature in crease.In addition, responses of different host species to increased CO2 may differen tially affect the performance of oligophag ous/polyphagous herbivores (Agrell et al. 2006, Schädler et al. 2007).A possible effect is the change of food preference with poten tial consequences for population dynamics of the plants (Goverde & Erhardt 2003, Agrell et al. 2005, Agrell et al. 2006), although there are studies where no preference shifts were observed (Díaz et al. 1998, Lederberger et al. 1998).The first experiments of CO2 en richment were carried out under controlled conditions on detached plant material or dir ectly on plants grown in greenhouse, and the artificiality of these experiments may mask processes of ecological relevance and lead to detection of effects not occurring in the field (Hunter 2001, Goverde et al. 2002, Knepp et al. 2005, Knepp et al. 2007).However, data are now available from FACE (Free Air CO2 Enrichment) facilities set up in the field, even in the tree canopy (e.g., Hättenschwiler & Schafellner 2004), as well as for sites loc ated at the alpine treeline (Asshoff & Hät tenschwiler 2005, Asshoff & Hättenschwiler 2006). The pine processionary moth (Thaumeto poea pityocampa -Denis & Schiffermüller 1776) is an insect pest of pines that is ex panding its range as a consequence of cli mate change (Battisti et al. 2005), including novel hosts that may become readily ex ploited, as it was observed for the mountain pine in the Alps.The mountain pine group includes two major forms known as upright (Pinus uncinata Ramond) and dwarf or creeping (Pinus mugo Turra), differing mainly in their growth habit but showing of ten intermediate forms (Vidakovic 1991).As the stands of mountain pine are extensively distributed beyond the historical range of T. pityocampa, it is likely that new problems will arise if the current trend of expansion will be maintained.However, the expansion could be limited by the accumulation of snow on the branches during the winter, when the larvae need to leave the nest for feeding (Buffo et al. 2007, Geri 1983). The main objective of the study is twofold, first to assess survival and performance of young larvae of processionary moth in high-elevation FACE facility on mountain pine trees during the summer.We tested the hypothesis that a change in food quality, due to the increase in CO2, affects insect per formance in an environment where the lar vae cope with extremely low temperature.The second aim is to evaluate whether the insect winter survival in stands near to the border of the range is affected by the snowpack.The creeping habit of mountain pine furthers the cover of the nests for extended periods of time, and this could create a risk of colony mortality.We have also analysed the snow cover data of a possible expansion area, to predict insect performance. FACE experiment The experiment was carried out at the treeline FACE site established in 2001 at Stillberg (Davos) in the Swiss Central Alps (see Hättenschwiler et al. 2002 andHanda et al. 
2005 for details about CO2-enrichment system and experimental set-up).The site is located on a slope oriented towards North-East at 2180 m elevation.The long-term an nual precipitation is 1050 mm, the average temperature is -5.8 °C in January and 9.4 °C in July (Schönenberger & Frey 1988).The soil is classified as a Ranker (U.S. system: Lythic Haplumbrept) with a 10-cm-deep or ganic soil underlain by siliceous bedrock (Paregneis - Schönenberger & Frey 1988).The experimental site is located in an affor estation area planted in 1975 with three tree species (Larix decidua L., Pinus cembra L., Pinus uncinata Ramond).In 2001, an area of approximately 2500 m 2 situated at 2180 m of elevation was selected to provide an experi mental setup for a CO2-enrichment study at tree line (Hättenschwiler et al. 2002, Handa et al. 2005).At the time of the experiment, there were 9 trees of mountain pine (P.un cinata upright form) that were grown for the previous 5 years under an elevated CO2 at mosphere (ca.570 ppm) from June to September and 10 trees grown under ambi ent CO2 (ca 370 ppm -see Hättenschwiler et al. 2002 andHanda et al. 2005 for details about CO2-enrichment system and experi mental set-up).The leaf chemistry of needles of the enrichment experiment was shown in previous studies (Hättenschwiler et al. 2002, Handa et al. 2005).We used sets of insects for the experiment consisting of artificially created groups of 50 first instar larvae.At the end of July 2005, 30 egg batches were collected from several trees in Venosta/Vinschgau valley, Italian Alps and transferred to laboratory.After hatching, groups of 50 larvae were formed by picking individuals from different colon ies.The larvae were provided with needles of mountain pine in Petri dishes.The food was renewed every 2-3 days and the dead larvae replaced with new ones.On August 8 th we transferred and exposed the larvae groups on ten trees under ambient and nine trees under elevated CO2 conditions, all be longing to the enrichment experiment; we also added ten naturally established trees of mountain pine (P.mugo semi-prostrate form) growing on the same slope at slightly lower elevation (2030 m) to control for a pu tative temperature effect.The trees were of a size similar to those used in the enrichment experiment.On each tree, we selected one lateral branch that was protected with a 0.1 mm mesh net sleeve, about 40 cm long and including all the needle age classes occurring on that branch.On every branch, a group of 50 first instar larvae was gently transferred from the Petri dish to the needles inside the sleeve.On September 9, the sleeves were opened and their content (faeces, dead lar vae, fallen needles) transferred into a vial.Then the larvae were taken from the twig with a brush and put into a Petri dish.Later the amount of faecal pellets was measured by volume assessed in a micropipette, larvae were counted and weighed, and the larval in star was assessed.After freezing at -20°C, the dry weight was obtained after 24 h of ex posure at 60°C.The relative growth rate (RGR mg -1 mg -1 day) was calculated for the groups of 50 larvae using both dry and fresh weight and using the initial weight from three groups of 15 larvae.At each site, tem perature was recorded hourly by data loggers (Hobo, Onset Computer Corporation, Pocas set, MA).At the end of the experiment we took a sample of three needle pairs for each tree to measure needle toughness, by determ ining the force needed to penetrate the needles using a calibrated penetrometer con structed 
for this purpose.Each pine needle was fixed between two metal plates, and the probe (Bohemia insect pin ® , diameter 0.55 mm) of the penetrometer was inserted through the central part of the upper, roun ded surface.Three readings were taken per needle, avoiding the distal parts. Transplant experiment The transplant experiment was carried out in the two sites in South-Eastern Alps.The first (Resia) is located in an area of recent colonization by the insect, on a north-facing slope at an altitude of 500 m.The second site (Tanamea) is situated at 850 m on a northw est facing slope, in an area where the insect is not occurring yet.Both sites have natural stands dominated by mountain pine but with presence of Pinus nigra at lower frequency, especially in the second site.At the end of July 2002, 90 egg batches were collected in pine stands close to the experimental area.The egg batches were exposed in the field, before hatching.At Resia site 30 egg batches were exposed on each of the two host plants, whereas at Tanamea we exposed 30 egg batches only on P. mugo, because of the scarcity of P. nigra.Each egg batch was tied to the branch of a separated tree, and the lar val development was checked every two weeks, noting the larval development stage (based on the size of faeces visible though the silk), nest size and recent feeding.After moulting to the fifth instar (end of March), the nests were collected and placed individu ally in a pot.The pot was closed with a net to avoid the larvae escape.The larvae were regularly fed; on the bottom of the pot we positioned sand to consent the pupation of the larvae, that was observed in May.During the adult emergence period (July) the pots were checked every four days and all present adults were collected and subsequently the sand winnowed to find diapausing cocoons. As snow resulted to be an important limit ing factor to insect survival, we analysed daily data of the snow in addition to temper ature in the period December -April.Four sites close to the experimental area were chosen in the elevation range of 540-900 m; we selected the mean height of snow as vari able, which was analysed for the period 1972-2007. Statistical analyses The larval performance (% survival and RGR) and needle toughness on the three dif ferent tree conditions were analysed with a one-way ANOVA.In all cases in which AN OVA was employed, the basic assumption of homogeneity of variance was met, and vari ables were tested without any transforma tion.Tukey's test was used for pairwise comparison of means.The larvae distribu tion in different stages was analysed by a χ 2 test on the three tree conditions, comparing observed frequencies with those expected of the trees.The number of faecal pellet was analysed by ANCOVA using the number of larvae as a covariate.The temperatures in different sites were analysed by a Student's t-test.The survival in the transplant experi ment was analysed by a χ 2 test, comparing observed frequencies at different sites and host plant species with those expected calcu lated on the base of medium values.The data of mean height of snow was analysed with discontinuity analysis that allowed to identi fy the period and the year with the highest probability of a breakpoint in the data set.All the analyses were done with STATISTICA software (StatSoft Italia 2005), except for the snow data, for which we used the open source package STRUCCHANGE written in R language (Zeileis et al. 2002). 
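The breakpoint analysis of the snow data was done with the strucchange package in R; as a rough illustration of the idea, the sketch below (in Python, with synthetic data) finds the single split of a yearly series that minimizes the residual sum of squares of two constant segments. It is a simplified stand-in, not the procedure actually used in the paper.

```python
import numpy as np

def best_breakpoint(y):
    """Brute-force single-breakpoint search on a yearly series (e.g., mean
    winter snow height): choose the split minimizing the residual sum of
    squares of two constant segments."""
    y = np.asarray(y, dtype=float)
    best_k, best_sse = None, np.inf
    for k in range(3, len(y) - 3):               # require a few years per segment
        sse = ((y[:k] - y[:k].mean()) ** 2).sum() + ((y[k:] - y[k:].mean()) ** 2).sum()
        if sse < best_sse:
            best_k, best_sse = k, sse
    return best_k, best_sse

# Synthetic example: higher snow before a mid-series drop, plus noise
rng = np.random.default_rng(6)
years = np.arange(1972, 2007)
snow = np.where(years < 1987, 60.0, 40.0) + 10.0 * rng.standard_normal(years.size)
k, _ = best_breakpoint(snow)
print("estimated breakpoint winter:", years[k])
```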
Transplant experiment

Some egg batches did not hatch and thereby we utilized 27 colonies on P. nigra and P. mugo at Resia and 21 colonies on P. mugo at Tanamea. There was no difference in the colony survival on P. mugo and P. nigra located at Resia before (χ² = 3.64, d.f. = 1, p = 0.056) and after (χ² = 1.02, d.f. = 1, p = 0.311) the cold period. When comparing the two sites on P. mugo, the survival was similar before the cold period (χ² = 1.763, d.f. = 1, p = 0.184) but differed dramatically after (χ² = 6.373, d.f. = 1, p = 0.0115), as total mortality was observed in Tanamea, where colonies were buried in snow for more than one month (Tab. 1).

The mean height of snow cover showed a decreasing trend in the period 1972-2006 (Fig. 4), although there was strong variation among years. We identified the period 1984-1994 as the most likely time when a change occurred (at the 90% confidence level), and the year 1986-87 as the most likely breakpoint in the data set.

Discussion

CO2 enrichment had no effect on the herbivorous insect, Thaumetopoea pityocampa, although there was significant variation at the plant level, consisting of an increase of non-structural carbohydrates and a reduction of specific leaf area (Handa et al. 2005). Neither survival nor performance of larvae differed between individuals feeding on plants of mountain pine exposed to ambient and elevated CO2. However, the larvae exposed on naturally established mountain pines growing at slightly lower altitude showed higher mortality than those on plants at the upper site, but a significant difference was observed only when contrasted to survival on elevated CO2 trees. As expected, the larval stage reached by the larvae at the lower site was more advanced compared to the upper site due to temperature-dependent growth. In spite of the generally low temperature for the development of first instar larvae, about 10 °C lower than that observed at cold sites at the edge of the range (unpublished data), we observed a considerable survival also on a novel host, the mountain pine, which opens the question about the possibility of a successful development of the insect at the range's edge. Final survival of transplanted colonies at two extreme sites on mountain pine showed that snow cover prevents larval activity and establishment, whereas low temperature is not the major constraining factor. However, as snow cover has been decreasing in the last decades, we may expect that range expansion will further occur in T. pityocampa.

Mountain pine responded to CO2 enrichment with an increase of non-structural carbohydrates (Hättenschwiler et al. 2002), as has been reported for other conifers (Griffin et al. 1996, Runion et al. 1999). The increase was due to variation in the starch fraction, whereas the concentration of soluble sugars was unchanged (Handa et al. 2005). The increase of starch may positively affect herbivorous insects, improving the digestion efficiency and increasing fat reserves (Goverde et al. 1999, Asshoff & Hättenschwiler 2005). However, it may involve a lower N concentration per unit mass and an increase of the C/N ratio, which indicates a lower food quality for herbivorous insects (Mattson 1980). The studies carried out under similar situations have shown contrasting responses in larval performance. Several highlighted a negative response (Hättenschwiler & Schafellner 1999, Chen et al. 2005), whereas others showed no variation (Goverde et al. 2002, Williams et al. 2003) or a positive effect (Goverde et al.
1999).Asshoff & Hät tenschwiler (2006), in an experiment carried out in the same FACE site, showed that al teration in needle chemistry had a reduced effect on larch bud moth performance.The response of the pine processionary moth could be due to opposite effects, such as the dilution in N content and the increase of starch and sugars that may be phagostimulat ory to herbivores (Stiling & Cornelissen 2007).Moreover, the same quantity of fae ces produced in the two treatments does not support the hypothesis of compensatory feeding, that has been associated with in crease of C/N ratio and lower tissue quality (Williams et al. 1994), although some recent studies have suggested that a differential post-ingestive adsorption may compensate for diminished food quality (Barbehenn et al. 2004).In the study of Stastny et al. (2006), the performance of the processionary moth larvae did not differ between three host-plant species with different nitrogen content, showing a high plasticity.This may display such a post-ingestive compensatory beha viour.Leaf toughness, however, can be a serious constrain to larval development espe cially in early instars (Tanton 1962, Zovi et al. 2008).It is common that plant phenotype may show different resistance to herbivore attack (Preszler & Price 1988).In our experi ment there were two phenotypes of mountain pine (naturally established vs. introduced) which differed in needles toughness, a phys ical hurdle to larval feeding, that may ex plain the higher larval mortality on native mountain pine grown at lower altitude.Al though leaf toughness may be an important determinant of larval survival on some hosts, it rarely results in total mortality as the insect shows a high adaptive potential to tough needles (Zovi et al. 2008). Extreme temperature and snow cover ap pear to be the most important abiotic factors for assessing the impact of climate change on alpine ecosystem (Inouye et al. 2000, Theurillat & Guisan 2001).The snow cover of the nest was a major constraint factor for survival during winter in our transplant ex periment.The creeping habit of P. mugo, a species most widespread at the insect range edge, makes it possible that colonies become buried by snow when the snow is as high as 20 cm (unpublished data).The mean snow depth, the duration of continuous snow cover and the number of snowfall days in the Swiss Alps all have showed very similar trends in the period 1931-99: a gradual in crease until the early 1980s followed by a statistically significant decrease towards the end of the century, mainly at low and mid altitudes (Beniston 1997, Laternser & Schneebeli 2003).Our data from the northeastern Alps showed a similar trend to the Swiss Alps.There is no universal agreement, however, on snow cover in the past century, as Brown (2000) did not detect evidence of a significant long-term decrease of spring snow in the northern hemisphere. There are few studies that consider the ef fect of the change in snow cover on ecosys tems.Wipf et al. (2006) showed shifts in ve getation communities as a results of experi ments of snow removal, furthermore in a study about growth of Norway spruce sap lings the decrease in snow cover showed two opposite effects, a positive one due to de crease of attack by the black snow mold (Herpotrichia nigra) and to longer growing season, and a negative response of saplings to exposure to low temperatures (Cunning ham et al. 2006).Vanbergen et al. 
(2003) suggested that the reduction of snow cover in the Scottish moorland, conceivable with cli mate change, had increased the probability of winter moth (Operophthera brumata) out breaks on Calluna vulgaris, through im provement of adult emergence and higher breeding success.Finally, Atalopedes campestris is a lepidopteran that cannot sur vive for a longer period under snow as lar vae, and its future expansion could be fa voured by the decrease of snow (Crozier 2004).If snow cover reduction may relax the conifer hosts from the attack of snow-associ ated pathogens like the black snow mold, it may contribute to worse the attack of organ isms that are normally limited by snow, such as the winter moth and the pine procession ary moth.In the latter case, it can be easily predicted that the expansion on mountain pine will continue as long as temperature in crease will be associated with a decrease of snow cover and the insect will not be limited by higher CO2.Interestingly, the temperature and the land-use change have been con sidered as predictors of range expansion of mountain pine in the alpine region (Dirnböck et al. 2003), likely leading to an altitudinal shift of plant and insect communities in mountain habitats. Fig. 1 - Fig. 1 -Larval survival Mean percentage (± SE) of survival of first instar larvae of Thaumet opoea pityocampa transplanted at different sites and CO2 regimes at Stillberg, Switzerland, during the summer 2005.Different letters indicated significant differences in pairwise com parison of means % survival (Tukey test, p < 0.01). Fig. 2 - Fig. 2 -Larval development.Larval instar reached by the larvae at the end of the CO2 experi ment on different host plant types. Fig. 3 - Fig. 3 -Needle toughness (in newtons) of different host plant types.Error bars show one SE.Different letters indicate significant differences in pairwise comparisons of means (Tukey test, p < 0.01). Fig. 4 - Fig. 4 -Mean snow cover during winter period (December-April) from 1972-73 to 2006-07 at four weather stations between 540 and 900 m in the Alps of Friuli district, NE Italy.The grey area indicates the period characterized by the higher probability of change (at 90% con fidence level) and the dotted line indicates the mean value of the two phases before and after the year of more probable breakpoint (1986-87). Tab. 1 -Survival of colonies transplanted on different host pines at range's edge in Friuli district, north-eastern Italy.
Blended sea level anomaly fields with enhanced coastal coverage along the U.S. West Coast We form a new ‘blended’ data set of sea level anomaly (SLA) fields by combining gridded daily fields derived from altimeter data with coastal tide gauge data. Within approximately 55–70 km of the coast, the altimeter data are discarded and replaced by a linear interpolation between the tide gauge and remaining offshore altimeter data. To create a common reference height for altimeter and tide gauge data, a 20-year mean is subtracted from each time series (from each tide gauge and altimeter grid point) before combining the data sets to form a blended mean sea level anomaly (SLA) data set. Daily mean fields are produced for the 22-year period 1 January 1993–31 December 2014. The primary validation compares geostrophic velocities calculated from the height fields and velocities measured at four moorings covering the north-south range of the new data set. The blended data set improves the alongshore (meridional) component of the currents, indicating an improvement in the cross-shelf gradient of the mean SLA data set. Background & Summary More than 20 years of altimeter data have greatly improved our understanding of upper ocean processes, including large scale ocean circulation 1,2 , mesoscale variability 3,4 , sea floor topography 5,6 , climate variability [7][8][9] , and the distribution within eddies of chlorophyll concentration 10 and macrofuna feeding 11 . In coastal regions, however, altimeter observations are often of questionable accuracy due to factors including land contamination 12,13 , imprecise tidal corrections 14 and incorrect removal of atmospheric effects 15,16 . These issues limit the use of altimeter-derived data products in coastal areas 17 , as discussed in detail in Vignudelli et al. 18 and Cipollini et al. 19 . They further complicate the already difficult task of producing uniformly gridded fields from the sparsely sampled along-track data in coastal regions where space and time scales are shorter than in the open ocean. Specific efforts to correct and improve nearshore along-track altimeter data include the use of customized tidal modelling 20,21 , use of special editing and higher-rate data 22,23 , recomputing the atmospheric corrections 15,16,24 , and waveform retracking 13,25,26 . Saraceno et al. 27 were able to improve weekly mean, coastal sea level observations along the U.S. West Coast between 40°and 45°N for the 13-year period 1993-2005 by first removing all altimeter observations within 37 km of the coast. Based on data from five tide gauge stations located within the study area, Saraceno et al. 27 then created a virtual array of low-pass filtered tide gauge stations at 0.2°i ntervals along the coast. Finally, the tide gauge derived sea level data were interpolated from the coast to the offshore AVISO fields using the Delaunay triangulation method 28,29 . Compared to the original AVISO data, this methodology significantly improved the accuracy of alongshore geostrophic currents derived from their blended, weekly SLA data set, in comparison to in situ current meter observations over the shelf. The data set presented here extends the work of Saraceno et al. 27 by improving the temporal resolution from weekly to daily SLA fields, over a longer period, and by expanding the region to include the entire U.S. West Coast. 
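The primary validation described above compares geostrophic velocities computed from the SLA gradients with moored current observations, and the distributed velocity fields are obtained from the heights by simple centered differences. The sketch below illustrates that calculation; the constants and the toy SLA field are assumptions of the illustration, and masking near land as well as the breakdown of geostrophy at low latitudes are ignored.

```python
import numpy as np

def geostrophic_velocity(sla, lat, lon, g=9.81, omega=7.2921e-5):
    """Surface geostrophic velocities from a gridded SLA field (m) using
    centered differences.  lat/lon are 1-D axes in degrees; sla has shape
    (nlat, nlon).  A sketch only, not the exact processing of the data set."""
    R = 6.371e6                                   # Earth radius (m)
    f = 2.0 * omega * np.sin(np.deg2rad(lat))[:, None]          # Coriolis parameter
    dy = R * np.deg2rad(np.gradient(lat))[:, None]              # meters per grid step, north-south
    dx = R * np.cos(np.deg2rad(lat))[:, None] * np.deg2rad(np.gradient(lon))[None, :]
    deta_dy = np.gradient(sla, axis=0) / dy
    deta_dx = np.gradient(sla, axis=1) / dx
    u = -(g / f) * deta_dy                        # eastward component
    v = (g / f) * deta_dx                         # northward (alongshore) component
    return u, v

# Toy SLA field on a 0.25-degree grid
lat = np.arange(40.0, 45.0, 0.25)
lon = np.arange(-128.0, -124.0, 0.25)
sla = 0.05 * np.sin(np.deg2rad(lon))[None, :] * np.ones((lat.size, 1))
u, v = geostrophic_velocity(sla, lat, lon)
print(v.mean())
```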
We use an inverse-distance weighted interpolation method to blend low-pass filtered, daily mean tide gauge observations with daily, 0.25°latitude x 0.25°longitude AVISO SLA fields. This data set covers the 22-year period 1 January 1993-31 December 2014 from 32-48.5°N and 135-115°W. It should be emphasized that the data set we are producing does not attempt to retrieve improved altimeter data in the coastal domain. As in Saraceno et al. 27 , it substitutes tide gauge data in the region within approximately 55-70 km of the coast for the problematic altimeter data to produce a blended SLA data set. Much of our validation (see the 'Technical Validation' section below) of the improvement of the new data set, in comparison to the original AVISO daily data, also follows Saraceno et al. 27 by comparing geostrophic velocities derived from the SLA fields to observed velocities from moorings, now stretching from northern Washington to southern California. This comparison amounts to an indirect validation of the gradients in the surface height fields and requires some discussion. Even if the altimeter fields perfectly represent the ocean surface dynamic height fields, there will be differences between geostrophic currents derived from the gradients of those heights (as calculated over some finite distance) and velocities measured within the water column by current meters at single locations. These differences are discussed further in the 'Technical Validation' section. We emphasize, however, that the primary data set produced here consists of SLA height fields. For convenience, we also provide geostrophic velocity fields calculated in the simplest manner (centered differences over approximately 40-50 km). Those preferring to use more sophisticated methods of calculating gradients should use the SLA fields to do so. The paper is organized as follows: in the Methods and Data Record sections we describe the data sets and methods used to generate the blended AVISO-tide gauge data set (referred to as AVISO+TG hereafter) as well as the in situ velocity observations used to validate and verify this data set. The results of our validation and verification efforts are described in the Technical Validation section. AVISO altimeter fields The 0.25°latitude × 0.25°longitude gridded SLA altimeter fields were produced by DUACS/SSALTO (Data Unification and Altimeter Combination System/Segment Sol multi-missions d'ALTimetrie, d'orbitographie et de Localisation précise) and distributed by CLS (Collecte Localis Satellites) and AVISO (Archivage, Validation, Interprétation des données des Satellite Océanographiques). The data are available at http://www.aviso.altimetry.fr/en/data.html. Here we use the DUACS 2014 (v15.0) Delayed Time (DT) 'all-sat-merged' daily mean fields 30 for the period 1 January 1993 through 31 December 2014. The 'all-sat-merged' fields consist of datasets from up to four satellites at any given time. Using all available missions for a given time period improves sampling and long wavelength error determination, thus producing a higher quality data set, but one that is not homogeneous over the entire time span of the data record. DUACS uses a 20-year reference period of 1993-2012 to adjust each of the along-track data sets to match the 'reference' altimeters (TOPEX/Poseidon, Jason-1 and Jason-2), which span the entire time period. 
They then apply an offset to each along-track data set to make it consistent with a global mean SLA value of zero during 1993, before processing and mapping the along-track data onto their global grid. Thereafter, finding the global mean value from the gridded data during any subsequent period shows the mean global sea level rise since 1993. The locations of the 16 coastal tide gauge stations whose hourly records are used in this study are shown in Fig. 1 and by the blue dots in Fig. 2. The Mean Sea Level tidal datum was used in this analysis. NCEP reanalysis I fields The atmospheric surface pressure fields used in the inverse barometer 'correction' to tide gauge heights come from the National Centers for Environmental Prediction (NCEP) Reanalysis I project, which uses an analysis/forecast system to perform assimilation of past data from 1948 to the present 31 (data available at http://www.esrl.noaa.gov/psd/data/gridded/data.ncep.reanalysis.surface.html). The dynamical model and data assimilation system remain unchanged over the reanalysis period. While this avoids perceived climate jumps associated with changes in the operational data assimilation system, the reanalysis system is still affected by changes in assimilated observations 32. For the work presented here, six-hourly surface pressure fields for the period 1 January 1993-31 December 2014 were linearly interpolated so as to correspond to the hourly tide gauge observations described above. Ablain et al. 9 report significant improvements in sea level height estimates at mesoscale and regional spatial scales using ECMWF (European Centre for Medium-Range Weather Forecasts) ERA-Interim Reanalysis 33 fields rather than operational ECMWF fields to calculate certain atmospheric corrections. To validate our decision to use the NCEP Reanalysis fields here rather than, for example, the ERA-Interim fields, we calculated the root mean square (RMS) error between daily mean sea level pressure data collected by NOAA Buoy 46050 (http://www.ndbc.noaa.gov/station_page.php?station=46050) and the ERA-Interim and NCEP Reanalysis grid cells closest to the 46050 location (44.656°N, 124.526°W) for the period 1993-2014. We find the RMS errors for the ERA-Interim and NCEP Reanalysis fields to be 1.067 and 1.014 hPa, respectively. Given that atmospheric pressure is the only variable used from the atmospheric models and the RMS error is smaller for the NCEP Reanalysis fields, we consider them to be of sufficient quality to be used in the generation of our blended AVISO+TG data set. In situ current velocities In this study, we use in situ time series of current velocities estimated from Acoustic Doppler Current Profilers (ADCP) that were mounted on four moorings located off the coasts of Washington, Oregon and California (Supplementary Table 1). An indirect validation of the altimeter SLA fields is carried out through comparisons between these in situ water velocities and geostrophic velocities derived from the AVISO+TG and AVISO data sets. The locations of these four moorings are shown in Fig. 2 (magenta dots). The RISE (River Influences on Shelf Ecosystems) NOrth (RINO; data available at http://www.
bco-dmo.org/dataset/3586/data) mooring was deployed off the coast of Washington as part of a multiyear, National Science Foundation funded, interdisciplinary study of the Columbia River plume 34 The long-term mooring site on the Oregon shelf known as NH-10 was established in close proximity to the Newport Hydrographic (NH) line 35 The California Current Ecosystem coastal upwelling mooring (CCE-2; data available at http:// mooring.ucsd.edu/index.html?/projects/cce/cce2_data.html) is operated as a collaboration between Scripps Institution of Oceanography, the NOAA PMEL (Pacific Marine Environmental Laboratory) carbon and ocean acidification group, the NOAA Southwest Fisheries Science Center, and University of California, Santa Barbara. The mooring, which was first deployed in 2010, is located at 34.2°N, 120.7°W, approximately 35 km west of Point Conception on the continental slope in approximately 770 meters of water. The ADCP bin depths 16.5, 16.5, 15 and 17 meters for each of the four moorings RINO, NH-10, M2 and CCE-2, respectively, were chosen based on data availability as well as an attempt to select a bin depth that was common and consistent across all four moorings, while also being below most of the effects of wind-driven Ekman currents. All ADCP data were hourly averaged and low-pass filtered using the same 40-hour Loess filter 37 as was used for the tide gauge data. The low-pass filtered data were then averaged to create daily mean time series. Finally, the mean for each mooring time series was removed. When forming differences between a time series from a mooring and altimeter-derived velocities, the mean of the altimeter-derived data set was removed over a period identical to the mooring period. Synthetic tide gauge stations An inverse barometer (IB) correction was applied to each of the 16 tide gauge station hourly time series according to the following equation 38 : where P atm is the time varying atmospheric pressure derived from six-hour NCEP Reanalysis I sea level pressure fields. The scale factor 9.948 is based on the empirical value of the IB response at mid-latitudes 39 . The hourly IB corrected tide gauge data were then low-pass filtered using the same 40-hour Loess filter 37 used for the ADCP velocity data and averaged to create daily mean time series. Given the removal of high frequency signals by the low-pass filter and daily averages, a Dynamical Atmospheric Correction was not used. An along-coast 0.125°grid was created between 32 and 48.5°N. Each of the 16 tide gauge time series was assigned the latitude of the nearest of these grid cells (Fig. 1, top panel). The data were then spatially interpolated at each time step using an inverse-distance weighted interpolation method 40 to create a synthetic, gap-free tide gauge data set that consisted of 133 time series. To maintain consistency with the 20-year reference period of the AVISO fields, a 20-year mean (1993-2012) was subtracted from each of the 133 time series to produce the final field of synthetic coastal tide gauge sea level anomalies (Fig. 1, bottom panel). Finally, the 0.125°synthetic tide gauge data set was averaged to the 0.25°AVISO latitude grid and mapped to the most coastal grid cell of the AVISO grid next to the U.S. West Coast between 32 and 48.5°N. Blending AVISO and tide gauge observations Daily, gridded SLA fields for the period 1 January 1993-31 December 2014 and region 32-48.5°N and 135-115°W were extracted from the global AVISO fields. 
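A minimal sketch of the inverse-barometer adjustment and the subsequent low-pass filtering and daily averaging described above is given below. The 9.948 factor is the empirical mid-latitude response quoted in the text and is assumed here to be in millimetres per hPa; the sign convention (adding the pressure-driven signal back to the measured heights) and the replacement of the 40-hour Loess filter by a simple centred rolling mean are assumptions made only for illustration.

import pandas as pd

MM_PER_HPA = 9.948   # empirical mid-latitude inverse-barometer response quoted in the text

def ib_correct(eta_mm, p_atm_hpa, p_ref_hpa=None):
    """Remove the inverse-barometer signal from hourly tide gauge heights.

    eta_mm    : pandas Series of hourly gauge heights (assumed in mm, Mean Sea Level datum)
    p_atm_hpa : surface pressure (hPa) interpolated from the six-hourly NCEP fields to the gauge times
    Assumed sign convention: high pressure depresses sea level, so the pressure-driven
    signal is added back; verify against the cited references before real use.
    """
    if p_ref_hpa is None:
        p_ref_hpa = p_atm_hpa.mean()
    return eta_mm + MM_PER_HPA * (p_atm_hpa - p_ref_hpa)

def to_daily_anomaly(eta_hourly, ref_years=("1993", "2012")):
    """Crude stand-in for the 40-hour Loess filter (a 40-hour centred rolling mean),
    followed by daily averaging and removal of the 1993-2012 reference-period mean."""
    low = eta_hourly.rolling(window=40, center=True, min_periods=20).mean()
    daily = low.resample("1D").mean()
    return daily - daily.loc[ref_years[0]:ref_years[1]].mean()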
Because the mean AVISO SLA gridded values over the 20-year reference period from 1993-2012 are not zero, the 20-year mean was formed and removed at each grid point, for consistency with the tide gauge data. This mean is the average sea level rise during the 20-year reference period, approximately 2.5 cm. It also includes spatial variability in the form of noise, with magnitudes of several millimetres. This is created by imprecisions in the mean sea surface and other details of the processing used to map the data on to a uniform grid by DUACS. After removing the mean and noise, we refer to this data set as the 'adjusted' AVISO SLA fields. Next, all AVISO observations within 3 grid cells (approximately 55-70 km) of the coast were removed (at the two most northern lines of grid points, it was necessary to remove 5 and 6 grid points, respectively, to eliminate noise caused by stronger tides and a relatively wide shelf near the mouth of the Juan de Fuca Straits). Using the inverse-distance weighted interpolation method referenced above 40 , we interpolated between the tide gauge data at the coast and the remaining daily offshore adjusted AVISO data to generate the final, blended AVISO+TG data set. If one wants to 're-adjust' these fields to match the original AVISO fields, the mean of the original fields at each grid point over the 20-year reference period should be formed and added to our AVISO+TG data set. For convenience, this 20-year (1993-2012) mean SLA field is provided with our AVISO+TG SLA fields. Geostrophic current estimates Following methods described in Saraceno et al. 27 , geostrophic currents were estimated for each of the daily AVISO+TG fields. The zonal and meridional geostrophic velocity components at each grid point were estimated using centered differences as: respectively, where f is the Coriolis parameter, g is the gravitational acceleration and d is the distance between the grid points used in the calculation (approximately 40-50 km). To estimate values as close to the coast as possible, SLA data adjacent to the coast were linearly extrapolated to values at the next gridpoint (over the land) before using the centered difference formula at the grid point next to the coast. The same equations were used to derive geostrophic velocities for each of the original daily mean AVISO fields and our adjusted AVISO fields. Given the smoothing inherent in the creation of the gridded SLA fields, the gradients calculated over these approximately 40-50 km distances proved suitable for the validation efforts and provided velocities on as small a scale as possible for comparison to the current meters. Data Records We form a new data set of blended SLA fields by combining gridded daily fields derived from altimeter data with coastal tide gauge data. Within approximately 55-70 km of the coast, the altimeter data are discarded and replaced by a linear interpolation between the tide gauge and remaining offshore altimeter data. A 20-year mean is subtracted from each time series (tide gauge or altimeter) before combining the data sets to form the blended sea level anomaly data set. Geostrophic velocity anomaly fields are formed from the surface heights. For each year (1993-2014) daily mean SLA, as well as u and v geostrophic velocity anomaly fields are made available as CF compliant NetCDF files. The 22 individual NetCDF files have been added to a single TAR (Tape ARchive) file, which can be accessed at https://ir.library.oregonstate.edu/xmlui/handle/1957/ 57170 (Data Citation 1). 
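A minimal sketch of the centred-difference geostrophic calculation described in the 'Geostrophic current estimates' paragraph above follows, assuming the standard relations u = -(g/f) ∂(SLA)/∂y and v = (g/f) ∂(SLA)/∂x. The variable names, the use of numpy.gradient (centred differences in the interior, one-sided differences at the edges rather than the coastal extrapolation described in the text), and the spherical-Earth conversion from degrees to metres are illustrative choices.

import numpy as np

G = 9.81              # gravitational acceleration (m s-2)
OMEGA = 7.2921e-5     # Earth's rotation rate (rad s-1)
R_EARTH = 6.371e6     # mean Earth radius (m)

def geostrophic_uv(sla, lat, lon):
    """Geostrophic velocity anomalies from one gridded SLA field (metres).

    sla : 2-D array, shape (nlat, nlon); lat, lon : 1-D coordinates in degrees.
    Differences span two grid cells, roughly the 40-50 km quoted in the text.
    """
    phi = np.deg2rad(lat)
    f = 2.0 * OMEGA * np.sin(phi)[:, None]                 # Coriolis parameter, row by row
    dy = R_EARTH * np.deg2rad(np.gradient(lat))[:, None]   # metres per grid step, north-south
    dx = R_EARTH * np.cos(phi)[:, None] * np.deg2rad(np.gradient(lon))[None, :]
    deta_dy = np.gradient(sla, axis=0) / dy                # centred differences in the interior
    deta_dx = np.gradient(sla, axis=1) / dx
    u = -(G / f) * deta_dy                                  # zonal component
    v = (G / f) * deta_dx                                   # meridional component
    return u, v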
Each NetCDF file contains the latitude and longitude grid cell coordinates where the values provided indicate the center of each grid cell. These grid cell coordinates are identical to the original 0.25°AVISO fields. The gridded 20-year (1993-2012) mean of the SLA fields that was removed is also made available, for those wishing to recreate the original AVISO fields in the offshore region. Technical Validation Synoptic scale variability Examples of the blended AVISO+TG daily SLA and derived geostrophic velocity fields are shown in Fig. 2a,b (left panels) for downwelling and upwelling periods, respectively. These are typical of winter and spring conditions in the northern California Current System (CCS). The original AVISO daily mean fields are shown in the middle panels. The differences between our AVISO+TG data and the adjusted AVISO data are presented in the right panels, where only non-zero values are represented by the vectors and colours next to the coast in the region where AVISO data were removed and replaced by the interpolated tide gauge data. During periods of downwelling-favorable winds, such as occurred on 25 February 1999 (Fig. 2a), the AVISO+TG fields tend to have a stronger and more continuous positive SLA signal along the coast (more than 10 cm higher) between 38°and 48.5°N, relative to AVISO fields. During periods of upwelling-favorable winds, such as occurred on 10 May 1999 (Fig. 2b), the pattern is reversed with relatively low SLA occurring along the coast between 34°and 48.5°N. The relatively high (low) SLA along the coast results in northward (southward) alongshore current velocities over the shelf that are several 10's of cm s − 1 greater and often opposite in direction to those derived from AVISO fields alone. These enhanced coastal fields in the example daily AVISO+TG sea surface heights and velocities (Fig. 2a,b) respond rapidly to 2-8 day synoptic scale wind forcing 41 associated with wintertime storms and summertime upwelling and relaxation events. The fact that most of the observed daily variability results from the tide gauge data is demonstrated by examination of the geostrophic velocities derived from the heights. These are compared to nearby in situ ADCP current measurements from the RINO, NH-10, M2 and CCE-2 moorings in Figs 3,4,5,6, respectively. For each comparison, the mean values for each time series for the common periods were removed. The locations of these four moorings as well as the comparison grid cells are shown as black and magenta circles in Fig. 2, respectively. Summary statistics (standard deviations) of zonal and meridional current velocities as well as current magnitudes for all four mooring locations are presented in Table 1. Summary statistics of the differences (ADCP minus AVISO[+TG]) as well as the correlation results are shown in Tables 2,3 calculation, not absolute heights or offsets in those heights. On the current meter side, the mooring observations are of complete velocities, including ageostrophic components such as the wind-driven Ekman transports and smaller-scale motions. Previous studies of coastal upwelling systems have confirmed an approximate geostrophic balance for the alongshore component of observed velocities, which is not true for the cross-shore components. Since coastal winds are often polarized in the alongshore direction, cross-shore currents are strongly affected by onshore-offshore Ekman transports in the upper ocean and return flow below that. 
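The mooring-versus-altimeter statistics summarized in the accompanying tables amount to removing the means of the two daily series over their common period and then computing standard deviations, their correlation and the standard deviation of the difference. A minimal sketch, assuming each velocity component is available as a daily pandas Series, is:

import pandas as pd

def compare_series(adcp, geo):
    """Summary statistics for one mooring velocity component versus the
    corresponding altimeter-derived geostrophic component (daily pandas Series)."""
    both = pd.concat({"adcp": adcp, "geo": geo}, axis=1).dropna()
    both = both - both.mean()                      # remove means over the common period
    diff = both["adcp"] - both["geo"]
    return {
        "n_days": len(both),
        "std_adcp": both["adcp"].std(),
        "std_geo": both["geo"].std(),
        "std_diff": diff.std(),
        "corr": both["adcp"].corr(both["geo"]),
    }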
Off Oregon, early analyses of current meter velocities and hydrographic surveys by Smith 42 found alongshore currents to be in approximate geostrophic balance, with horizontal scales between 50 and 70 km. In a coastal region off northern California, this was quantified using shipboard ADCP currents and CTD surveys by Kosro and Huyer 43 , finding a correlation of r = 0.73 between the measured currents at 30 m and the cross-current pressure gradients. Their data also confirmed that currents over the shelf were more strongly polarized into the alongshore directions than farther offshore. Off central California in deep water approximately 500 km offshore, similar correlation values of 0.6-0.7 were found between current meter and geostrophic velocities calculated from altimeter data along tracks that crossed over the current meter location 44 . Thus, correlation values of approximately 0.6-0.7 serve as a benchmark for our correlations between observed and geostrophic currents. The altimeter height fields also contain noise, which is amplified by the spatial differences used in the geostrophic calculation, more so for differences calculated over short distances. The geostrophic velocities in the above studies used differences over scales of 40-60 km, similar to the approximately 40-50 km differences used here. Thus, our comparisons and correlations are expected to be similar to the above studies in the degree to which they are affected by amplified noise and ageostrophic motion in the moored data sets. They are also likely to be most useful when considering the alongshore currents, rather than the cross-shore currents. Staying within the AVISO grid, we calculate meridional and zonal components of the geostrophic and ADCP velocities, which are approximately in the alongshore and cross-shore directions, respectively, given the generally north-south direction of the coastline. For completeness in Figs 3,4,5,6 we show the zonal and meridional components of velocity in the upper and middle panels, although we expect the meridional components to be more nearly alongshore and thus more closely in geostrophic balance. This is especially true at the two more northern locations, where the coastline is nearly north-south and the current meters are over the shelf, closer to the coast and more polarized in the alongshore direction. In the two southern locations, the coast is oriented more NW to SE, the current meters are farther offshore over the slope and the comparison geostrophic gridpoints are even farther offshore of the current meter. Tables 2,3,4,5 show that correlations of the meridional components of the daily mean ADCP and altimeter-derived currents are higher for the AVISO+TG than the AVISO SLA for all except the most southern mooring, where they are the same. The average correlation of all four mooring locations increases from 0.37 to 0.49 for the AVISO+TG meridional currents. There are higher correlations for the two more northern moorings and the best comparison is found at NH-10, the location with the longest record and most energetic measured current magnitudes. Correlations at NH-10 between the ADCP and SLA-derived meridional currents increase from 0.51 for AVISO to 0.73 for AVISO+TG ( Table 3). The value of 0.73 is similar to benchmark noted above, although it is slightly lower than the correlation of 0.83 at the same mooring that Saraceno et al. 27 calculated using weekly mean observations and a shorter record. 
The standard deviation of the difference between moorings and altimeter-derived velocities also decreases for the AVISO+TG fields, again more notably at the NH-10 location. To test whether correlations might be greater deeper in the water column, farther removed from Ekman layer effects, the ADCP meridional velocities at NH-10 from approximately 30 meters were also compared with the meridional geostrophic velocities derived from the AVISO+TG and AVISO data sets. Correlations between the ADCP and altimeter-derived meridional currents were 0.46 for AVISO and 0.68 for AVISO+TG, lower than the values observed at 16.5 meters. Thus we do not believe that the use of deeper observed currents would avoid Ekman effects and improve the comparisons. Statistics are mixed for the zonal components of velocity, generally associated with the cross-shelf directions. Correlations between measured and geostrophic velocities are generally very low, except at NH-10. Although the correlations increase slightly from 0.55 to 0.59 for the AVISO+TG geostrophic velocities at that location, the standard deviation of the differences between measured and geostrophic velocities also increases for the AVISO+TG SLA. Thus, no clear picture emerges from comparisons of the zonal components, which are not expected to provide a good test of the SLA fields through geostrophic balances. The velocity magnitudes are most useful in comparisons of their standard deviations, indicating the relative energy in the currents. At all four locations (Table 1), the AVISO+TG geostrophic currents are more energetic than the AVISO currents, and both are weaker than the measured currents. At all except the most southern locations, the increased energy is due to the meridional component, as evident in the Figs 3,4,5,6. Supplementary Fig. 1 presents an expanded view of the comparison at NH-10 for the years 2002 through 2004. Looking at the meridional component, periods of agreement and disagreement can be found, but it is clear that the tide gauge data have allowed the AVISO+TG SLA-derived velocities to respond to the synoptic forcing in a relatively realistic manner. The AVISO-only velocities, in contrast, are much less energetic, have almost no variability at time scales shorter than one month and often appear to be out of phase with the measured currents even on monthly time scales. A somewhat similar pattern is seen at Grays Harbor, where the record is short enough to allow examination of synoptic fluctuations. At the two southern locations, the zonal component becomes more energetic, to the extent that at CCE-2 the zonal component of the AVISO+TG velocities appear to contain most of the synoptic variability. Given the change in coastal orientation and the more offshore location, it is possible that the coastal jet at this more offshore site has been diverted partially into the zonal orientation. This is suggested by the fact that the correlations between ADCP and geostrophic currents in the zonal orientation at CCE-2 increases from 0.20 for AVISO to 0.29 for AVISO+TG currents. Overall, the velocities at the southern moorings that are over the continental slope and more distant from the coast provide less useful validations than the moorings over the shelf at the northern locations. Long-term and seasonal variability The long-term (1 January 1993-31 December 2014) standard deviations of the AVISO+TG fields are shown in Fig. 7 (top). 
The velocity standard deviations represented by the principal axis ellipses and surface height standard deviations represented by the colours both indicate the relatively higher variability occurring along the coast between 32°and 48.5°N, in comparison to the AVISO fields (Fig. 7, bottom). The polarization of the variance ellipses in Fig. 7 confirms that the AVISO+TG geostrophic currents have greater alongshore variability relative to the AVISO currents, which is quantified by the standard deviation values presented in Table 1. They also demonstrate the rapid decrease in the polarization of the altimeter-derived velocities as one moves offshore, which affects the comparisons at the southern locations (36. 7°N and 34.2°N). The January AVISO+TG climatology field (Supplementary Fig. 2; top left panel) has elevated SLA near the coast north of 40°N relative to the January AVISO climatology field (bottom left). In comparison to the AVISO fields, these elevated SLA occur closer to the coast, i.e. to the east of the 200 m isobath (black line). South of 40°N and offshore, the AVISO SLA are slightly higher than the AVISO+TG field. In April, the AVISO+TG SLA are more negative next to most of the U.S. West Coast than the AVISO field. Likewise, SLA values are more negative north of 41°N in the July and October AVISO+TG fields, relative to AVISO. These comparisons demonstrate that the AVISO+TG fields have stronger signals next to the coast in monthly fields, as they do in the daily fields (Fig. 2). In the offshore region, the slightly higher SLA values of the AVISO fields reflects the fact that the long-term mean of the AVISO fields during the 20-year reference period is approximately 2.5 cm, rather than zero for the AVISO+TG fields. Figure 8 shows that EOF analyses using monthly mean AVISO and AVISO+TG SLA fields yield very similar results for the first four modes of variability, which combine to explain approximately one third of the total variance. The first two modes are dominated by the seasonal cycle, with peaks and troughs in winter and summer, respectively. The first mode accounts for 17.1 and 15.6% of the variances, respectively, for AVISO and AVISO+TG. The wide band of high SLA next to the coast shows that this mode corresponds to the large-scale California Current north of 36°N, with poleward and equatorward flow anomalies in winter and summer, respectively. The second mode represents 8.9 and 9.1% of the variance in the AVISO and AVISO+TG fields, respectively, and is concentrated in a narrow band next to the coast over the shelf. The seasonal timing of the second mode SLA anomalies is the same as for the first mode. For both first and second modes, the spatial patterns of the AVISO+TG data produce stronger and more continuous signals next to the coast than the AVISO data. The third mode accounts for 4.8% of the by the Oceanic Niño Index (ONI) at the NOAA Climate Prediction Center, available at http://www.cpc.noaa.gov/products/precip/CWlink/MJO/enso.shtml. See also McClatchie et al. 45 and Wolter and Timlin 46 . All four modes can contribute to interannual variability, as seen during the strong 1997-1998 event, while the moderate event in 2002-2003 receives contributions from the first three modes (especially mode 2). A moderate event in 2009-2010 is more strongly influenced by modes 1 and 2, as is a weak event in 1994-1995. 
The mode 3 amplitude time series contains negative values during La Niña events that are considered moderate (1998-1999, 2007-2008) and weak (1995-1996), based on the ONI index and references above. The similarity of the EOF modes (AVISO+TG versus AVISO) indicates that the use of the AVISO+TG data set causes no problem in analyses of interannual variability, as represented by the dominant EOF modes. It does, however, strengthen the interannual signals in the region adjacent to the coast. Interannual Variability It should be noted that the AVISO+TG mode 1 spatial pattern does show a discontinuity where the offshore AVISO fields intersect with the nearshore interpolated data. This can be noticeably reduced by applying a 3 × 3 grid cell median filter to the monthly SLA fields prior to the EOF calculation, without significantly impacting the structures visible in the AVISO+TG mode 1 spatial pattern. A 5 × 5 median filter removes the discontinuity completely but also reduces the extremes in the large-scale spatial patterns (Supplementary Fig. 3).
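The EOF calculation and the optional median filtering discussed above can be sketched as follows. The reshaping of the monthly maps into a time-by-space matrix, the exclusion of land points, the removal of the time mean at each point and the use of a singular value decomposition are standard choices and are not necessarily identical to the analysis behind Fig. 8; the nan-aware median filter is a simple stand-in for whatever smoothing was actually applied.

import numpy as np
from scipy.ndimage import generic_filter

def eof_modes(sla_monthly, n_modes=4, smooth=None):
    """EOF decomposition of monthly SLA fields via the singular value decomposition.

    sla_monthly : array of shape (ntime, nlat, nlon), NaN over land
    smooth      : optional odd window size (e.g. 3 or 5) for a median filter applied
                  to each monthly map before the EOF calculation
    Returns spatial patterns, amplitude time series and percent variance explained.
    """
    nt, ny, nx = sla_monthly.shape
    fields = sla_monthly.copy()
    if smooth:
        for k in range(nt):
            fields[k] = generic_filter(fields[k], np.nanmedian, size=smooth)
    data = fields.reshape(nt, ny * nx)
    ocean = np.all(np.isfinite(data), axis=0)              # keep points with complete records
    anom = data[:, ocean] - data[:, ocean].mean(axis=0)    # remove the time mean at each point
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    variance = 100.0 * s**2 / np.sum(s**2)
    patterns = np.full((n_modes, ny * nx), np.nan)
    patterns[:, ocean] = vt[:n_modes]
    amplitudes = u[:, :n_modes] * s[:n_modes]
    return patterns.reshape(n_modes, ny, nx), amplitudes, variance[:n_modes]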
2016-03-04T08:47:34.539Z
2016-03-01T00:00:00.000
{ "year": 2016, "sha1": "c50f6e322f4d355eb73af46a29f3989d724fb0e2", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/sdata201613.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "43f0694ecbf00738617abbafea723dc4e23b3620", "s2fieldsofstudy": [ "Environmental Science", "Geography" ], "extfieldsofstudy": [ "Medicine", "Geology" ] }
215790466
pes2o/s2orc
v3-fos-license
Aberrant Expression of High Mobility Group Box Protein 1 in the Idiopathic Inflammatory Myopathies Introduction High Mobility Group Box Protein 1 (HMGB1) is a DNA-binding protein that exerts inflammatory or pro-repair effects upon translocation from the nucleus. We postulate aberrant HMGB1 expression in immune-mediated necrotising myopathy (IMNM). Methods Herein, we compare HMGB1 expression (serological and sarcoplasmic) in patients with IMNM with that of other myositis subtypes using immunohistochemistry and ELISA. Results IMNM (n = 62) and inclusion body myositis (IBM, n = 14) patients had increased sarcoplasmic HMGB1 compared with other myositis patients (n = 46). Sarcoplasmic HMGB1 expression correlated with muscle weakness and histological myonecrosis, inflammation, regeneration and autophagy. Serum HMGB1 levels were elevated in patients with IMNM, dermatomyositis and polymositis, and those myositis patients with extramuscular inflammatory features. Discussion Aberrant HMGB1 expression occurs in myositis patients and correlates with weakness. A unique expression profile of elevated sarcoplasmic and serum HMGB1 was detected in IMNM. INTRODUCTION The idiopathic inflammatory myopathies (IIMs) are a group of systemic autoimmune diseases characterized primarily by muscle inflammation, but also potentially accompanied by a range of extra-muscular manifestations. In adults, the term encompasses dermatomyositis (DM), polymyositis (PM), inclusion body myositis (IBM) and immune-mediated necrotising myopathy (IMNM; also called necrotising autoimmune myopathy, NAM). The etiology of these conditions remains obscure and the pathogenic mechanisms likely differ between the subtypes, given their distinct histopathological and immunological features. Immune-mediated necrotising myopathy has been only relatively recently described and the molecular mechanisms underlying the immune attack on muscle are poorly understood (Allenbach and Benveniste, 2013;Allenbach et al., 2018). It is becoming increasingly apparent that a dysregulated innate immune system contributes to the initiation and perpetuation of the IIMs, with roles for type I interferon (IFN), toll-like receptors (TLRs), various cytokines and the alarmin, High Mobility Group Box Protein 1 (HMGB1), now well-established (Day et al., 2017). HMGB1 is a ubiquitous non-histone nuclear DNA-binding protein that can, under certain physiological and pathological conditions, undergo extra-nuclear translocation where it may act as a signal of tissue damage and a pro-inflammatory mediator (Lotze and Tracey, 2005). In response to injurious stimuli, inflammatory cells actively secrete HMGB1 in a controlled manner, which requires post-translational modification of the protein (Gardella et al., 2002;Lu et al., 2014). HMGB1 is also rapidly passively released from necrotic cells, following nuclear membrane breakdown (Scaffidi et al., 2002). This protein is implicated in a broad range of conditions, including sepsis, malignancy and autoimmune diseases (Harris and Raucci, 2006;Diener et al., 2013;Magna and Pisetsky, 2014). With regards to IIM, extra-nuclear HMGB1 expression has been demonstrated in the muscle of mice with experimental autoimmune myositis (Wang and Qin, 2016) and in muscle from IBM (Muth et al., 2015), PM, and DM (Ulfgren et al., 2004;Grundtman et al., 2010) patients. HMGB1 positive fibers in PM and DM muscle are non-necrotic, suggesting active release of HMGB1 from the muscle cell nucleus (Ulfgren et al., 2004). 
Of note, sarcoplasmic HMGB1 expression in IIM patients has not been confirmed in all reports (Cseri et al., 2015). Patients with new-onset DM and PM have elevated serum HMGB1, and these levels correlate with survival and the presence of interstitial lung disease (ILD) (Shu et al., 2016). In addition to descriptive research, a pathogenic role for HMGB1 in muscle is suggested by experimental studies. For instance, myocytes or myofibres exposed to recombinant HMGB1 demonstrate intracellular protein aggregation, increased cell death (Muth et al., 2015), aberrant MHC-I expression (Grundtman et al., 2010;Muth et al., 2015) and impaired calcium release during repeated tetanic stimulation, suggesting enhanced muscle fatigue (Grundtman et al., 2010;Zong et al., 2013). However, HMGB1 is a multifaceted protein that exerts different effects depending on its redox state, the presence of post-translational modifications, the complexes it forms with other stimulatory or inhibitory proteins and the cellular receptor through which it ultimately signals. HMGB1 has been shown to induce muscle regeneration in mouse models of ischaemic myopathy (De Mori et al., 2007) and a non-oxidisable mutant form of exogenous HMGB1 promotes muscle and liver regeneration in mice via interaction with the CXCR4 receptor (Tirone et al., 2018). Cytosolic HMGB1 is a crucial regulator of autophagic responses to cellular stress, where autophagy is a beneficial physiological process enabling cellular proteins to be degraded and recycled (Tang et al., 2010). Within IIM muscle, HMGB1 co-localizes marker of autophagy (Cappelletti et al., 2014). As such, while HMGB1 appears to play a role in IIM pathophysiology and may have direct negative effects on muscle function, it may paradoxically aid in muscle restoration. The role of HMGB1 in IIM appears complex and clearly warrants further investigation. Moreover, research evaluating expression of this protein in IIM has focused on DM, PM and IBM and these discoveries may not be readily extrapolated to the condition of IMNM. Herein we compare expression of HMGB1 in IMNM patients with that of other IIM patients. We correlate these findings with clinical, histopathological and serological parameters. To our knowledge, this is the largest cohort study evaluating HMGB1 in IIM and the first to describe sarcoplasmic and serum levels of HMGB1 in IMNM. Subjects Muscle tissue, serum and clinical data were obtained from the South Australian Myositis Database (SAMD), a registry of patients with PM, DM, IBM, non-specific IIM (NSIIM) and 'necrotising myopathy.' Recruitment to the SAMD is based on histological criteria which have previously been described for PM, DM, and IBM (Limaye et al., 2013). All cases of PM, DM and IBM adhered to published classification criteria (Hoogendijk et al., 2004;Rose and Group, 2013). Cases are recorded as 'necrotising myopathy' if there is myofibre necrosis and an absence of histological features consistent with other neuromuscular conditions. This was considered to be immunemediated if the treating clinician documented a diagnosis of IMNM or NAM. Cases of NSIIM have muscle inflammation with insufficient biopsy criteria to allow subclassification (e.g., scattered perimysial or endomysial inflammation that does not surround or invade myofibres). Muscle from 62 patients diagnosed with IMNM between 2001 and 2016, was included. Forty-five patients with a diagnosis of DM, PM, or IBM were included for comparison in addition to 15 patients with NSIIM and 17 controls. 
Control muscle constituted biopsies obtained from subjects with diffuse myalgia, weakness or unexplained creatine phosphokinase (CK) elevations, but which lacked any myopathic features. Control serum samples were collected from healthy volunteers. Some cases were excluded from this study due to lack of clinical information, alternative diagnosis or technical difficulties (Supplementary Figure 1). Muscle Biopsy Muscle samples were obtained via open surgical biopsy or needle biopsy. Specimens were placed into transverse orientation, mounted on cork, frozen in isopentane cooled using liquid nitrogen (Cash and Blumbergs, 1994) and stored in liquid nitrogen until sectioning. Consecutive 9 µm-thick cryostat muscle sections were placed on coated slides. H&E staining was performed on the first and last sections of each series. Sections intended for MHC I, MHCn, CD68, CD45 and LC3 staining were air dried for 30 min then stored at −80 • C until use (1 -68 days). Sections intended for HMGB1 staining were air dried for 30 min, fixed for 10 min in 10% neutral buffered formalin, air dried for 30 min then stored at −80 • C until use (1 -33 days). On the day of staining, antigen retrieval was achieved on slides intended for HMGB1 staining by immersion in sodium citrate buffer (pH 6.0) followed by heat induced epitope retrieval utilizing microwave treatment. Positive controls were performed for each antibody in every staining procedure. A negative control was performed by omitting primary antibody in every staining procedure, both for an unfixed slide and a formalin-fixed microwave-retrieved slide. For ten cases, a formalin-fixed, antigen-retrieved section underwent staining with a rabbit IgG isotype control (Invitrogen, 31235), at the same concentration as the HMGB1 antibody. Lymphoid tissue was used to confirm HMGB1 staining in nonmuscle tissue. Modified Gomori Trichrome Staining Consecutive 9 µm-thick cryostat sections of unfixed fresh frozen IBM muscle, normal muscle and a positive "ragged red" control were stained with Harris' haematoxylin for 30 min, washed then stained with modified Gomori trichrome stain (laboratoryprepared, pH 3.4) for 15 min followed by differentiation with 0.2% aqueous acetic acid. Quantification of Immunohistochemical Staining Slides were graded using traditional microscopy for HMGB1, MHC I, LC3 and MHCn staining in a semi-quantitative manner by a muscle pathologist (SO) (Supplementary Table 1). The H&E stained sections were graded for degree of necrosis by the same pathologist, where necrosis was defined as muscle cells exhibiting a combination of the following features: swelling, hyalinization, hypereosinophilia, pallor, myophagocytosis. Slides were graded by twice on separate occasions; median grades are reported. A second trained investigator (JD) validated the grading scale by manually counting positive and negative myofibres in 10 randomly selected high power fields (HPFs, magnification × 400) and calculating the percentage of HMGB1+ fibers. Manual cell counts of CD45+ leucocytes and CD68+ macrophages were also performed. Evaluation of each histological parameter was completed for the entire cohort before proceeding to the next parameter. At the time of histological evaluation, investigators were blinded to the clinical details and the grades assigned to other immunoproteins. Serum Collection and Analysis A commercial ELISA kit was used to measure serum concentrations of HMGB1 (Cloud-Clone Corporation, Texas, United States). 
Each sample was diluted 1/100 in phosphate buffered saline and tested in duplicate. One sample was re-tested on each ELISA plate. Statistics Statistical analysis was performed using STATA version 14.0. Values were expressed as the median and the interquartile range (IQR). Two group comparisons were performed using the Mann-Whitney U test. When three or more groups were compared, a Kruskal-Wallis H test was conducted to identify whether a statistically significant difference existed, followed by a post hoc Dunn's test. A Bonferroni correction was applied for multiple comparisons. Fisher's exact test was used to analyze categorical data. Spearman correlations were performed to analyze associations between radiological grades and continuous or ordinal parameters. Number of cases analyzed are indicated if a full data set was unavailable. P-values < 0.05 were considered significant. Immunohistochemistry Grading Reliability Intra-rater reliability was high (κ > 0.70) for all histological parameters. The average percentage of HMGB1+ myofibres correlated strongly with grades assigned by the muscle pathologist (r s 0.83, p < 0.01). Sarcoplasmic HMGB1 levels are elevated in muscle from IIM patients but levels differ according to disease subtype HMGB1 sarcoplasmic immunostaining was low grade and stippled in histologically normal control muscle ( Figure 1A) and comparatively strong in IIM patients ( Figure 1B). As expected, muscle nuclei and infiltrating immune cells were strongly HMGB1 positive (e.g., Figure 1B). Sarcoplasmic expression was highest in IMNM followed by IBM and PM (Figure 2). Both IBM and IMNM patients exhibited significantly elevated levels of sarcoplasmic staining compared to those with DM and NSIIM. Sarcoplasmic HMGB1 expression correlates with multifactorial processes in IIM Sarcoplasmic HMGB1 grades correlated strongly (r s 0.62 -0.77, p < 0.01) with the degree of muscle cell necrosis for all IIM subtypes except NSIIM, suggesting that necrosis is an important driver of sarcoplasmic HMGB1 staining even in those subtypes where this is not the dominant histological feature. Both macrophage and CD45+ leucocyte infiltration were strongly associated with sarcoplasmic HMGB1 expression (r s 0.63, 0.61, p < 0.001), although these correlations were less robust and not significant for IBM patients. In fact, for IBM patients, the degree of MHCn+ (regenerating) fibers (r s 0.77, p < 0.001) and LC3+ staining (r s 0.75, p < 0.001) correlated most strongly with HMGB1 expression. On a cellular level, these processes colocalized within individual muscle fibers. Many necrotic cells exhibited strong positive staining for HMGB1 (Figures 3A,B). Regenerating fibers were frequently present in IIM muscle and always stained positively for HMGB1 (Figures 3C,D). In IBM, abnormal HMGB1 staining was visualized in fibers exhibiting features of abnormal cytoplasmic protein inclusions, vacuolar change and autophagic protein accumulation (Figures 3E-H). Isotype control staining was negative (Supplementary Figure 2). Only occasionally was HMGB1 positivity noted in relatively normal appearing mature myofibres. This typically occurred in areas of inflammation and might reflect active secretion of HMGB1 by activated myofibres. Analysis of sarcoplasmic HMGB1 levels by individual autoantibodies was limited by small numbers. Sarcoplasmic HMGB1 levels were high in all anti-SRP+ IMNM patients (Grade ≥ 2, n = 5) whereas a range of staining patterns were observed in anti-Ro+ IIM, antisynthetase+ IIM and anti-HMGCR+ IIM. 
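The statistical comparisons described above (Kruskal-Wallis across subtypes with post hoc testing and Bonferroni correction, plus Spearman correlations) can be sketched with SciPy as follows. Dunn's post hoc test is not part of SciPy, so pairwise Mann-Whitney U tests with a Bonferroni correction are shown as a simple stand-in; group names and array contents are placeholders rather than study data.

import itertools
from scipy import stats

def compare_groups(groups):
    """Kruskal-Wallis test across subtypes, then Bonferroni-corrected pairwise tests.

    groups : dict mapping subtype name (e.g. 'IMNM', 'IBM', 'DM') -> array of values
    """
    h_stat, p_kw = stats.kruskal(*groups.values())
    pairs = list(itertools.combinations(groups, 2))
    pairwise = {}
    for a, b in pairs:
        u, p = stats.mannwhitneyu(groups[a], groups[b], alternative="two-sided")
        pairwise[(a, b)] = min(p * len(pairs), 1.0)   # Bonferroni-adjusted p-value
    return h_stat, p_kw, pairwise

def grade_correlation(hmgb1_grade, clinical_score):
    """Spearman correlation between an ordinal HMGB1 grade and a clinical measure."""
    rho, p = stats.spearmanr(hmgb1_grade, clinical_score, nan_policy="omit")
    return rho, p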
Reduced sarcoplasmic HMGB1 staining in some IIM patients may reflect corticosteroid exposure We found a modest negative correlation between cumulative corticosteroid dose and sarcoplasmic HMGB1 expression (r s −0.30, p < 0.01). Patients with inflammatory arthritis had lower muscular HMGB1 staining than those without arthritis (p < 0.05). These patients also had higher cumulative prednisolone exposure (325 mg vs. 0 mg, p = 0.04), which may explain the comparatively reduced sarcoplasmic HMGB1 expression. Sarcoplasmic HMGB1 correlates with muscle weakness Twenty-four IIM patients had MMT8 assessments at the time of muscle biopsy and there was a strong negative correlation between strength and sarcoplasmic HMGB1 levels. This was true for IMNM (r s −0.57, p = 0.03, n = 14) and non-IMNM IIM patients (r s −0.75, p = 0.01, n = 19). Sarcoplasmic HMGB1 expression consistently correlated with every bedside clinical disease activity index tested (Figure 4). There was a modest correlation with serum CK level (r s 0.31, p < 0.01, n = 95). There were no correlations between HMGB1 grades and indices of ( disease-related damage (MDI scores) or patient disability (HAQ scores). Sarcoplasmic HMGB1 expression modestly negatively correlated with symptom duration in IMNM patients (r s −0.38, p < 0.01, n = 50) but trended toward a positive correlation in IBM patients (r s 0.62, p = 0.07, n = 9), likely reflecting that different processes are responsible for HMGB1 expression in these diseases. Serum HMGB1 levels are elevated in patients with PM, DM, and IMNM Serum HMGB1 was elevated in IIM compared with healthy controls (p < 0.001), however, levels varied markedly by IIM subtype ( Figure 5A). As such, patterns of serum HMGB1 expression differed from that observed within muscle. For instance, despite exhibiting notable sarcoplasmic staining, circulating levels of HMGB1 in IBM patients did not differ from controls. Conversely, high serum HMGB1 was detected in DM patients despite these patients exhibiting low sarcoplasmic levels. Patients with IMNM had notably high intramuscular and high serum HMGB1 levels. There were no significant differences in serum HMGB1 levels between IIM patients who had serum collected early in the disease process (within 6 months of diagnosis) and those whose serum collection was more delayed (Figure 5B). There was no correlation between serum HMGB1 levels in IIM patients and symptom duration. Sarcoplasmic and serum levels of HMGB1 did not correlate in the 58 patients with both samples available, possibly owing to the multifactorial nature of intramuscular HMGB1 expression and because these samples were collected at different time-points of the disease trajectory. Extramuscular features are associated with elevated serum HMGB1 levels High serum HMGB1 levels were observed in IIM patients with RP, ILD and inflammatory joint disease ( Table 2). Myalgia was common and was also associated with significantly elevated serum HMGB1. Myalgic patients also had more polyarthralgia (10/33, 30% versus 4/36, 11%, p = 0.046) compared with nonmyalgic IIM patients, but did not exhibit higher serum CK, myonecrosis or muscle inflammation. Together this suggests that myalgia may be systemically driven or reflect joint inflammation, rather than intramuscular pathology per se. Patients with anti-Ro52+ antibodies had significantly elevated serum HMGB1 levels ( Table 2). 
We did not observe any association between circulating HMGB1 levels and muscle disease activity measures, antisynthetase antibodies or the presence of cancer. DISCUSSION Understanding the molecular events underpinning IIM pathophysiology and how this differs between subtypes is critical in the pursuit of developing targeted therapies, which is increasingly the goal in rheumatological practice. Herein we evaluated expression of HMGB1 in all forms of IIM, including subtypes in which levels of expression of this protein have been hitherto unknown, such as IMNM and NSIIM. Structures demonstrating HMGB1 expression included infiltrating immune cells, necrotic myofibres and those exhibiting regeneration, autophagy and mitochondrial dysfunction. We additionally observed strong HMGB1 staining in all muscle nuclei and low-grade staining in our histologically normal controls. Given that small amounts of HMGB1 are present in the cytosol under normal cellular conditions (Kuehl et al., 1984; Tang et al., 2010), the sarcoplasmic HMGB1 observed in our control tissue is likely physiological. As such, we have shown HMGB1 to be associated with multiple processes in the muscle microenvironment ranging from physiological to pathological, damaging to restorative. The relative balance and temporal evolution of these processes may explain the varying degrees of HMGB1 expression we observed across IIM subtypes. While it seems paradoxical that one protein could be associated with multiple processes, it is in keeping with the complex functional properties of HMGB1. The biological actions of HMGB1 are strikingly diverse and evolve over time owing to its unique biochemistry, its ability to undergo post-translational modifications, to complex with other proteins and to signal through a multitude of receptors. As others have emphasized (Magna and Pisetsky, 2014), HMGB1 should be conceptualized as an ensemble of proteins rather than a single species with a fixed structure or function. Overall, our data imply a deleterious role for HMGB1 in IIM, as sarcoplasmic expression correlated with clinical disease activity and histological inflammation and necrosis. This is consistent with evidence demonstrating HMGB1 to have pro-inflammatory properties and to correlate with inflammatory disease activity. In addition to its cytokine and chemokine properties, HMGB1 can activate the classical pathway of complement (Kim et al., 2018), where complement deposition is a key mediator of myonecrosis (Engel and Biesecker, 1982). It is conceivable that HMGB1 released from necrotic myofibres could trigger further local muscle cytolysis and perpetuate further damage in a self-sustaining process. Therapeutic blockade of HMGB1 in inflammatory disease states is under consideration and has been evaluated in experimental models of drug-induced liver injury (Lundback et al., 2016) and inflammatory arthritis (Kokkola et al., 2003), with promising results. However, data herein and elsewhere link HMGB1 with ostensibly beneficial processes such as myofibre regeneration (De Mori et al., 2007; Tirone et al., 2018) and metabolic functions such as autophagy (Muth et al., 2015; Tang et al., 2010). Accelerated tissue regeneration is observed following administration of recombinant HMGB1 in mouse models of muscle injury (De Mori et al., 2007). This myoregenerative effect is particularly pronounced when HMGB1 is administered in a fully reduced isoform (Tirone et al., 2018).
As such, promoting certain HMGB1 pathways therapeutically might accelerate tissue repair in various clinical scenarios. Development of therapeutics that specifically inhibit isoforms of HMGB1 contributing to inflammatory pathology or promote those isoforms involved in reparative processes would clearly be the most desirable strategy. Of note, regenerating myofibres have been implicated in the elaborate pathophysiological mechanisms that underpin IIM (Tournadre and Miossec, 2013), and attempts to enhance myoregeneration with exogenous HMGB1 may not be prudent in these diseases. Further research evaluating the role of specific HMGB1 isoforms in IIM and the role of regenerating myofibres in disease perpetuation is clearly required before therapeutic intervention exploiting HMGB1 pathways can be considered. We observed elevated serum HMGB1 levels in IMNM, PM and DM patients, however, the source of circulating HMGB1 may differ between subtypes. In IMNM, this likely reflects rapid, passive release of HMGB1 into the extracellular space due to myonecrosis. Indeed, HMGB1 has been used as a marker of necrosis in experimental studies of tumor pathophysiology (Jeon et al., 2013;Kang et al., 2014). However, HMGB1 can also be released from activated immune cells present in inflamed tissues; this may explain the association between HMGB1 levels in serum and the presence of extra-muscular autoimmune manifestations, such as RP, joint disease and ILD. Previous studies have demonstrated elevated HMGB1 in the serum and/or broncho-alveolar fluid of IIM-related ILD and other inflammatory fibrotic lung conditions (Ebina et al., 2011;Shu et al., 2016;Ying et al., 2017;Shimizu et al., 2018). These results support a clinical role for measuring serum HMGB1 levels; this could supplement muscle biopsy in the subtyping of IIM and, potentially, screening for IIM-related ILD (Shu et al., 2016). Our findings suggest that this assessment will have discriminatory utility even in patients with well-established disease who are receiving immunomodulatory therapy, although further studies evaluating the sensitivity and validity of this minimally invasive test are required. Importantly, HMGB1 that is actively secreted by inflammatory cells undergoes critical post-translational modifications (acetylation) of 'nuclear localization sites' (NLSs) contained within the protein in order to exit the nucleus (Lotze and Tracey, 2005). Conversely, passive release does not involve NLS modification and thus necrotic cells do not generate hyperacetylated HMGB1 (Yang et al., 2015). An assay that differentiates these HMGB1 isoforms could allow clinicians to quantify the degrees of necrosis versus inflammation in individual patients, and could conceivably aid in IIM subtyping. The oxidation state of HMGB1 also differs according to the mechanism of cellular release but, considering this can rapidly alter in the extracellular milieu , assays determining the degree of NLS acetylation may have more discriminatory value. Unfortunately, HMGB1 isoform analysis is challenging, requires high-end mass spectrometry instrumentation and is currently successfully performed by only one research group worldwide on a collaborative basis (Yang et al., 2012). Development of further reliable isoform assays would advance scientific understanding regarding the role of HMGB1 isoforms in disease pathogenesis, information vital for clinical translation of therapeutics targeting HMGB1. This study has several limitations. 
It is descriptive in nature and the complex mechanisms underpinning the associations we have observed cannot be determined. We did not perform HMGB1 receptor staining or have access to HMGB1 isoform analysis, which would have provided added insights. Our sample size was small, owing to the rare nature of these disorders. However, this is a large study evaluating HMGB1 expression in IIM and the first to describe elevated levels in the muscle and serum of IMNM patients. These important findings may inform future critical mechanistic studies regarding the role of HMGB1 in autoimmune muscle disorders. DATA AVAILABILITY STATEMENT The datasets generated for this study are available on request to the corresponding author. ETHICS STATEMENT This study was approved by the Central Adelaide Local Health Network Ethics Committee, Adelaide, Australia. We confirm that we have read Muscle and Nerve's position on issues involved in ethical publication and affirm that this report is consistent with those guidelines. AUTHOR CONTRIBUTIONS JD: hypothesis revision, conducting experiments, acquiring and analyzing data, writing manuscript, and acquiring funding. VL: SAMD custodian, hypothesis initiation and revision, acquiring data, manuscript revision, acquiring funding, and providing materials. JH: hypothesis initiation and revision, providing funding, manuscript revision, and providing materials. SP: hypothesis revision, manuscript revision, acquiring data, and providing funding. PE: hypothesis revision, experimental design, and laboratory supervision of JD. PH and SO: hypothesis revision, acquiring data, and providing materials. KC: hypothesis revision, experimental design, conducting experiments, and laboratory supervision of JD. All authors: approval of the final manuscript.
2020-04-17T13:05:40.672Z
2020-04-17T00:00:00.000
{ "year": 2020, "sha1": "d04cf09f5d384e498ec304457ca4535dde41b299", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3389/fcell.2020.00226", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d04cf09f5d384e498ec304457ca4535dde41b299", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
96877467
pes2o/s2orc
v3-fos-license
The McClelland approximation and the distribution of π-electron molecular orbital energy levels Abstract: The total π-electron energy E of a conjugated hydrocarbon with n carbon atoms and m carbon–carbon bonds can be approximately calculated by means of the McClelland formula mn g E 2 ≈ , where g is an empirical fitting constant, g ≈ 0.9. It was claimed that the good quality of the McClelland approximation is a consequence of the fact that the π-electron molecular orbital energy levels are distributed in a nearly uniform manner. It will now be shown that the McClelland approximation does not depend on the nature of the distribution of energy levels, i.e., that it is compatible with a large variety of such distributions. The total π-electron energy E is one of the most thoroughly studied theoretical characteristics of conjugated molecules that can be calculated within the Hückel molecular orbital (HMO) approximation. 1,24][5][6][7][8] Long time ago McClelland proposed the simple approximate formula: 9 mn g E 2 ≈ (1 where n is the number of carbon atoms and m the number of carbon-carbon bonds, and where g is an empirically determined fitting parameter, g ≈ 0.9.In the meantime a large number of other (n,m)-type approximate expressions for E have been proposed, but, as demonstrated by detailed comparative studies, 10−13 none of these could exceed the accuracy of Eq. (1). In 1983 the present author discovered 14 that a result closely similar to Eq. ( 1) can be obtained by assuming that the HMO energy levels are uniformly distributed.Eventually such a distribution-based approach to E was elaborated in more detail. 15,16The conclusion of the works 14−16 was that the McClelland approximation (Eq.( 1)) is connected with the assumption that the HMO π-electron energy levels of conjugated hydrocarbons are distributed in a (nearly) uniform manner. GUTMAN The reasoning by means of which this conclusion was obtained will be briefly repeated. If λ 1 , λ 2 ,..., λ n are the Eigen values of the molecular graph representing the respective conjugated molecule, then: As is well known, 1,2 the graph Eigen values satisfy the relation: Without loss of generality, Eqs. ( 2) and (3) may be rewritten as: where Γ(x) is the probability density of the distribution of the graph Eigen values.It should be mentioned in passing that the exact expression for Γ(x) is: with δ denoting the Dirac delta-function. In situations when the actual form of the probability density Γ(x) is not known (i.e., when the spectrum of the molecular graph is not known), one tries to guess an approximate expression for it, denoted by Γ*(x), which must satisfy the conditions: and, of course, Γ*(x) ≥ 0 for all values of x.Then the quantity E*, is expected to provide a reasonably good approximation for the total π-electron energy E. In the works, 14,16 the simplest possible choice for Γ*(x) was tested, namely, The form of the function ( 7) is shown in Fig. 1. The parameters a and b can easily be determined from the conditions Eq. ( 4) and ( 5), resulting in By inserting the conditions given by Eq. ( 7) back into Eq.( 6), one obtains: E * = a 2 bn which combined with Eq. ( 8) yields: mn g E 2 * * = (9) with g* being a constant equal to 2 3 / .Not only is the algebraic form of the expression ( 9) identical to the McClelland approximation (Eq.( 1)), but also the value of the multiplier g* = 0.8660 is remarkably close to the (earlier) empirically determined value for g.Fig. 
1.The form of the probability density (Eq.7) for a = 3 and b = 1/6.The Eigen values of the molecular graph are assumed to be uniformly distributed within the interval (-a,+a), i.e., within the interval (a,-a), the probability density is assumed to be constant (equal to b).Outside this interval, the probability density is set to be equal to zero. Thus, it can be seen that by assuming a uniform distribution of the Eigen values of a molecular graph, the McClelland formula (Eq.( 1)) can be reproduced.What has hitherto been overlooked is that formula (1) can also be deduced by using many other probability densities. OBTAINING FORMULA (1) FROM A VARIETY OF MODEL FUNCTIONS Γ*(x) Suppose that the model based on Eq. ( 7) is required to be upgraded by including the information that the MO energies around the non-bonding level (corresponding to x = 0) are more numerous than those far from the non-bonding level, see diagram 1 in Fig. 2.This can be achieved by means of the function: for -a ≤ x ≤ +a and otherwise Γ*(x) = 0 (10) Then, by direct calculation in a fully analogous manner as described in the preceding section, formula ( 9) is obtained with g* = 4 5 / .970 GUTMAN If, however, the opposite is assumed, namely that the MO energies around the non-bonding level are less numerous than those far from the non-bonding level (see diagram 2 in Fig. 2), and therefore set 2 for -a ≤ x ≤ +a and otherwise Γ*(x) = 0 (11) then Eq. ( 9) is again obtained, this time for g* = 4 15 / .The model function Γ* may be made still more complicated, with two minima or two maxima (diagrams 3 and 4 in Fig. 2), i.e., for -a ≤ x ≤ +a and otherwise Γ*(x) = 0 (12) for -a ≤ x ≤ +a and otherwise Γ*(x) = 0 (13) but Eq. ( 9) is still obtained with g* = ) 2 12 /( 21 5 and g* = ) 187 2 /( 7 5 , respectively. Hitherto, it was required that the model function be symmetric with regard to x = 0, i.e., that Γ*(-x) = Γ*(x), i.e., that the pairing theorem be obeyed. 1,2However, even this plausible restriction is not necessary, as shown by the examples: for -a ≤ x ≤ +a and otherwise Γ*(x) = 0 ( 14) for -a ≤ x ≤ +a and otherwise Γ*(x) = 0 (15) Also the functions ( 14) and ( 15) imply the validity of Eq. ( 9), with g* = 16 3 3 / and g* = ) 2 16 /( 5 9 , respectively.The forms of the functions ( 14) and ( 15) are shown in diagrams 5 and 6 in Fig. 2. In order to further demonstrate the arbitrariness of the form of the model function that leads to the McClelland approximation, an example with a singularity at x = 0 was constructed (see diagram 7 in Fig. 2): for -a ≤ x ≤ +a and otherwise Γ*(x) = 0 (16) In spite of the (physically impossible) property of the model function ( 16) that Γ(x) → ∞ for x → 0, Eq. ( 9) is also obtained with g* = 3 5 / .10)), 2 (Eq.( 11)), 3 (Eq.12)), 4 (Eq.( 13)) and 7 (Eq.( 16)), the probability density is symmetric with respect to x = 0.In the models 5 (Eq.( 14)) and 6 (Eq.( 15)), it is chosen to be highly asymmetric.In the models 1 and 2, the probability density is chosen so as to have, respectively, a maximum and a minimum at x = 0.In the models 3 and 4, there are two maxima and two minima, respectively.In the model 7, the probability density has a singularity at x = 0.The parameters a and b are chosen to be the same as in Fig. 1. GUTMAN CONCLUDING REMARKS In the seven examples for Γ*(x) given in the preceding section, Eq. 
In the seven examples for Γ*(x) given in the preceding section, Eq. (9) is always arrived at, but the multiplier g* assumes different numerical values. In our opinion this detail is of lesser importance. Namely, it is possible to construct model functions Γ*(x) such that g* in Eq. (9) has any desired value. For instance, if for some parameter t the density is chosen according to Eq. (17), defined for −a ≤ x ≤ +a and vanishing otherwise, then, by varying the parameter t, the multiplier in Eq. (9) assumes values between 3√5/8 = 0.8385 and √15/4 = 0.9682, see Fig. 3. Therefore, the model function (17) can always be chosen so as to exactly "reproduce" the empirically determined value of g in the McClelland formula (Eq. (1)). This, of course, would be fully artificial and without any scientific justification. Fig. 2. Several probability densities resulting in approximate expressions for the total π-electron energy of the McClelland type. In the models 1 (Eq. (10)), 2 (Eq. (11)), 3 (Eq. (12)), 4 (Eq. (13)) and 7 (Eq. (16)), the probability density is symmetric with respect to x = 0. In the models 5 (Eq. (14)) and 6 (Eq. (15)), it is chosen to be highly asymmetric. In the models 1 and 2, the probability density is chosen so as to have, respectively, a maximum and a minimum at x = 0. In the models 3 and 4, there are two maxima and two minima, respectively. In the model 7, the probability density has a singularity at x = 0. The parameters a and b are chosen to be the same as in Fig. 1. Fig. 3. Dependence of the multiplier g* in Eq. (9) on the parameter t of the probability density (17). In order that Γ*(x) be positive valued, it must be t > −1.
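The numerical values of g* quoted above can be checked directly. The explicit functional forms of Eqs. (10)-(17) are not reproduced in this text, so the densities in the short Python check below (it needs numpy and scipy) are only plausible reconstructions: they are chosen because they satisfy conditions (4) and (5) on the interval (−a, +a) and reproduce the multipliers quoted for the uniform case and for the two limiting values of the family (17). None of them should be read as the original equations.

import numpy as np
from scipy.integrate import quad

def g_star(density, a=1.0):
    # g* = <|x|> / sqrt(<x^2>) for a density supported on (-a, +a);
    # with <x^2> fixed to 2m/n by condition (5), E* = n<|x|> = g*·sqrt(2mn).
    norm = quad(density, -a, a, points=[0.0])[0]
    mean_abs = quad(lambda x: abs(x) * density(x), -a, a, points=[0.0])[0]
    mean_sq = quad(lambda x: x**2 * density(x), -a, a, points=[0.0])[0]
    return (mean_abs / norm) / np.sqrt(mean_sq / norm)

# uniform density, Eq. (7): g* = sqrt(3)/2 = 0.8660
print(g_star(lambda x: 1.0))
# density with a minimum at x = 0 (proportional to x^2): g* = sqrt(15)/4 = 0.9682
print(g_star(lambda x: x**2))
# density with a maximum at x = 0 (proportional to 1 - x^2/a^2): g* = 3*sqrt(5)/8 = 0.8385
print(g_star(lambda x: 1.0 - x**2))
# density singular at x = 0 (proportional to |x|^(-1/2)): g* = sqrt(5)/3 = 0.7454
print(g_star(lambda x: 0.0 if x == 0 else abs(x) ** -0.5))
# a one-parameter family 1 + t*(x/a)^2 interpolating between 0.8385 (t -> -1) and 0.9682 (t -> infinity)
for t in (-0.99, 0.0, 1.0, 100.0):
    print(t, g_star(lambda x, t=t: 1.0 + t * x**2))

In every case g* equals ⟨|x|⟩/√⟨x²⟩ and is independent of a, which is exactly why a single constant multiplies √(2mn) in Eq. (9) regardless of the assumed shape of the distribution.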
2018-11-29T12:56:48.511Z
2007-01-01T00:00:00.000
{ "year": 2007, "sha1": "d60a846a055c3ac6011578924a3a05bf72d32c10", "oa_license": null, "oa_url": "http://www.doiserbia.nb.rs/ft.aspx?id=0352-51390710967G", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "d60a846a055c3ac6011578924a3a05bf72d32c10", "s2fieldsofstudy": [ "Chemistry", "Physics" ], "extfieldsofstudy": [ "Chemistry" ] }
225736160
pes2o/s2orc
v3-fos-license
Electromagnetic and microwave absorption properties of iron pentacarbonyl pyrolysis-synthesized carbonyl iron fibers The present study executed iron pentacarbonyl pyrolysis to synthesize one-dimensional structured carbonyl iron fibers (CIFs) via carrier gas induced flow. The obtained CIFs, with a diameter of 100–300 nm and length–diameter ratio of more than 20, are actually composed of a large number of nanocrystalline aggregates. We investigated the dependence of the structure, morphology, and static magnetic and electromagnetic properties of the CIFs on the pyrolysis temperatures. CIFs synthesized at 300 °C (denoted as CIF-300) exhibited optimal microwave absorption properties dependent on the fiber structure and well-matched impedance. An optimal reflection loss of −58.1 dB was observed at 13.8 GHz with a matching thickness of 1.43 mm. Furthermore, CIF-300 presented a broad effective absorption bandwidth (RL ≤ −10 dB) of 5.66 GHz with a thickness of 1.44 mm, indicating that it could be applied in practical applications from 3.74 GHz to 18.0 GHz by tuning its thickness from 1.0 mm to 4.0 mm. This paper not only reveals that the CIFs synthesized at 300 °C have great potential application in microwave absorbing materials (MAMs) with thin thicknesses, wide absorption bandwidths, and strong absorption intensities, but also provides a simple approach to prepare metal fibers. Introduction Nowadays, electromagnetic (EM) interference and pollution in both the military and governmental arenas have become serious concerns due to the rapid development of communication technology, thus requiring the urgent development of high-efficiency microwave absorbing materials (MAMs). Past significant efforts have been devoted toward exploring ideal lightweight, thin-thickness, and broad-absorption-bandwidth MAMs.1–3 In general, MAMs can be categorized into two loss mechanisms: dielectric loss and magnetic loss.4,5 Magnetic-loss MAMs exhibit strong absorption, thin thickness, and broad absorption bandwidth, which benefit from both dielectric and magnetic loss.6,7 Carbonyl iron (CI) has received recognition as compared to other MAMs due to its high saturation magnetization (M s), high Curie temperature, substantial magnetic permeability, and significantly wide absorption bandwidth.8 Nonetheless, as a magnetic metal MAM, CI has a relatively high density (~7 g cm−3), as well as a high filling content when used in MAMs (generally more than 70 wt%),9–11 thereby resulting in a significant increase in EM absorber weight. In our previous research,12 the effect of particle size on the electromagnetic properties of spherical CI was investigated, and the filling content reached 80 wt%. Although flake CI, obtained by ball milling, shows improved microwave absorption performance, its filling content is still high.13–16 In recent years, hollow or porous structured CI has also been employed in high-performance MAM applications. Yin et al.17 adopted pitting corrosion to prepare hollow CI microspheres. The calculated reflection loss (RL) of the hollow CI for 0.5 mm or 1.0 mm thickness is much better than that of the initial material, whereas the filling content is still as high as 80 wt%. Wang et al.18 developed an annealing and selective pitting corrosion method for the synthesis of porous CI flakes, which presented an RL of less than −20 dB over the range of 2.9 GHz to 20 GHz following thickness tuning from 0.9 mm to 4.5 mm. Nan et al.
19 construct synchronous microstructure with Fe micro-bers and ake CI, thin and light microwave absorbers operating in the broadband range were obtained under the state of low content because of the synergy between the two components and the synchronous orientation of the llers. The ber magnetic MAMs have received recognition for their novel chemical and physical properties. 20,21 Novel electroless plated Fe-Co binary hollow bers exhibit excellent EM properties within GHz frequency ranges at a lling content of only 30 wt% as well as adjustable so magnetic properties through their ber alignments. 22 Shen et al. 23 synthesized magnetic hexaferrite SrFe 12 O 19 /a-Fe composite nanowires that presented an optimized RL value of À51.1 dB at a thickness of 3.0 mm and an absorption bandwidth exceeding À20 dB encompassing the entire G-band, X-band, and 20% Ku-band at a thickness of 4.0 mm. The one-dimensional (1D) ber structure can enable magnetic MAMs to maintain good absorption performance while the density and lling content of MAMs are greatly reduced. 22,23 Magnetic metals with ber structures have been examined to completely comprehend the potential of metallic MAMs as EM absorption materials due to their lightweight and even distribution in composite materials. Barakat et al. 24 adopted a two-step method of electrospinning and calcination to prepare CoNi bimetallic nanobers, which revealed better magnetic properties compared with those of Co-doped Ni and pristine Ni nanobers. Nie et al. 25 and Li et al. 26 fabricated iron bers and Fe 55 Ni 45 bers with diameter of 5 mm by magnetic-eld-induced thermal decomposition, respectively. However, the required equipment in their experiment is complex. Therefore, it is of great signicance to prepare 1D iron bers with enhanced EM properties through a facile process. In the present work, we report a carrier gas induced ow process for the fabrication of 1D structured carbonyl iron bers (CIFs) to achieve lightweight and high-efficiency MAMs. Compared to the magnetic-eld-induced thermally decomposition process of metal bers reported in previous research, [25][26][27] our method does not require complex equipment. The phase structure, morphology, static magnetic, and EM properties of CIFs were thoroughly determined. We also systematically examined the optimized pyrolysis temperatures of iron pentacarbonyl (Fe(CO) 5 ) that were required to produce highperformance CIFs, as well as its microwave absorption performances and loss mechanism. The CIFs synthesized at 300 C, termed CIF-300, exhibited a thin absorber thickness, strong absorption intensities, and broad absorption bandwidth. Besides, this work provides a simple approach to prepare metal bers (such as iron bers or nickel bers) as efficient MAMs. Fig. 1 presents a schematic illustration of the CIFs synthesis process, which followed the pyrolysis of Fe(CO) 5 . CIFs generation was executed based on the following equation: Fe(CO) 5 ¼ Fe + 5CO. 20 mL of Fe(CO) 5 (99.9%; supplied by Shaanxi XingHua Chemical Co. Ltd., China) was rst decanted into the evaporator, aer which argon (Ar) gas was injected into the experimental system for 2 h to remove the air in the tube furnace and evaporator. Subsequently, the argon supply of the evaporator was turned off. Aer the tube furnace was heated and maintained under the required temperature, the argon supply of the tube furnace was shut off and the argon supply of the evaporator was turned on. 
Fe(CO)5 vapor with Ar gas was continuously injected into the tube furnace at a rate of 1 L min−1 for 1 h. Product collection proceeded using a magnet in the conical flask. The system was permitted to cool down to room temperature under the protection of Ar gas, after which the CIFs were obtained. The final CIFs were denoted as CIF-250, CIF-300, CIF-350 and CIF-400, corresponding to pyrolysis temperatures of 250 °C, 300 °C, 350 °C and 400 °C, respectively. Characterizations The present study employed Rigaku D/max-2400 X-ray diffraction (XRD) with Cu Kα irradiation at 40 kV and 40 mA to measure the CIFs' phase structure. Scanning electron microscopy (SEM, ZEISS Merlin Compact) was employed to characterize the CIFs' morphology. The carbon contents and surface chemical composition of the CIFs were analyzed by a high-frequency infrared carbon/sulphur determinator (LECO CS230) and X-ray photoelectron spectroscopy (XPS, EscaLab 250Xi). Raman spectra were obtained by a cryogenic matrix-isolated Raman spectroscopic system (LabRAM HR Evolution). A Quantum Design PPMS vibrating sample magnetometer (VSM) was employed to characterize the CIFs' hysteresis loops. The CIFs were uniformly mixed with paraffin wax at a mass ratio of 45 wt% to measure the complex permittivity (ε r = ε′ − jε″) and permeability (μ r = μ′ − jμ″). Typically, 0.55 g of paraffin wax was melted in a crucible, 0.45 g of CIFs was added, and the mixture was stirred quickly until even. After cooling, the mixture was ground with a mortar, then melted and stirred again; this was repeated 3 times. Finally, the powder was filled into a coaxial clapper mold and compacted into a circular coaxial sample with an outer diameter of 7.0 mm, an inner diameter of 3.04 mm and a length of 2.0-3.5 mm. A vector network analyzer (Keysight E5071C) was employed to quantify ε r and μ r via the transmission/reflection coaxial line method within 2 GHz to 18 GHz. Structure and morphology The present study characterized the crystalline structures of CIFs synthesized at various pyrolysis temperatures (250 °C, 300 °C, 350 °C, and 400 °C) by XRD (Fig. 2). All of the samples exhibited three well-resolved diffraction peaks at 2θ = 44.8°, 65.1°, and 82.4°, which can be indexed to the (110), (200), and (211) planes of α-Fe with a body-centered cubic structure (JCPDS card no. 06-0696),18 respectively. The intensities of the diffraction peaks of α-Fe are enhanced with increasing pyrolysis temperature, indicating gradual improvements in the Fe nanoparticle crystallinity. The average grain sizes of CIF-250, CIF-300, CIF-350, and CIF-400, estimated by the Scherrer formula, were 12.0, 15.8, 16.4 and 17.9 nm, respectively, showing that increasing the pyrolysis temperature enhances grain growth. Fig. 3 presents the morphologies of the synthesized CIFs at various pyrolysis temperatures. The CIFs, with diameters ranging from 100 nm to 300 nm, are formed by the aggregation of nanocrystalline grains. All four samples demonstrate rough fiber surfaces, with some exhibiting granular protuberances, and are essentially assembled by the accumulation of nanoparticles. The length-diameter ratios of CIF-250 and CIF-300 are more than 50. Yet, when the pyrolysis temperature is further increased, the length-diameter ratio decreases, and that of CIF-400 is about 20. It can be seen that the pyrolysis temperature significantly affects the nanostructures of the CIFs.
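The grain sizes quoted above follow from the Scherrer analysis of the α-Fe reflections; a minimal Python sketch of that estimate is given below. The peak width used in it is a hypothetical value inserted only to show the order of magnitude, not a number taken from the measured patterns.

import numpy as np

def scherrer_size_nm(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.89):
    # D = K*lambda / (beta*cos(theta)), with beta the peak FWHM in radians
    theta = np.radians(two_theta_deg / 2.0)
    beta = np.radians(fwhm_deg)
    return K * wavelength_nm / (beta * np.cos(theta))

# alpha-Fe (110) reflection at 2*theta = 44.8 deg, Cu K-alpha radiation;
# the 0.55 deg FWHM below is illustrative, not a measured value
print(scherrer_size_nm(44.8, 0.55))   # roughly 15 nm, the scale of the grain sizes quoted above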
On the one hand, Fe(CO) 5 vapor introduced into the tube furnace is decomposed at a certain temperature to obtain iron nanocrystalline grains. Under the induction of airow and spontaneous magnetization of magnetic nanocrystal grains at high temperature, iron nanocrystal grains aggregate, assemble and grow to form CIFs. At a given decomposition temperature, the decomposition reaction rate and the growth rate of the crystal are slow, the heat and mass transfer at the crystal growth interface is stable, and the crystal grows evenly in all directions and forms a granular shape. On the other hand, with the increase of pyrolysis temperature, the decomposition rate of Fe(CO) 5 is accelerated. Due to the imbalance of heat and mass transmission, the growth rate of each crystal surface in the crystal is of great different, resulting in the length-diameter ratio of the ber and the particles that make up the CIFs become smaller. Furthermore, the nanocrystalline ber particles presented a signicant number of gaps, thus increasing the specic surface area and reducing the CIFs density. The carbon contents of CIF-250, CIF-300, CIF-350, and CIF-400 were 1.22, 2.05, 2.86, and 2.74 wt%, respectively, which were rst increased with the pyrolysis temperatures, and then decreased. During the pyrolysis process of Fe(CO) 5 , two chemical reactions occur: Fe(CO) 5 The rst reaction is endothermic, and the second is exothermic. The increased pyrolysis temperature is conducive to promote the activation energy of molecules in the rst reaction, intensifying the pyrolysis of Fe(CO) 5 and increasing the concentration of CO, which is benecial to the second reaction, thus increasing the percentage of carbon. However, the excessive temperature will inhibit the pyrolysis reaction of CO, thereby reducing the carbon content in the CIFs. In order to investigate the element composition and atom valence state of the CIFs, the XPS spectrum of CIF-300 was analyzed, as shown in Fig. 4. The XPS survey spectrum demonstrates that Fe, C, and O elements were presented on the surface of CIF-300 (Fig. 4a). In detail, signals of Fe 3p, Fe 3s, C 1s, O 1s, Fe 2p, Fe LMM, and O KLL can be seen. Therefore, O elements may originate from surface iron oxides. For the C 1s HR-XPS spectrum (Fig. 4b), the three peaks are corresponded to C-C/C]C (284.6 eV), C-O (285.0 eV), and C]O (288.4 eV), respectively. The O 1s HR-XPS spectrum can be deconvoluted into two peaks as displayed in Fig. 4c. Peaks at 529.8 eV belong to the metal oxides, and peaks at 531.1 eV belong to C]O. For the Fe 2p HR-XPS spectrum (Fig. 4d) Raman spectrum was utilized to further analyze the state of Fe element in CIFs, and the results are shown in Fig. 5. All the samples exhibit ve distinct peaks located at 218 cm À1 , 282 cm À1 , 413 cm À1 , 601 cm À1 , and 1308 cm À1 , respectively, which are corresponded to Fe 2 O 3 phase. 28,29 The Raman peak appearing at 218 cm À1 assigned to A 1g mode, and those at 282 cm À1 and 611 cm À1 were assigned to E g modes. 28 Whether the peak observed at 1308 cm À1 is classied as a hematite twomagnon scattering is still controversial. 29 In addition, the peak located at 1340 cm À1 cannot be identied because it has a similar position to that of a peak of Fe 2 O 3 . Fe 2 O 3 comes from the oxidation of CIFs in the air. Since there is no characteristic peak of Fe 2 O 3 in the XRD analysis, the content of Fe 2 O 3 should be very low and only exist on the surface of CIFs. 
Static magnetic properties The magnetic hysteresis loops of the synthesized CIFs are presented in Fig. 6, which demonstrate typical magnetization hysteresis loops of ferromagnetic materials with high saturation magnetization (M s ) and low coercivity (H c ) values. The M s values for CIF-250, CIF-300, CIF-350, and CIF-400 are 175.8, 205.8, 179.8, and 200.8 emu g À1 , respectively. Obviously, the pyrolysis temperature should be the main reason for the difference of M s values. On one hand, the crystalline degree and grain size of Fe nanoparticles are enhanced through the high pyrolysis temperature, which aids in organizing the magnetic domain orders under an external eld and enhances M s . 30 On the other hand, the M s value is also signicantly dependent upon the number of atoms per unit volume. 5 The content of carbon, which is non-magnetic and makes little contribution to magnetization, were rst increased and then decreased with the pyrolysis temperatures. Both of these two facts are considered to be responsible for the uctuation of M s . The M s value increase as the pyrolysis temperature increased from 250 to 300 C, indicating that the enhancement of grain size on M s is greater than the inhibition of carbon content on M s . When the pyrolysis temperature increased to 350 C, the M s value decreased, indicating that the inhibition of carbon content on M s is greater than the enhancement of grain size on M s . As the pyrolysis temperature further increased to 400 C, the increasement of M s is mainly due to the drop in carbon content. The lower-right inset of Fig. 6 presents details surrounding zero magnetic elds, wherein CIF-250, CIF-300, CIF-350, and CIF-400 present H c values of 60.5, 21.6, 190.1, and 175.7 Oe, respectively. The H c signicantly correlates with the grain size and magnetic anisotropy, 12 such that grain size growth increases the H c for grain sizes smaller than the single domain critical size of iron (28 nm). 31 The XRD results revealed a CIFs grain size of less than 28 nm for all of the synthesized samples, indicating the effect of the grain size on the observed uctuations in H c . Electromagnetic properties The 3 r and m r of the CIFs/paraffin composites loading with 45 wt% of CIFs were investigated within 2 GHz to 18 GHz to characterize the EM properties of the synthesized CIFs. Based on the transmission line theory, the 3 0 and m 0 dene the storage capability of electric and magnetic energy, respectively, whereas the 3 00 and m 00 dene the loss capability of electric and magnetic energy, respectively. 32 Fig. 7a and b present the 3 0 and 3 00 of the CIFs synthesized at different pyrolysis temperatures, respectively. Both 3 0 and 3 00 of CIF-250, CIF-300, and CIF-350 drastically decrease in the entire measured frequency range. When the pyrolysis temperatures increased from 350 C to 400 C, the 3 0 and 3 00 decline slightly. CIF-250 demonstrates the highest 3 0 and 3 00 values than the other three samples. This may be due to the decrease in the length of the CIFs as the pyrolysis temperature increases. When the lling ratio is the same, the electrical conductivity of the CIFs/paraffin composites decrease, resulting in a decrease of 3 r . The dielectric loss tangent (tan d e ¼ 3 00 /3 0 ) of the samples are shown in Fig. 7c. CIF-250 presents a relatively high tan d e across the entire tested frequency range due to its high energy storage and dissipation abilities. 
In comparison, the other three samples present constant tan d e values within 2 GHz to 6 GHz, of which the tan d e gradually decreased with the increased pyrolysis temperatures across the remaining frequencies. The resonant peaks around 8, 13, and 17 GHz suggest the execution of several polarization relaxation processes in the composites beneath the radiation of rotating EM waves. a gradual decrease tendency with increasing frequency. Notably, among the four samples, CIF-300 exhibits the highest m 0 at the majority of the test frequency range, indicating that CIF-300 has higher magnetic energy storage capabilities as compared to CIF-250, CIF-350, and CIF-400. CIF-250 shows the highest m 00 at the 2 GHz to 5 GHz. In comparison, CIF-400 exhibits the highest m 0 and tan d m within 7 GHz to 18 GHz, indicating its optimal magnetic energy dissipation capabilities. A positive correlation is observed between the CIFs tan d m and the pyrolysis temperatures within 7 GHz to 18 GHz. Furthermore, the curves of tan d m present three obvious resonance peaks around 4 GHz and 14 GHz. The magnetic loss the GHz range is largely originated from natural ferromagnetic resonance and eddy current loss. 33 The magnetic loss only initiate from the eddy current loss given the constancy of C 0 ¼ f À1 (m 0 ) 2 m 00 regardless of frequency. 34 However, the C 0 values of the CIFs synthesized at various pyrolysis temperatures varies within 2 GHz to 13 GHz (Fig. 8) and do not exhibit signicant changes in the remaining frequencies. Therefore, the resonance peaks around 4 GHz are assumed to originate from natural resonances, whereas the peaks around 14 GHz are a result of eddy current loss. Microwave absorption properties Based on the transmission line theory, the RL of a single-layer absorber could be evaluated as follows: 35 where Z in is dened as the absorber input characteristic impedance, Z 0 denes the intrinsic impedance of the free space, c is the velocity of light, d is the absorber thickness, and f is the incident EM wave frequency. The RL value of common applications must be lower than À10 dB to allow 90% of the incident EM wave to be attenuated. The RL was evaluated according to the eqn (1) and (2) to investigate the effect of the pyrolysis temperatures on the microwave absorption characteristics of the CIFs. Fig. 9 presents the 3D plots of the calculated RL for the synthesized CIFs at variable absorber thicknesses varying from 0.5 mm to 4.0 mm. Although all of the samples exhibit the incident EM wave loss, their concrete performances differ greatly and are sensitive to the pyrolysis temperature. CIF-250 exhibit relatively poor microwave absorption properties (Fig. 9a). A minimum reection loss (RL min ) of only À14.24 dB at 17.94 GHz is observed at an absorber thickness of 1.14 mm. The RL values of CIF-300 exceed À10 dB for 3.74 GHz to 18 GHz with the absorber thicknesses ranging from 1.0 mm to 4.0 mm (Fig. 9b). The RL min reach up to À58.1 dB at 13.8 GHz with an absorber thickness of 1.43 mm. When the pyrolysis temperature increased from 350 C to 400 C, CIF-350 and CIF-400 present RL min of up to À20.67 dB at 12.98 GHz and À19.47 dB at 8.38 GHz with absorber thicknesses of 1.85 mm and 2.67 mm, respectively. The RL peaks of the CIFs shi to the low frequency area as the absorber thickness increased (Fig. 9). 
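The single-layer reflection-loss expressions referred to above as eqn (1) and (2) are not reproduced in this extraction; they are the standard metal-backed transmission-line relations Z in = Z 0 √(μ r /ε r ) tanh(j2πfd√(μ r ε r )/c) and RL = 20 log10 |(Z in − Z 0 )/(Z in + Z 0 )|. A minimal Python sketch is given below; the ε r and μ r values in it are made-up, frequency-independent numbers used only for illustration (the measured CIF data are dispersive), so the printed dip position and depth are not those of CIF-300.

import numpy as np

def reflection_loss_db(freq_hz, d_m, eps_r, mu_r, c=2.998e8):
    # normalized input impedance of a metal-backed single layer; Z0 cancels out
    z_in = np.sqrt(mu_r / eps_r) * np.tanh(1j * 2 * np.pi * freq_hz * d_m * np.sqrt(mu_r * eps_r) / c)
    return 20 * np.log10(np.abs((z_in - 1) / (z_in + 1)))

# hypothetical, non-dispersive material parameters (not the measured CIF-300 values)
eps_r = 9.0 - 2.5j     # eps' - j*eps''
mu_r = 1.2 - 0.6j      # mu'  - j*mu''
f = np.linspace(2e9, 18e9, 801)
rl = reflection_loss_db(f, d_m=1.43e-3, eps_r=eps_r, mu_r=mu_r)
print(f[np.argmin(rl)] / 1e9, rl.min())   # frequency (GHz) and depth (dB) of the RL dip

Increasing d_m in this sketch moves the dip to lower frequency, which is the quarter-wavelength behaviour discussed next.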
The thickness dependence of the RL peaks can be illustrated through the quarter-wavelength matching model: t m = nc/(4f m √(|ε r ||μ r |)) (n = 1, 3, 5, ...),4 where t m defines the matching thickness of RL. Fig. 10a presents the effective absorption bandwidth (RL ≤ −10 dB) of the synthesized CIFs. All four samples exhibit effective absorption bandwidths of more than 4 GHz within the thickness range of 1.24 mm to 1.75 mm. CIF-300 exhibits a maximum bandwidth of 5.66 GHz with a thickness of 1.44 mm, which is the broadest of all four samples. Further, the inset of Fig. 10a presents the bandwidth corresponding to RL of less than −20 dB. CIF-300 exhibits a bandwidth of more than 1.8 GHz with a thickness of 1.78 mm, while for the other three samples the bandwidths corresponding to RL of less than −20 dB can hardly be seen. Fig. 10b compares the RL min of the CIFs synthesized at various pyrolysis temperatures, wherein the RL values (below −10 dB) of CIF-250 and CIF-300 are observed over a wide thickness range of 1.0 mm to 4.0 mm. In comparison, the RL values of CIF-350 and CIF-400 are below −10 dB at thicknesses ranging from 1.27 mm to 4.0 mm. The RL values of CIF-300 below −20 dB are concentrated in the thickness range of 1.1 mm to 2.5 mm, whereas the RL values of the other three specimens barely reach −20 dB. Smaller absorber thickness is ideal for practical applications. Therefore, CIF-300 demonstrates the optimal microwave absorption performance at relatively thin coating thicknesses in contrast to the other three samples. Furthermore, the performance of the CIFs was compared with that of typical absorbers reported in the literature (Table 1). CIF-300 demonstrates broader absorption bandwidth, lower loading content, lower RL min value, and smaller matching thickness d as compared to those of hollow CI,17 porous CI flakes,18 aligned Fe microfibers,21 SrFe 12 O 19 /α-Fe nanowires,23 Fe 55 Ni 45 fibers,26 and polycrystalline iron fibers.36 Compared to those of spherical CI,12 flaky CI,16 and Fe nanowires,37 the loading content and matching thickness d of CIF-300 are relatively lower, which is beneficial for reducing the weight of the absorbing coating. Moreover, the flaky CI14 exhibits broader absorption bandwidths and lower matching thickness; however, its loading content and RL min value are also higher than those of CIF-300. Although the Fe 10 Ni 90 submicron fibers20 have a lower filling content, their RL min and absorption bandwidth values are also lower. Thus, the CIFs synthesized at the pyrolysis temperature of 300 °C, demonstrating stronger absorption intensities, broader absorption bandwidth, and thinner thicknesses, are ideal for high-performance practical MAMs. In general, the impedance matching and the attenuation constant (α) significantly affect microwave absorption. MAMs with excellent absorption properties exhibit characteristic impedances that are equal or close to that of free space, thereby reaching zero reflection. In addition, these samples should also present sufficiently strong EM wave attenuation abilities. A delta-function method effectively evaluates the impedance-matching degree,38,39 in which the parameters K and M are determined by ε r and μ r. Based on these equations, a smaller delta value corresponds to better EM impedance matching. Fig. 11 presents the contour plots of the quantified delta for the CIFs synthesized at various pyrolysis temperatures.
The green area (D < 0.4) of CIF-300 is larger than the other three samples. The green area of CIF-250 and CIF-300 are more concentrated in the thin absorber thickness region as compared to CIF-350 and CIF-400. The RL and the delta values of CIF-300 ( Fig. 9b and 11b) exhibit the same Fig. 10 Comparison of (a) effective absorption bandwidth (RL # À10 dB) and (b) RL min values of CIFs synthesized at various pyrolysis temperatures, wherein the insert in (a) denotes the bandwidth correlating to a RL of less than À20 dB. trend, indicating that the enhanced microwave absorption properties of CIF-300 possibly due to good EM impedance matching. The microwave absorption performance of MAMs is also signicantly dependent upon the inside microwave attenuation abilities, expressed as the attenuation constant (a). The a values of the CIFs synthesized at various pyrolysis temperatures were evaluated as follows: 32 As demonstrated in Fig. 12, the a of all the samples are gradually increased with the increasing of frequency. The values of a decrease with the increasing of pyrolysis temperatures. When the pyrolysis temperatures elevated from 350 C to 400 C, the values of a decline slightly. CIF-250 and CIF-300 show larger a than the other two samples, particularly in the high-frequency range, indicating superior attenuation ability for the incident EM waves. Based on these outcomes, CIF-300 shows a balanced impedance matching and enhanced attenuation constants, thus demonstrating excellent microwave absorption performance. Conclusions In summary, the present study synthesized 1D structured CIFs via a carrier gas (Ar) induced ow process. We characterized the phase structure, morphology, static magnetic, and microwave absorption performance of the CIFs. The obtained CIFs with diameters of 100 nm to 300 nm and length-diameter ratios of more than 20 aggregated a large number of nanocrystalline grains. CIF-300 exhibited excellent microwave absorption properties with a ling content of 45 wt%, which presented an optimal reection loss of À58.1 dB at 13.8 GHz with an absorber thickness of 1.43 mm. Moreover, CIF-300 presented a broad effective absorption bandwidth (RL # À10 dB) of 5.66 GHz (thickness of 1.44 mm) and can be used for practical applications between 3.74 GHz to 18.0 GHz by tuning its thickness within 1.0 mm to 4.0 mm. The results promote the CIFs synthesized at 300 C for potential MAMs applications with thin thicknesses, wide absorption bandwidths, and strong absorption intensities. Conflicts of interest There are no conicts to declare.
2020-06-25T09:04:34.739Z
2020-06-19T00:00:00.000
{ "year": 2020, "sha1": "19c377ef04b0b6bcf68c9f925059a91f8c562f54", "oa_license": "CCBYNC", "oa_url": "https://pubs.rsc.org/en/content/articlepdf/2020/ra/d0ra00222d", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2e7fa061f8bf05090824e9390df9d52b0b8bcfdc", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
244831457
pes2o/s2orc
v3-fos-license
Spin-thermoelectric effects in a quantum dot hybrid system with magnetic insulator We investigate spin thermoelectric properties of a hybrid system consisting of a single-level quantum dot attached to magnetic insulator and metal electrodes. Magnetic insulator is assumed to be of ferromagnetic type and is a source of magnons, whereas metallic lead is reservoir of electrons. The temperature gradient set between the magnetic insulator and metallic electrodes induces the spin current flowing through the system. The generated spin current of magnonic (electric) type is converted to electric (magnonic) spin current by means of quantum dot. Expanding spin and heat currents flowing through the system, up to linear order, we introduce basic spin thermoelectric coefficients including spin conductance, spin Seebeck and spin Peltier coefficients and heat conductance. We analyse the spin thermoelectric properties of the system in two cases: in the large ondot Coulomb repulsion limit and when these interactions are finite. Another possibility of conversion of spin waves to electronic spin current and vice versa has been studied in a hybrid system involving both metallic and magnonic reservoir [45][46][47] . Efficient conversion of spin current can be also achieved by coupling the magnetic insulator and metallic electrodes through a quantum dot 48,49 . Spin thermoelectric effects have been also observed in antiferromagnetic hybrid systems. Specifically, thermal generation of spin current from the insulating antiferromagnets through the longitudinal spin Seebeck effect has been reported 50 and described theoretically 51 . Thermally generated spin transport in magnetic multilayered structures consisting of nonmagnetic metals, antiferromagnetic insulators and/or ferromagnetic insulators has been recently studied [52][53][54][55] . Moreover, a large enhancement of thermally generated spin current has been reported in a hybrid system with normal metal, antiferromagnetic and ferromagnetic insulators layered structure 56 . Apart from that, giant magneto-spin-Seebeck effect has been predicted in all-insulating spin valve with antiferromagnetic insulator sandwiched between two ferromagnetic insulator layers 57 . In the present paper we investigate spin thermoelectric effects in a system consisting of a quantum dot coupled to magnetic insulator and metallic leads. Magnetic insulator is a source of magnons, whereas magnetic metal is a reservoir of electrons. In turn, the QD works as a converter of spin current of magnonic type to spin current of electronic type and vice versa. The process of converting magnon current to electron spin current by means of temperature difference set between the two leads is schematically drawn in Fig. 1 and can be understood as follows. Assume that the whole system is placed in a magnetic field B directed opposite to the z-axis and the dot level is split due to this field ε ↓,↑ = ε d ± gµ B B/2 . For the sake of simplicity, assume that intradot Coulomb repulsion is infinitely large (then the dot can be occupied at most by one electron) and dot's bare level ε d is placed at the chemical potential of metallic lead µ . Therefore, QD is occupied by an electron with spin up orientation as ε ↑ < µ . Next, assume that temperatures of magnetic insulator and metallic electrodes are set to be T m > T e . In this situation, magnons flow from the magnonic reservoir to the dot. Absorption of a magnon by QD excites the spin-↑ electron which is simultaneously accompanied by (its) spin-flip process. 
As a result, spin-↓ electron of energy ε ↓ can flow from the QD to metallic lead emptying it. Furthermore, an electron with spin ↑ can tunnel from metallic lead to the dot. Thus, the magnon current flowing from the magnonic reservoir is converted into pure spin current of electronic type in the metallic electrode. On the other hand, when the temperature of the electronic reservoir is higher than that of the magnonic one, T m < T e , the spin-flip processes on the dot excite magnons in the magnonic reservoir which is associated with conversion of spin current of electronic type to magnon current. The paper is organized in the following way: section "Theoretical description" contains the theoretical description of the considered system and it is divided into three parts. In the first part we describe the details of the model. The second part is devoted to the derivation of electron and heat current formula, whereas in the third Figure 1. Schematic picture showing the idea of converting magnon current to electron spin current by means of temperature difference. The red parabola symbolizes the magnon reservoir, whereas blue curve stands for density of electrons in the metallic lead. The blue area below the curve denotes states occupied by electrons and the white space above the curve are empty states. Furthermore, red color is associated with higher temperature than blue one, i.e. T m > T e . Zeeman split dot's energy level is depicted by two black solid horizontal lines. The splitting of dot's energy level equals �ε = ε ↓ − ε ↑ = gµ B B . Magnon, carrying energy gµ B B , is depicted as red wavy arrow, whereas blue dot with vertical arrow denotes an electron with a given spin. www.nature.com/scientificreports/ part we introduce linear response theory for spin thermoelectric effects. In section "Results and discussion" we describe the obtained results and provide the discussion of them both for large U limit and for case of finite U. Finally, we provide short conclusion section. Theoretical description Model Hamiltonian . The system taken into consideration consists of a single-level quantum dot (QD) attached to magnetic insulator (MI) and metallic electrodes and is schematically presented in Fig. 1. The system is under the influence of an external magnetic field B. The system is modeled by Hamiltonian of the form: where the first term, H e = kσ ε kσ c † kσ c kσ , describes electrons in the left metallic lead. Here, ε kσ is the singleparticle energy of an electron with a wavevector k and spin σ =↑, ↓ . The second term describes a single-level quantum dot and acquires the form: with ε dσ = ε d −σ gµ B B/2 denoting the dot's level energy. The dot's degeneracy is lifted by an external magnetic field B ( ε d is the bare dot's level energy). Here, g is the Lande factor of the dot, µ B is the Bohr magneton, while σ = +(−) for σ =↑ (↓) . The second term in (2) refers to the intradot Coulomb repulsion between electrons of opposite spins with U being the relevant Hubbard parameter. Tunneling of electrons between the QD and metallic lead is described by: where V kσ are the corresponding tunneling matrix elements. Magnetic insulator is described by the H m in (1) which is modeled by Heisenberg Hamiltonian restricted to the nearest neighbours interactions; Here, i, j denotes summation over nearest neighbours, J ex ( J ex > 0 ) is the corresponding nearest-neighbour exchange integral, while g m is the Lande factor of the magnetic insulator. Note that Lande factors for QD and magnetic insulator differ. 
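The Heisenberg Hamiltonian of the magnetic insulator and the resulting magnon dispersion are not typeset in this extraction. In standard notation, and up to the sign and bond-counting conventions used in the original, the model described above reads H m = −J ex Σ ⟨i,j⟩ S i · S j + g m µ B B Σ i S i z, and after the Holstein-Primakoff expansion quoted below its quadratic part yields a magnon band of the form ε q = g m µ B B + c J ex S z(1 − γ q ), where the numerical prefactor c (1 or 2) depends on whether each nearest-neighbour bond is counted once or twice in the sum. The essential point for what follows is that the band minimum at q = 0 equals g m µ B B, so the resonance condition ε q = Δε = gµ B B can only be met when g ≥ g m.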
Due to the large tunability of quantum dots, one can meet the criterion g ≥ g m which allows for nonzero magnon current. This condition is essential from the point of view of magnon filtering as only magnons with energy equal to the Zeeman splitting of the dot level can be transferred through the QD i.e. only when the equality holds, ε q = �ε with �ε = gµ B B . Note also that QD and magnetic insulator are placed in the same magnetic field B. In the following we assume B > 0 as only in this case both energy and angular momentum conservation can be obeyed when absorbing/emitting a magnon. Introducing the operators S ± αi = S x αi ± iS y αi and performing the Holstein-Primakoff transformation 58 : Assuming that �a † i a i �/(2S) ≪ 1 one can expand the square roots and rewrite the Hamiltonian (4) in the Fourier space taking only quadratic terms as follows; where ε q is the spin wave energy for the wavevector q and acquires the form, Here, z denotes the number of nearest neighbors and γ q = 1/z δ l e iq·δ l is a geometric factor that depends on the crystal structure where the sum is over the position vectors δ l of the nearest neighbors. Note that condition g ≥ g m becomes clear when one considers the above dispersion relation together with energy conservation ε q = �ε . Apart from that, we neglect higher-order terms of the expansion which can lead to temperature dependence of the magnonic dispersion relation. These corrections are small for relatively low temperatures, and thus, can be neglected 59 . For completeness, we also neglect variation of the spontaneous magnetization of magnetic insulator with temperature. This is justified as long as we assume a low temperature regime and take into account the relatively small range of temperature in our considerations 60 . The last term in Eq. (1) describes exchange coupling between the quantum dot and the magnetic insulator and making the same procedure, as for magnetic insulator Hamiltonian, it can be expressed as; where j q depends generally on the distribution of interfacial spins and on coupling between these spins and the quantum dot. Here, this coupling will be treated as a parameter. Method. In order to calculate spin current generated by the temperature difference between magnonic reservoir (magnetic insulator) and metallic electrode we employ Pauli's master equation method which correctly describes the transport properties for weak coupling regime. Thus, we assume that the couplings with external electrodes are treated perturbatively. The master equations in the stationary limit can be written as; with α = e denoting rate associated with tunneling of electrons, whereas α = m corresponds to magnon rate. The electron tunneling rates acquire the form; is the Fermi-Dirac distribution function with µ σ being the electrochemical potential in the metallic electrode α = e for spin σ , while T e is the corresponding temperature. Furthermore, and Ŵ σ e denotes tunneling strength between metallic lead and dot which is assumed to be independent on energy in accordance with wide band approximation. This allows us to parametrize the coupling strength as Ŵ σ e = 2π�|V kσ | 2 �ρ e = Ŵ e for σ =↑, ↓ , where �|V kσ | 2 � is the corresponding average over k and ρ e stands for the density of electron states in the metallic lead α = e . In turn, the magnonic tunneling rates are nonzero only for transitions between the dot's states | ↑� and | ↓� and are given by; where σ = +1 for σ =↑ and σ = −1 for σ =↓ . 
Here, n + (ε) = [exp(ε/k B T m ) − 1]−1 is the Bose-Einstein distribution function. Apart from that, Γ m stands for the coupling strength between the dot and the magnonic reservoir and can be written as Γ m = 2π⟨|j q | 2 ⟩ρ m, with ρ m being the density of magnon states in the insulating lead and ⟨|j q | 2 ⟩ denoting the relevant average. Generally, the density of magnon states reveals a nontrivial dependence on energy. In contrast to the transport of electrons, for which only states within the range k B T around the Fermi level are crucial and the density of states can thus be regarded as flat, in the case of bosons (here magnons) the wide-band approximation does not work in general and the explicit energy dependence of the density of states should be considered. However, in the case of a (two-dimensional) yttrium iron garnet structure, its density of states can be considered constant over a relatively large range of energy61. Employing this feature, we assume an energy-independent coupling strength to the magnonic reservoir. After calculating the relevant transition rates by using Eq. (8) and the corresponding probabilities using Eq. (7), we obtain the magnon current J m flowing from the magnonic reservoir to the dot, whereas the spin current flowing from the magnonic electrode is given by J m s = −ℏJ m. As we assumed the case of B > 0, each magnon carries spin angular momentum with the z component equal to −ℏ; the magnon current J m, defined as the number of magnons transmitted from the magnonic reservoir to the QD in a unit time, is therefore equal to the corresponding spin current divided by −ℏ. Therefore, the spin current and the magnon current have opposite signs. In turn, the spin current flowing from the metallic electrode is determined by angular momentum conservation, J e s = −J m s. J e s can be directly expressed by means of the corresponding charge currents flowing in the two spin channels of the electronic reservoir, i.e. J e s = (ℏ/2e)(I e ↑ − I e ↓ ). As no net charge current can flow through the system, one concludes that I e ↑ + I e ↓ = 0. Finally, one derives the formula for the spin current flowing from the magnonic reservoir to the electronic one, relevant for U → ∞,49 given by Eq. (12); in turn, the heat current associated with the magnonic current is given by Eq. (13). Similarly, one can obtain the formulas for the spin and heat currents for finite values of the parameter U; however, we do not present them here as they acquire more complex forms. Spin thermoelectric effects-linear response theory. Previously, we introduced the spin-dependent chemical potential µ σ in the metallic lead, which may be induced by spin accumulation or may result from an externally applied spin bias. The spin bias V s is given by eV s ≡ Δµ s = µ ↑ − µ ↓. Thus, one can write µ σ = µ ± Δµ s /2 (14), where the upper (lower) sign corresponds to σ = ↑ (σ = ↓). Generally, the temperatures associated with the two spin channels can be different. Here, we neglect this effect and assume the same temperature for both spin species, i.e. T ↑ e = T ↓ e ≡ T e. Furthermore, we parametrize the temperatures in the metallic and magnonic reservoirs by T α = T ± ΔT/2, where the upper (lower) sign corresponds to α = m (α = e) and ΔT = T m − T e is the temperature bias. Assuming that the temperature and spin biases are small, i.e. ΔT ≪ T and Δµ s ≪ µ, one expands the magnon (spin) and heat currents, Eqs. (12) and (13), up to linear order and obtains the linear-response relations (15), where G s is the spin conductance, which for U → ∞ acquires the form given in Eq. (16), with F(x) = k B T exp(x/k B T). Apart from that, κ s = (ε ↓ − ε ↑ )L s is the magnetic contribution to the heat conductance (in the absence of spin bias, i.e.
when �µ s = 0 ) and L s = ε ↓ −ε ↑ T G s . Note that the above linear response matrix reflects Onsager symmetry. The singularity of the Onsager matrix corresponds to the so-called tight coupling limit 62,63 , for which the strict proportionality between the heat and magnon currents occurs. This feature leads to far-reaching consequences that will be described in the next section. Spin conductance derived for arbitrary U is presented in the Supplementary Information. Results and discussion Spin Seebeck and spin Peltier effects. Defining spin Seebeck coefficient as spin voltage drop generated by temperature difference under condition of vanishing spin current one obtains; In turn, spin Peltier coefficient is defined as ratio of heat current to spin current under condition of vanishing temperature bias; Note that both spin Seebeck and Peltier coefficients acquire the above forms disregarding the value of parameter U i.e. S s and π s are described by the same formulas for finite U and for U → ∞ cases. Both S s and π s are functions of energy transferred by magnon ( ε ↓ − ε ↑ = gµ B B ) and don't depend on dot's level position. Especially, in the case of the latter coefficient the dependence on magnon energy is physically clear as it is equal to energy exchanged between external leads. It clearly shows how much heat is carried per unit particle (magnon). Moreover, spin Seebeck and spin Peltier coefficients are directly related with each other resembling the same symmetry between corresponding coefficients of the conventional thermoelectric phenomena. In the case of spin counterparts of thermoelectric effects, the spin Peltier phenomenon can be regarded as the back-action of the spin Seebeck effect i.e. the spin Seebeck effect will drive a spin current which by means of spin Peltier effect will transfer the heat from the hot to the cold junction. In turn, the spin Seebeck coefficient is proportional to energy carried by magnon and inversely proportional to the temperature. Zero temperature limit should be regarded carefully as no magnons can be created and thus the spin Seebeck coefficient vanishes as temperature tends to zero. However, this case is excluded as we assumed that T ≪ T . Note also that utilized here master equation method requires condition k B T ≫ Ŵ , and thus, the results are reliable only when the condition is fulfilled. The temperature dependence of spin Seebeck coefficient leads to high values of S s for low temperature regime i.e. for k B T ≪ gµ B B , which means that one has to apply a relatively large spin bias voltage to compensate thermally-induced spin current. In turn, for higher temperatures it is easier to compensate thermally-induced spin current as the spin Seebeck coefficient decreases with increasing temperature. This feature is a consequence of the competition between Bose-Einstein and Fermi-Dirac distributions. On the one hand, the number of magnons in the magnetic insulator reservoir grows with increasing temperature and one naively expects that more magnons can be transferred through the system. On the other hand, smearing the Fermi distribution around the Fermi level as temperature grows leads to a decreasing rate of tunneling electrons through the junction between QD and metallic lead. Heat conductance. 
Defining the heat conductance as the ratio of the heat current to the temperature bias under the condition of vanishing spin current, one quickly concludes that κ = 0 for both the finite-U and U → ∞ cases as a consequence of the tight-coupling limit. This leads to the figure of merit Z s T ≡ G s S s ²T/κ → ∞, indicating that the device works at Carnot efficiency (see Supplementary Information for the proof). This is a straightforward consequence of the vanishing heat current when the spin current is assumed to be zero [compare Eq. (12) with Eq. (13)]. In other words, when the built-up spin bias Δµ s is induced by the temperature difference ΔT, it compensates both the spin and heat currents. This phenomenon is in strong opposition to the case of a purely electric system with both electrodes being reservoirs of electrons, where vanishing of the charge current does not imply vanishing of the heat current, i.e. a flux of electrons flowing from the hot reservoir to the cold one transfers higher energy than the same flux of electrons flowing from the cold to the hot electrode, which leads to a finite heat conductance. One should note that in real systems phonons transfer energy and will contribute to the thermal conductance. Hence, the lattice thermal conductance will remove the infinity of ZT, although ZT may still be large. However, one should remember that ZT is a linear-response quantity which characterizes the device's performance close to zero power and gives only little insight outside the linear-response regime. Usually, ZT → ∞ does not give maximal efficiency at finite power output. Moreover, when the Carnot efficiency is achieved the system must be reversible, and then usually the power output vanishes64. Finally, introducing the above-defined transport coefficients, Eq. (15) can be rewritten in a form which also clearly shows that the heat conductance κ vanishes. Spin conductance. Limit of U → ∞. As the spin conductance (16) acquires a more complex form, we calculated it numerically for various sets of parameters and present the obtained results graphically. In Fig. 2 we show the dependence of the spin conductance on the dot's level position for indicated values of temperature (a) and applied magnetic field (b), calculated under the condition of infinitely large on-dot Coulomb repulsion (U → ∞). For the sake of simplicity we assumed that µ = 0 and Γ e = Γ m. First of all, one can notice that the spin conductance is not symmetric with respect to zero dot's level position, which can be attributed to the fact that the two leads are of different types, one fermionic and the other bosonic. Generally, the position of the maximum in the conductance is a function of both the applied magnetic field and the temperature and can be found analytically; with x = gµ B B/k B T, one can deduce that for symmetric coupling, Γ e = Γ m, and for finite temperature the maximum is situated at a positive value of the dot's level position. Moreover, with increasing temperature (for a given magnetic field) the maximum of the conductance moves away from zero to positive values of the dot's level energy, and simultaneously the width of the peak of the spin conductance grows. The last feature results from the temperature dependence of the Fermi function. The intensity of the spin conductance is a function of both the temperature and the applied magnetic field, i.e. the energy of the magnon. Figure 2a and the inset show that the maximum of the peak is a nonmonotonic function of temperature.
Firstly, it grows with increasing the temperature and after reaching maximal value at certain temperature it decreases with further increase of temperature. In turn, when increasing the magnetic field the maximum of spin conductance monotonically decreases as shown in Fig. 2b. This behavior www.nature.com/scientificreports/ follows directly from the Bose-Einstein distribution function, which leads to a decrease of magnons' density with increasing magnetic field and consequently to lower transmission of the magnons. In turn, temperature dependence of the spin conductance results rather from the competition between Bose-Einstein and Fermi-Dirac distributions, similarly as temperature dependence of spin Seebeck effect explained earlier. An increase of temperature leads to enhancement of density of magnons in the magnetic insulator electrode and simultaneously it smears the Fermi distribution around the Fermi level. As a result, for low temperatures there are not many magnons and consequently small magnon current is flowing, and hence, small spin conductance. For higher temperatures more magnons are excited in the magnonic reservoir, and thus, larger spin conductance is noticed. However, further increase of temperature leads to decrease of transmitted magnons despite its increasing density in the magnonic reservoir. This effect can be understood by looking at the temperature dependence of the Fermi distribution. For sufficiently high temperature the distributions of electrons (in the metallic electrode) with energies ε = ε ↓ and ε = ε ↑ differ only a little. Thus, the probability of tunneling of an electron with spin σ to or from the metallic electrode becomes more and more similar with increasing temperature which leads to suppressions of charge currents in both spin channels and consequently spin current becomes diminished. Furthermore, the width of the peak rather weakly depends on the magnetic field-it slowly grows with increasing B. Moreover, the position of the maximum of the spin conductance moves to lower values of dot's energy level with increasing the magnetic field (at constant temperature) oppositely to the temperature dependence described above. In Fig. 3 we present spin conductance dependence on the dot's level position calculated for different values of (a) [(b)] coupling strengths to magnonic [electronic] reservoir with constant coupling to electronic [magnonic] one. One can notice that the width of the resonance in spin conductance only weakly depends on coupling to the magnonic reservoir and up to value Ŵ m = 2Ŵ e is almost constant. For larger values of Ŵ m , i.e. for Ŵ m > 2Ŵ e , small increase of the width can be observed. In turn, the width of the peak becomes larger with increasing Ŵ e . On the other hand, intensity of spin conductance grows monotonically with increasing any of the couplings due to enhancement of magnon and electron tunneling rates. Moreover, one can notice that for asymmetric couplings the conductance's maximum can be situated at positive or negative dot's level energies depending on the ratio Ŵ m / Ŵ e . Specifically, when Ŵ m / Ŵ e > tanh (gµ B B/2k B T) the maximum occurs for positive values of dot's energy level, whereas for Ŵ m / Ŵ e < tanh (gµ B B/2k B T) it is located at negative values of ε d . When the equality holds, Ŵ m / Ŵ e = tanh (gµ B B/2k B T) , spin conductance becomes symmetric with respect to ε d = 0 . Thus, for a given ratio of couplings Ŵ m / Ŵ e one can obtain this symmetry by properly tuning the ratio B/T. 
Inversely, when the B/T ratio is set, the symmetry can be recovered by proper selection of Ŵ m / Ŵ e ratio. Case of finite U. In this section we consider an influence of finite intradot Coulomb repulsion on spin thermoelectric coefficients. In Fig. 4 spin conductance dependence on the dot's level position for indicated values of temperature (a) and applied magnetic field (c) is presented. The main difference in respect to the U → ∞ case, presented in Fig. 2, is a double peak structure. One peak in spin conductance is associated with resonance at ε d ≈ 0 , whereas the second maximum appears in the vicinity of ε d = −U . The latter peak is present only for finite U values. The minimum between the maxima is located at ε min d = −U/2 + µ/k B T , and thus, assuming µ = 0 one obtains ε min d = −U/2 . Moreover, the intensities of maxima in the spin conductance follow the same behavior with changing temperature and applied magnetic field as those calculated for the U → ∞ case. However, here the positions of maxima exhibits slightly different behavior than for the U → ∞ case. Specifically, at low temperature limit the positions of the maxima are located at ε 0max d 0 and ε Umax d −U . Furthermore, with increasing temperature the maxima move away from each other until the temperature reaches the critical value www.nature.com/scientificreports/ T c1 which for assumed parameters and for gµ B B = k B T 0 equals to T c1 /T 0 ≈ 2.564 . For this temperature the separation between the maxima is the largest. Further increase of temperature leads to shrinking of the separation and for certain temperature T c2 the maxima occur for ε 0max d = 0 and ε Umax d = −U i.e. when T c2 /T 0 ≈ 3.881 for gµ B B = k B T 0 . For temperature T > T c2 the positions of both maxima become negative and move closer to each other. Finally, both maxima merge into one maximum which occurs for temperature T c3 ( T c3 /T 0 ≈ 5.506 for gµ B B = k B T 0 ). In Fig. 4c and d we show spin power factor corresponding to spin conductance displayed in Fig. 4a and b, respectively. The power factor is defined as; and determines the effectiveness of heat to spin current conversion in the linear response regime. The power factor is symmetric with respect to the particle-hole point given by ε d = −U/2 . One can notice that the power factor achieves large values in the low temperature regime and drops with increasing temperature owing to temperature dependence of spin Seebeck coefficient [see Eq. (17)]. In turn, the power factor is a nonmonotonic function of the applied magnetic field. For sufficiently low or sufficiently large magnetic fields it becomes suppressed, whereas for moderate magnetic fields the power factor achieves maximal values, which follows from peculiar dependence of spin conductance and spin thermopower on magnetic field. Conclusions In summary, we have analyzed spin thermoelectric properties of a quantum dot coupled to a metallic electrode and magnetic insulator. We have considered two cases: with infinite intradot Coulomb repulsion ( U → ∞ ) and with finite values of U. In both cases the spin Seebeck and spin Peltier coefficients acquire the same forms and don't depend on dot's level position. We provided analytical formulas for these coefficients which showed that spin Seebeck coefficient depends on temperature and applied magnetic field, whereas spin Peltier coefficient equals the energy carried by a magnon. We have also shown that spin Seebeck and spin Peltier coefficients are related via Onsager reciprocal relation. 
Additionally, we have shown that in the considered system the heat conductance vanishes, which means that the system operates at the Carnot efficiency.
Sex differences in biological aging with a focus on human studies Aging is a complex biological process characterized by hallmark features accumulating over the life course, shaping the individual's aging trajectory and subsequent disease risks. There is substantial individual variability in the aging process between men and women. In general, women live longer than men, consistent with lower biological ages as assessed by molecular biomarkers, but there is a paradox. Women are frailer and have worse health at the end of life, while men still perform better in physical function examinations. Moreover, many age-related diseases show sex-specific patterns. In this review, we aim to summarize the current knowledge on sexual dimorphism in human studies, with support from animal research, on biological aging and illnesses. We also attempt to place it in the context of the theories of aging, as well as discuss the explanations for the sex differences, for example, the sex-chromosome-linked mechanisms and hormonally driven differences. Introduction - a short overview of the field Aging is a complex biological process characterized by hallmark features accumulating over the human life course, including mitochondrial dysfunction, telomere attrition, epigenetic alterations, genomic instability, loss of proteostasis, cellular senescence, imbalanced metabolism, stem cell exhaustion, decreased autophagy function, and immune aging (Kennedy et al., 2014;López-Otín et al., 2013;Ferrucci et al., 2020). These features, along with others and the complicated interactions between them, describe the aging process and shape the individual's aging trajectory and subsequent disease risk. There is substantial individual variability in the aging process, with some individuals living independently in their 90s while others need help in daily routines earlier in life. In animals, isogenic populations, such as a certain mouse strain in a lab, still portray considerable variability in lifespan (Yuan et al., 2009). Due to the increasing number of individuals reaching the oldest ages, identifying the underpinnings of the healthspan, the disease-free period of life, has become more pivotal than finding the determinants of a long lifespan. Here, we distinguish between lifespan and healthspan where possible. In the section on age-related diseases, we focus on those diseases that the World Health Organization (WHO) has listed as the major causes of death at old age, commonly considered to end the period of healthspan. In general, women live longer than men, consistent with lower biological ages as assessed by molecular biomarkers (Jylhävä et al., 2017), but there is a paradox. Women are frailer and have worse health at the end of life. While men still perform better on physical function examinations (Austad and Fischer, 2016;Gordon et al., 2017), women outlive men. The survival benefit in women is also seen across nonhuman mammals, where some species present a greater median difference in lifespan than humans, although aging rates are similar across sexes. There is also an increasing sex ratio in humans with age, such that there are ~50 men per 100 women among 90-year-olds and ~25 among 100-year-olds (Ritchie, 2019). These differences may be attributed to biological and sociocultural aspects; however, despite improved health care systems, public health initiatives, and increased health awareness, this so-called 'gender gap' persists (Ritchie, 2019).
Hence, there is a pressing need to better understand the underpinnings of the sex differences in aging, not only from an equity point of view but also toward personalized medicine approaches to tackle age-related decline and diseases more efficiently (Cohen and Beamish, 2014;Ostan et al., 2016). At present, there is relatively limited information on whether biological aging presents differently in men and women. The reason for this lack of knowledge may be rooted in the long tradition of male-biased research sampling in preclinical studies and clinical trials (Holdcroft, 2007). For safety reasons, women have not been the norm in clinical trials where precaution is made for harmful treatments in fertile and pregnant women. Hormonal fluctuations due to menstruation are another reason for excluding women, and women using contraceptives should be stratified into different treatment groups, resulting in increased sampling and cost. As an example, for many decades, research on cardiovascular disease was largely male-biased, resulting in risk calculations and clinical guidelines that did not meet the needs of women, who often present with a different risk profile than men (Schenck-Gustafsson, 2009). In animal research, male models are more commonly used because of the assumed increased female variability (Beery and Zucker, 2011). A recent study investigated more than 200 traits in 27,000 male and female mice and concluded that sexual dimorphism in variability is trait-specific; neither males nor females are more variable overall (Zajitschek et al., 2020). Therefore, to reflect the sex-specific pattern, it is imperative to include both sexes in all types of biomedical research (Zucker and Beery, 2010). Hence, this review aims to focus on sex differences in biological mechanisms of aging in human studies, with some parallel examples from animals included. While we acknowledge that there are other important gender and psychosocial aspects of aging, they fall beyond the scope of this review and are thus not discussed here. An attempt to summarize what is known in light of current theories of aging and sexual dimorphism studies is also performed. Why is the biology of aging different in men and women? There are multiple theories on aging available (Cohen and Beamish, 2014;Zajitschek et al., 2020;Jin, 2010). Here, we present the two main groups of biological aging theories: the senescent theory of aging and the programmed theory of aging. The senescent theory builds on the belief that damage, random errors, and drift occur for different reasons as we age, which eventually leads to less capacity for maintenance and resilience. The subtheories are: 1. Disposable soma: faults accumulate in somatic cells as they get worn out across life (Jin, 2010;Kirkwood and Shefferson, 2017), 2. Reactive oxidative species (ROS) theory of aging: free radicals and oxidative damage across the lifespan cause damage (Jin, 2010;Liochev, 2013), 3. Mutation accumulation: somatic DNA mutations accumulate in cells and tissues that cause errors (Jin, 2010), and 4. Rate of living theory: increased energy metabolism escalates the production of free radicals that in turn accelerate organismal senescence and reduce lifespan (Lints, 1989;Pearl, 2011). The programmed theory of aging suggests that aging is tightly regulated, similar to a biological clock, and contains subcategories: 1. 
Hayflick limit: discovered in the 1960s -at a time when the senescence theory of aging was the only prevailing theory -and it was shown that the number of times a cell can divide is finite and preset in the cell's DNA (Bengtson and Settersten, 2016), 2. The central aging clock was proposed in 1975 as a 'hypothalamic clock' or with the pineal gland as a central clock regulator (Rattan, 2019), and 3. Developmental processes and growth, embryonic development, and aging are driven by the same molecular mechanisms (Feltes et al., 2015). The first group of theories covers the whole lifespan, where processes such as mutation accumulation occur throughout life. However, the critical effects are manifested in late life, and therefore, no selection against them takes place. In contrast, programmed theories may be more relevant in explaining healthspan. Menopause and andropause typically align with the end of healthspan in women, and they are considered to result from a series of programmed events. There have been multiple theories presented to explain why men and women age differently, as they differ in life expectancy, levels of frailty, and biological aging, reviewed here (Austad and Fischer, 2016;Fischer and Riddle, 2018;Maklakov and Lummaa, 2013;Sampathkumar et al., 2020). The two best described biological explanations for the sex difference are the sex-chromosomal linked mechanisms and the hormonal driven differences in biology, which we describe further below. Sex-chromosomal linked mechanisms As men and women are born with different sets of chromosomes, the double X version in women versus the XY in men, there are apparent phenotypic differences because of this. Men are thus more susceptible to X-linked recessive diseases, for example hemophilia, and there may be many more age-related traits driven by X-chromosomal variation leading to sex-specific effects than we currently know (Maklakov and Lummaa, 2013;Marais et al., 2018). Because of chromosomal sex differences, compensatory effects are in place that are susceptible to changes across the lifespan, such as X-chromosomal inactivation (XCI) in women and loss of Y (LOY) in men (described in more detail below). Hence, there is no doubt about the importance of sex chromosomes in the biology of aging, and the effects may be more pronounced due to increased genomic instability as we age. Moreover, this theory fits well with the programmed aging theory that everything is set in the genes. For the sex differences in aging, likely, X and Y chromosomal effects do not explain the full range of the biological differences, and other sex-specific genetic factors may contribute to the programmed theory of aging. For example, mitochondrial inheritance (and selection) takes place through the maternal line (Marais et al., 2018), and women have a survival advantage already in utero (Austad and Fischer, 2016), although the latter could be driven by hormonal factors as well, which we describe next. Sex-hormonal effects Sex-specific hormones are essential for many biological differences seen in men and women. The hypothalamus regulates hormonal release from the gonads through the pituitary in response to different stimuli. The most common groups of sex steroids are androgens (testosterone), which are mostly present in men, estrogen (estradiol, estrone, and estriol), and progestogens highly abundant in women. The lifelong influence of sex steroids begins already in utero, giving rise to sex differences in neuroanatomy and neurochemistry. 
A wealth of animal studies has shown how manipulating sex steroid levels during this period causes permanent changes in neuronal architecture (for a detailed review, see Fitch and Denenberg, 1998). During pregnancy, estrogen is first produced by the corpus luteum and later by the placenta and maintained at high levels so that both sexes are exposed equally. Estradiol has been attributed to the regulation of many central processes, such as neurogenesis and cell migration, both in the hypothalamus and corpus callosum (Fitch and Denenberg, 1998). In a male fetus, testosterone is produced by the Leydig cells that develop during the first trimester and produce a testosterone surge during the second trimester. Masculinization of the male fetal brain is brought by testosterone, which enters the brain, where it is converted to estradiol via the aromatase enzyme. In addition to giving rise to dimorphic phenotypic and sexual characteristics, perinatal hormonal exposure plays a significant role in sex-specific metabolic programming, manifested as different risk profiles for metabolic diseases between men and women later in life (Dearden et al., 2018). Insights into sex-specific influences on the prenatal period have also been obtained by studying the effects of the nutritional status of mothers. The 'Thrifty Phenotype Hypothesis' was presented in 1991 by Hales and Barker who observed an association between low birth weight, indicative of reduced fetal growth, and adverse cardio-metabolic risk profile in adulthood (Hales et al., 1991). Observations in individuals born to mothers who were pregnant during the Dutch Hunger Winter, a period of famine during the second World War in the Netherlands, have provided insights into how undernutrition affects late life disease risk, with varying effects depending on the sex of the fetus. For example, women have been reported to exhibit more unfavorable adiposity traits, such as higher body mass index (BMI) and waist circumference as well as disrupted lipid profiles compared to men, whereas men seem to be more vulnerable to neurological damage (Dearden et al., 2018). Somewhat conversely, girls' BMI level is more affected by the mother's overweight and obesity before and during pregnancy compared to boys (Dearden et al., 2018). Sex hormones are responsible for the most marked endocrine changes with aging. In women, menopause demarcates the period of reproductive aging that manifests as low ovarian hormone secretion, occurring on average at the age of 50 years. However, the underlying biological drivers of menopause begin earlier with compensatory hypothalamic and pituitary mechanisms in place (Hall, 2015). A similar sharp decrease in testosterone levels is not seen in men. Male andropause is thus more difficult to define, with the decrease in testosterone levels occurring more slowly, on average at the rate of 1% per year (Singh, 2013). The threshold at which the symptoms of decreasing testosterone levels start to manifest shows great between-individual variability, and many men are asymptomatic despite very low levels of testosterone (Singh, 2013). A third significant age-related endocrine change affecting both men and women is the gradual decrease in the adrenal production of dehydroepiandrosterone (DHEA) and DHEA sulfate, termed adrenopause (Papierska, 2017). DHEA, often called adrenal androgen, is converted to testosterone and estradiol in peripheral tissues. 
In old men, up to 50% of sex hormones originate from the conversion of DHEA to testosterone, whereas in postmenopausal women, DHEA is the source of almost all estrogens (Papierska, 2017). Although the physiological importance and exact mechanism(s) of action of DHEA are not entirely understood, it is believed to have significant antiaging effects, such as improving cognitive function and anti-inflammatory activity, as well as being antiatherosclerotic and antiosteoporotic (Nawata et al., 2004). In women, sex hormones play a crucial role in healthspan and lifespan. Estrogen exposure, defined as the reproductive lifespan, is the most commonly used approach for assessing hormonerelated risks. Interestingly, the risks are known to differ for different outcomes. A shorter reproductive lifespan has been associated with decreased odds of longevity (living until a certain high age, e.g. centenarians) (Shadyab et al., 2017) and a higher risk of cardiovascular (CVD) events (Mishra et al., 2021) but a lower risk of mortality from gynecological cancers (Wu et al., 2014). Furthermore, the risks may also be age varying. A large study pooling individual-level data from 15 observational studies has shown that women with premature and early menopause have an increased risk of nonfatal CVD events before the age of 60 years but not after 70 years (Zhu et al., 2019). Giving further support for age-and cause-varying risks, female hormone replacement therapy (HRT) was associated with a reduced risk of mortality in younger women (<60 years) and a reduced risk of mortality due to causes other than CVD or cancer in women of all ages (Salpeter et al., 2004). However, another study found that the reductions in all-cause and CVD mortality risks due to HRT are greatly diminished with increasing age, regardless of the age at first use or duration of the HRT (Stram et al., 2011). Hence, it is likely that HRT is not able to bring the same benefit to lifespan as a longer (partly genetically determined) exposure to natural estrogen does. In middle-aged and older men, higher endogenous testosterone levels are associated with a lower risk of all-cause CVD and cancer mortality (Khaw et al., 2007). However, the relationship between male hormones and lifespan is complex. The (rather grotesque) examples of castrations of mentally ill institutionalized men (Hamilton and Mestler, 1969) and Korean eunuchs (Min et al., 2012) suggest that withdrawal of male sex hormones results in a longer lifespan compared to noncastrated men. On the other hand, testosterone HRT shows beneficial effects on some aspects of health, and although side effects are also noted, the overall effects on mortality seem to be mostly beneficial (Maklakov and Lummaa, 2013;Tyagi et al., 2017). However, abuse of testosterone in athletes can cause serious adverse effects and premature death (Frati et al., 2015). DHEA and DHEA-S have also been studied for their associations with mortality. Although the findings are rather mixed, there is some support for low DHEA/DHEAS levels to increase mortality risk in older men, whereas in women, the association may be weaker or U-shaped (Ohlsson et al., 2015). In summary, there is support for the importance of sex hormones in aging, further in line with the central aging clock theory on a unified control system for the regulation of aging (Rattan, 2019). However, hormones may also interfere with the level of ROS production (Coluzzi et al., 2019), in line with the ROS theory of aging (Gladyshev, 2014). 
Sex differences in biological aging While a growing body of evidence is accumulating on the relevance of biomarkers of aging in human health and mortality, understanding the sex-specific features of these markers is lagging behind. Not only has the effect of sex been largely ignored but is also often considered a confounder rather than a source of biological variation. Treating sex merely as a confounder or a 'nuisance parameter' can lead to results that are not biologically relevant to either sex. In the following sections, we discuss the available literature on sex differences in humans, with supportive evidence from animals, for the most commonly studied biological processes and markers of aging and highlight the key lessons learned from these studies so far. An overview of the topic and a conceptual framework is presented in Table 1 and Figure 1. Genetic factors in aging The last two decades have been a revolution for human genetics, starting with the sequencing of a human genome in 2003 and breakthroughs in genome-wide association studies finding thousands of genetic loci associated with complex human traits, including many age-related diseases. For aging, gene discoveries have been sparse, although lately, large cohorts such as the UK Biobank have enabled powerful analyses of parental lifespan, healthspan, and longevity (Timmers et al., 2019;Zenin et al., 2019;Melzer et al., 2020). However, only a handful of genes have been identified, and the top loci are often well known for their relation to diseases, for example APOE, LPA, and CDKN2B-AS1. Longevity is known to be moderately heritable (Melzer et al., 2020); however, from an evolutionary perspective, natural selection is active for the reproduction of a species and not for maximizing lifespan. A recent study using human genotype data found that rare germline mutational burden was associated with lifespan and healthspan (Shindyapina et al., 2020). In particular, the association between mutations and healthspan was more pronounced in women. Another recent study found measured and genetically predicted levels of ten serum biomarkers to be associated with healthspan and lifespan, and again with stronger effects for healthspan seen in women (Li et al., 2021). Hence, many genes may be linked to the underlying aging process or beneficial for age-related diseases, with importance for longevity, health, and lifespan, but they may not have been specifically selected for (Rattan, 2000). Thus far, large-scale genome-wide association studies have focused mostly on autosomes and rarely even stratified results by sex. Hence, little is known about sex-specific genetic effects for complex traits, although sexual dimorphisms have been reported for anthropometric traits and gout (Bernabeu, 2020;Randall et al., 2013), and gene-sex interactions have been found for multiple sclerosis (Traglia et al., 2017). Few efforts have been made for X chromosome-wide association studies, but they reveal (sex-specific) links to several complex traits and identify a locus associated with height escaping XCI (Bernabeu, 2020;Tukiainen et al., 2014). The mechanisms by which XCI is controlled are complex, for example by noncoding RNA and epigenetics (Lee, 2011), and are a way to balance the unequal amount of X-chromosomal DNA between men and women. 
The X chromosome encodes approximately a thousand genes, many related to metabolic activity such as amino acid turnover and transport, which could explain the differential proliferative rates between the sexes seen during embryonic growth (Patrat et al., 2020). During aging, the XCI ratio between maternal and paternal X chromosomes is no longer equal, leading to skewed XCI, which has been implicated in diseases and shown to be less severe in female centenarians (Gentilini et al., 2012). For men, mosaic LOY in blood cells increases with age and is associated with age-related diseases and a higher risk of death (Forsberg, 2017). Although the sex chromosomes are responsible for most female- and male-specific traits, autosomes have been increasingly studied for their role in sex-specific gene expression and associations with biological functions. Recent findings in this area point toward sexual dimorphism in transcriptomic profiles with hormone-related regulation and associations with various processes such as tissue morphogenesis, fat metabolism, cancer, and immune responses. Implications for immunoinflammatory functions have also been highlighted (Bongen et al., 2019;Nevalainen et al., 2015). The underlying mechanisms for the sex differences in tissue-specific transcription and the associations with disease risks in men and women are currently unclear. However, there is some evidence that while most transcription factors have similar expression profiles in men and women, there may be sex-specific regulatory networks across different tissues, leading to altered function and disease control (Lopes-Ramos et al., 2020). For example, such sex-specific targeting patterns of transcription factors have been found for genes associated with Alzheimer's disease (AD), Parkinson's disease (PD), diabetes, autoimmune thyroid disease, and cardiomyopathy (Lopes-Ramos et al., 2020). Genomic instability, such as chromosomal abnormalities, is known to be one of the hallmarks of biological aging (López-Otín et al., 2013). DNA damage accumulates across the life course as exogenous and endogenous triggers occur and DNA repair mechanisms become less efficient. Rare somatic mutations may accumulate across life and play a role in cancer, where men have been shown to accumulate mutations earlier in life (Podolskiy et al., 2016), and in several premature-aging syndromes (Fischer and Riddle, 2018). Studies in rodents and in Drosophila support the association between DNA repair, mutational burden, and aging; however, sex-specific effects are intricate, and the results depend heavily on animal strain and environmental conditions (Fischer and Riddle, 2018). Taken together, the examples described here relate to chromosomal stability and align well with the senescence theory of aging, in which random events occur over time while the maintenance system loses the capacity to repair the faults. Sexual dimorphism in genome-wide studies of anthropometric traits may be consistent with developmental processes and growth being controlled during early life and aging, where hormonal influences are also apparent. Mitochondria-linked mechanisms Mitochondrial DNA (mtDNA) is inherited from mothers and contains the genetic code for 13 proteins, essential components of the oxidative phosphorylation complexes, and several RNAs (Kauppila et al., 2017).
Mitochondria are important for cellular processes such as energy production, oxidation, and apoptosis, and their function has been described as one of the hallmarks of aging (López-Otín et al., 2013). Mitochondrial dysfunction is associated with many age-related diseases (Chocron et al., 2019). Oxidative damage and increased ROS production across life were initially thought to cause this dysfunction, but research in recent years showed that ROS do not accelerate aging in mice and even prolong lifespan in yeast and C. elegans (López-Otín et al., 2013). In humans, studies have linked the accumulated burden of mutations in mtDNA to aging and PD, although a majority of the mtDNA molecules within a cell must be affected for critical symptoms to emerge (Kauppila et al., 2017). Another feature of aging is the number of mtDNA copies within a cell. A lower number has been associated with aging, cognitive and physical decline, and increased mortality (Mengel-From et al., 2014). Historically, the free radical theory of aging, or the ROS theory of aging, has been postulated to explain mitochondrial dysfunction in aging (Gladyshev, 2014). However, evidence from both human and animal studies points toward the fact that the accumulation of mtDNA mutations is a feature of early-life replication errors that undergo polyclonal expansion independent of ROS (López-Otín et al., 2013). The latter fits well with the senescence theory of aging (in which energy needs to be preserved to last across the full lifespan) and with the mutation accumulation theory. Substantial sexual dimorphism has been observed for mitochondrial function concerning oxidative capacity and enzyme activity (Ventura-Clapier et al., 2017). In humans, women show higher mitochondrial gene expression levels, protein content, and overall activity in multiple tissues, such as the brain, skeletal muscle, and cardiomyocytes (Ventura-Clapier et al., 2017). Similar sexual dimorphism is observed in rodent models investigating mitochondrial respiratory function (Ventura-Clapier et al., 2017). Estrogens have been shown to influence mitochondrial function and exert protective effects, partly explaining why women have delayed mitochondrial aging compared to men. These differences may contribute to altered mitochondrial function during stress conditions such as injury or starvation, where sex-specific effects on mitochondrial respiration are also noted (Demarest and McCarthy, 2015). Little is known about sexual dimorphisms in mtDNA copy numbers and accumulated mutations in relation to aging. A recent analysis in UK Biobank found that mtDNA abundance, estimated from the weighted intensities of probes mapped to the mitochondrial genome, was significantly elevated in premenopausal women compared to men and inversely associated with age, smoking, BMI, and frailty (Hägg et al., 2020). Hence, taken together, sex hormones likely play a pivotal role in explaining the beneficial effect seen in women on mitochondrial function and aging.
Figure 1. Conceptual framework of the complex interactions between molecular, cellular, functional, organ, and whole-body aging processes across the life course in men and women, with influences from chromosomes and hormones on the sex differences. The different illustrations for men and women are based on descriptions in the text. For healthspan and lifespan, trajectories are taken from a recent publication by Li et al., 2021.
Telomeres Telomeres are repeated sequences of nucleotide bases (TTAGGG)_n located at the end of the chromosomes (Blackburn et al., 2015). Every time a cell divides, the DNA polymerase machinery replicates the DNA sequence into two identical copies, although the last part of the DNA is not preserved due to the end-replication problem. Hence, instead of losing important coding material, the telomere is shortened. When it becomes critically short, the cell enters senescence, and this was later found to be the explanation for the Hayflick limit (Olovnikov, 1996). However, germline cells have an active telomerase enzyme that elongates the telomeres to maintain length, as do many cancer cells, but somatic cells do not normally have this process. Therefore, throughout life, the length of the telomere (TL) decreases and serves as a marker of cellular aging (Blackburn et al., 2015). As different cells have varied rates of cellular turnover, the attrition rates of telomeres depend on the proliferative capacity of the host cell. Leukocytes are among the most proliferative cells, with fast TL shortening, while skeletal muscle maintains longer telomeres (Demanelis et al., 2020). Increased attrition rates are seen in childhood and adolescence, when growth and development occur, as well as in older adults. In the elderly, cellular senescence becomes apparent as DNA maintenance and repair are no longer efficient, and telomeres reach critical lengths for cellular survival consistent with a person's natural lifespan limit (Steenstrup et al., 2017). As such, short TL has been associated with age-related outcomes and health aspects, for example mortality (Wang et al., 2018), CVD (Haycock et al., 2014), and different stressors in life (Starkweather et al., 2014). Telomeres are present across many species, but their length and attrition rates may vary (Oeseburg et al., 2010). Different genetic models have been used in mice to lengthen telomeres with telomerase activation, where some experiments increased the cancer incidence, while others did not (Folgueras et al., 2018). Recently, a model using hyperlong telomeres showed that this phenotype increases the lifespan in mice and has overall beneficial effects on metabolism, glucose control, and mitochondrial function (Muñoz-Lorente et al., 2019). Telomere length is also sex-specific. At birth, boys have shorter TLs than girls (Factor-Litvak et al., 2016), which prevails throughout life (Gardner et al., 2014). As women have a longer lifespan than men, telomeres have been suggested as the causal factor explaining the difference. However, it is still not completely understood whether telomeres could be the cause or consequence of biological processes. Several large-scale genomic studies identified 30+ genetic variants associated with TL (Codd et al., 2013;Li et al., 2020a). These findings have led to increased knowledge, and many studies have provided evidence for causal associations between short leukocyte TLs and age-related diseases (Kuo et al., 2019). Hence, it seems that the biology of telomeres is a good example of how genes and the environment interact to produce a phenotype. Genetic liability contributes to the overall length of telomeres in all cells, and across the lifespan, stressors and lifestyle factors influence cell-specific attrition rates.
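The end-replication argument above can be turned into a back-of-the-envelope estimate of a cell's replicative limit. The Python sketch below is purely illustrative: the starting length, critical length, and loss per division are hypothetical round numbers rather than values from the cited studies.

```python
def divisions_until_senescence(initial_bp, critical_bp, loss_per_division_bp):
    """Count cell divisions until telomere length falls below the critical length.

    A toy version of the Hayflick-limit argument: each division shortens the
    telomere by a fixed amount because the end-replication problem leaves the
    chromosome tip unreplicated.
    """
    if loss_per_division_bp <= 0:
        raise ValueError("telomere loss per division must be positive")
    divisions = 0
    length = initial_bp
    while length - loss_per_division_bp >= critical_bp:
        length -= loss_per_division_bp
        divisions += 1
    return divisions

# Hypothetical round numbers for illustration only.
print(divisions_until_senescence(initial_bp=10_000, critical_bp=4_000,
                                 loss_per_division_bp=60))  # -> 100 divisions
```

With these toy numbers the cell exhausts its telomere reserve after roughly one hundred divisions, which conveys the flavor of the Hayflick-limit reasoning referred to in the text.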
Different aging theories may fit in this scenario, while the limit on cellular division (Hayflick) was described as a direct consequence of critically short telomeres. The sexual dimorphism of telomere dynamics has been discussed in many different aspects (Barrett and Richardson, 2011). The sex chromosome-linked mechanisms could be part of the explanation. Although most telomere-related genes have been found in autosomal chromosomes, it has been suggested that the unguarded chromosome in heterogametic sex is a disadvantage for mortality and telomere maintenance. A mutation in the DKC1 gene on the X chromosome -a gene involved in telomere biology -is often seen in patients with dyskeratosis congenita, which leads to rapid TL shortening and reduced survival (Savage and Bertuch, 2010). Another explanation is that the larger sex has disadvantages in the cellular maintenance, oxidative stress reactions, and telomere function because cellular capacity is linked to growth. Consequently, men, who are generally taller than women, should suffer from worse telomere function. However, a recent meta-analysis investigated sex differences in TL across 51 vertebrate species and found no evidence supporting either the heterogametic sex disadvantage or the sexual selection hypotheses (Remot et al., 2020). The analyses, including TL dynamics in mammals, birds, reptiles, and fish, did not find associations to support sex differences in longevity. Hence, the true nature by which TL sexual dimorphism presents remains to be elucidated. The importance of sex hormones may need further scrutiny, as they influence the level of ROS production, which may interfere with telomere maintenance and elongation (Coluzzi et al., 2019). However, other theories have been discussed, and many factors are likely important for sex-specific telomere dynamics. Cellular senescence Another hallmark of aging is cellular senescence. The lifetime of a cell is limited, as described by Hayflick, and the fate of a cell depends on the type of cell and what signals it receives and the damage it is exposed to across life. Events such as critically short telomeres, oxidative stress, replicative errors, mitochondrial dysfunction, pathogen response, oncogene activation, and other stress sources may induce senescence of the cell with irreversible replicative arrest (Ló pez- Otín et al., 2013). This state causes a response of cytokines and other proinflammatory factors to be released, which may trigger downstream effects in the surrounding tissue and invoke a senescence-associated secretory phenotype (SASP) . Cellular senescence is tightly linked with aging, has been well correlated with DNA damage, and an increasing number of cells are senescent in old tissues compared to young tissues in a study of liver tissue in mice (Ló pez- Otín et al., 2013;Khosla et al., 2020). However, it has been difficult to assess SASP in human studies since the phenotype markers are heterogeneous and not consistently available in the circulation. Nevertheless, the systemic accumulation of senescent cells in aging has been associated with many age-related diseases and conditions, such as frailty, both in humans and animal models Khosla et al., 2020;Schafer et al., 2020). Currently, there is also increasing evidence for the beneficial antiaging effect of senolytic drugs as potential treatments to remove senescent cells when abundant . 
Hence, cellular senescence is a core mechanism in the senescence theory of aging, where cells and tissues accumulate damage across life but is also essential in the Hayflick limit's programmed theory of aging Khosla et al., 2020;Schafer et al., 2020). No human studies specifically investigate the difference between men and women in cellular senescence, and evidence from other models is sparse. A recent study in mice suggested that male mice have a higher number of senescent cells across life compared to female mice (Yousefzadeh et al., 2020), although at the end of life, the proportion of female senescent cells is almost at the same level as in male mice. The notion of higher cellular senescence in males would be consistent with the shorter telomeres seen. Evidence points to the fact that female stem cells have an increased capacity for regeneration, self-renewal, and proliferation (Dulken and Brunet, 2015), in line with a more beneficial cellular aging route in females/women. The limited knowledge would nevertheless suggest that sexual dimorphism exists, where women maintain better cellular maintenance throughout the life course. Regardless, more studies on sex differential senescent mechanisms are urgently needed to learn about the biological aging processes therein. Proteostasis and autophagy Protein homeostasis, or proteostasis, is the body's ability to maintain control over protein synthesis, folding, stability, degradation, and removal through autophagy (Hipp et al., 2019). During aging, the balance in the protein machinery is lost and unfolded and misfolded proteins can aggregate and cause pathological conditions seen in diseases of (neuro)degeneration, AD, PD, and diabetes (Hipp et al., 2019). Oxidative stress and heat may increase conformational changes and induce cellular toxicity from accumulated protein aggregations. Under stressful conditions, the heat shock response is activated in the cell, and unbound chaperones are available to assist in stabilizing the protein network. A study by Ubaida-Mohien et al. found a decreased representation of chaperone proteins in old skeletal muscle tissue in healthy adults, although autophagy-related proteins were overrepresented (Ubaida-Mohien et al., 2019). Experiments in worms, flies, and mice have shown that overexpressing chaperones and heat-shock proteins are associated with an extended lifespan, whereas models deficient in parts of the chaperone-heat-shock system present accelerated aging phenotypes (Ló pez- Otín et al., 2013;Ferrucci et al., 2020). Moreover, autophagy becomes dysfunctional with aging. In model systems, abrogation of autophagy leads to neurodegeneration and shortens lifespan, whereas increased basal activity of autophagy increases lifespan (Leidal et al., 2018). In humans, long-lived families have a better-maintained autophagy system, and individuals under starvation exhibit enhanced autophagic flux (Leidal et al., 2018). Hence, declining proteostasis control in aging may be an effect of accumulated aggregates and dysfunctional autophagy, consistent with the senescent wear-and-tear theory of aging, including the ROS theory of aging. A recent investigation analyzed proteasome activity across nine different tissues and found higher activity in female mice than in their male counterparts (Jenkins et al., 2020). The largest sexual dimorphism was observed in the small intestine and kidney, specifically in chymotrypsin-like proteasomal activity. 
In another study, female fruit flies were more tolerant of oxidative stress and showed higher proteasome expression and activity than male flies, although the resistance was lost with age (Pomatto et al., 2017). Overall, adaptations to maintain homeostasis seem to depend on both age and sex, although studies on the latter are still limited (Pomatto et al., 2018). Females in animal studies and model systems also exhibit more resistance to stressors, partly hypothesized to be due to the beneficial effects of estrogens (Tower et al., 2020). Analyses of sexual dimorphism in human protein homeostasis and autophagy aging processes are still lacking, which is understandable, as efficient high-throughput methods are not yet available (Pomatto et al., 2017). Epigenetic alterations The term 'epigenetics' means 'on top of genetics' and is a collective term for chemical modifications altering the activity of the gene transcription process without changing the DNA code itself. There are four major types of epigenetic mechanisms: ATP-dependent chromatin remodeling complexes, histone modifications, DNA modifications, and noncoding RNAs (Pagiatakis et al., 2021). Histones can be modified posttranslationally. The most well-studied mechanisms are acetylation and methylation processes; changes in histone acetylation/methylation have been linked to aging, healthspan, and lifespan in diverse models, such as flies, mice, yeast, and human cell lines (Yi and Kim, 2020). Using an epigenome-wide association study of H3K9ac, Klein et al., 2019 found that tau may affect histone acetylation in the human brain, thus relating histone modification processes to AD pathology. However, in human studies, genome-wide DNA methylation arrays have paved the way for a new field of research on epigenetic age, where hundreds of (un)methylated sites (CpGs) have been shown to associate with age across the life course. A multitude of clocks quantifying biological age across tissues, in whole blood, skin, muscle, or human cell culture models, has emerged (Horvath and Raj, 2018), and recently across mammalian species (Lu, 2021). These clocks tick with aging with remarkable accuracy, and a higher epigenetic age is associated with worse health and increased mortality risk (Horvath and Raj, 2018;Chen et al., 2016). Promising studies have reported reversal of epigenetic age with different interventions (Horvath, 2020;Fahy et al., 2019). A still unanswered question is whether this reversal of the epigenetic clock would then confer a lower risk for adverse events. In other words, is the epigenetic process causal in aging? As with telomeres, the epigenetic clocks seem to be tightly linked with cellular replication underlying the Hayflick limit theory (Wagner, 2019). Genetic studies of epigenetic clocks have discovered several loci associated with lifespan and lifestyle factors beyond the gene regions where the CpGs themselves are located (Lu et al., 2018;McCartney, 2020). One of the top loci found harbors the telomerase TERT gene, demonstrating the link to telomere biology. Epigenetic changes have also been proposed to arise from both developmental and maintenance processes, where gestational age clocks represent the former and adult tissue clocks represent the latter. Moreover, intrinsic and extrinsic epigenetic clocks have been suggested to represent internal (cellular) versus external (lifestyle stressor) aging processes (Horvath and Raj, 2018).
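Most DNA methylation clocks mentioned above are, at their core, weighted sums of methylation beta values at selected CpG sites fitted by penalized regression, sometimes followed by a calibration transform. The sketch below illustrates only that generic structure; the CpG identifiers, weights, intercept, and imputation choice are invented placeholders and do not correspond to Horvath's or any other published clock.

```python
from typing import Dict

# Hypothetical coefficients for illustration; real clocks use hundreds of CpGs
# with weights fitted by penalized regression on training cohorts.
INTERCEPT = 30.0
WEIGHTS: Dict[str, float] = {
    "cg0000001": 25.0,
    "cg0000002": -12.5,
    "cg0000003": 8.0,
}

def epigenetic_age(betas: Dict[str, float]) -> float:
    """Predict age as a linear combination of CpG methylation beta values (0..1).

    Missing CpGs are imputed as 0.5 here (a crude placeholder choice)."""
    return INTERCEPT + sum(
        weight * betas.get(cpg, 0.5) for cpg, weight in WEIGHTS.items()
    )

def age_acceleration(predicted_age: float, chronological_age: float) -> float:
    """Positive values indicate an epigenetically 'older' profile than expected."""
    return predicted_age - chronological_age

sample = {"cg0000001": 0.8, "cg0000002": 0.3, "cg0000003": 0.6}
pred = epigenetic_age(sample)
print(pred, age_acceleration(pred, chronological_age=50.0))
```

The difference between predicted and chronological age, often called age acceleration, is the quantity reported to differ between men and women in the studies cited in the following paragraph.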
The epigenetic process in aging may be consistent with both senescence and programming theories on aging depending on the specific timing in life and the clock under study. The sex-specific effect on epigenetic age is apparent in young children and adults Horvath and Raj, 2018). At all ages, boys/men have a higher epigenetically predicted biological age than girls/women, in accordance with the survival benefit in women. This phenomenon seems to be true across different tissues and gives rise to an effective difference in mortality risk between men and women (Li et al., 2020b). Moreover, in women, earlier menopause, either natural or surgical, is associated with increased epigenetic age, and although the finding was not consistent across different tissues, there was further support for lower epigenetic age in women undergoing HRT . Little is known about sex-dimorphic effects on histone modifications in aging, although studies on different interventions and acetylation/methylation in animals suggest that these effects are important modifiers in aging (Fischer and Riddle, 2018). Furthermore, the Klein study found >4000 H3K9ac sites associated with sex in their human histone data, highlighting the future need for deeper studies in this area (Klein et al., 2019). Studies investigating genome-wide DNA methylation differences between men and women report significant differences in autosomes and on the X chromosome, the latter being linked to sexual dimorphism genes and XCI (Li et al., 2020c;McCartney et al., 2019). A recent meta-analysis study investigating the agerelated sex differences in DNA methylation patterns found changes associated with both methylation level and variability across the genome (Yusipov, 2020). Differentially methylated sites were enriched in imprinted genes but not in sex hormone-related genes. Furthermore, the top CpGs displayed a sex-specific pattern in samples from centenarians (healthy aging model) and Down's syndrome (accelerated aging model). On the other hand, another study investigating brain DNA methylation patterns found no support for sex-age interaction effects in neurodegeneration from human samples on AD and controls (Pellegrini, 2020). Studies on sexual dimorphism and DNA methylation are sparse in animal models, but some evidence for differences has been found in both rats and mice (Sampathkumar et al., 2020). Bacon et al., 2019 used a rat model resembling human neuroendocrine function and showed that DNA methylation regulates the onset of menopause. Taken together, the sexual dimorphism seen in epigenetic studies on aging is complex and seems to reflect sex chromosome-linked mechanisms and/or hormonal biological processes. Inflammatory and immunological makers Immunoinflammatory functions are at the heart of health in aging, and there is exhaustive literature available on the various changes that take place with age. At a cellular level, two distinct yet often parallel processes characterize immune aging: immunosenescence and inflammaging. The former refers to changes in the adaptive immune system, such as increased numbers of memory CD8 +T cells (resulting in a decreased CD4/CD8 cell ratio), loss of the key costimulatory molecule CD28 on the T cell surface and compromised clonal expansion and specific antibody production in the B cell compartment (Gubbels Bupp, 2015;Franceschi, 2019). 
Inflammaging refers to chronic, low-grade inflammation that occurs in the absence of infection and manifests as increased production of proinflammatory cytokines, linked to both frailty and CVD (Ferrucci and Fabbri, 2018). From an evolutionary perspective, inflammaging can result from positive selection of genetic variants that associate with higher levels of pro-inflammatory factors and enhanced immune responses in early life, conferring better protection against pathogens but resulting in increased damage to host tissues in later life. Inflammaging is thus in accordance with multiple different theories, where various stimuli, such as oxidative stress and lifestyle factors, contribute as well (Franceschi, 2019;De la Fuente and Miquel, 2009). While both sexes experience aging-associated changes in the immune system, the hallmark features differ for men and women, and men are considered to experience maladaptive changes to a greater extent (Gubbels Bupp, 2015;Gomez et al., 2018). Between puberty and menopausewhen differences in the hormonal milieu are the greatest between men and women -women experience lower rates of infections, an advantage attributed to stronger immune and vaccine responses and more efficient pathogen clearance (Gubbels Bupp, 2015). On the other hand, women are more susceptible to autoimmune diseases than men. However, after the age of menopause, the incidence of autoimmune diseases in women decreases close to the numbers observed in men, whereas the incidence of chronic inflammatory diseases increases (Gubbels Bupp, 2015). The temporal dynamics of these changes point to the crucial role of sex hormones in shaping immune aging, although it is likely much more complicated, involving an interplay of multiple homeostatic systems. It has been shown that nonimmune cells, such as adipocytes, fibroblasts, and endothelial cells, also contribute to inflammaging (Franceschi, 2019). As stated above, men seem to experience immunosenescence to a greater extent than women, potentially because women exhibit higher basal immunoglobulin levels, higher CD4 +T cell counts, and an increased CD4/CD8 T cell ratio compared to men (Gubbels Bupp, 2015;Gomez et al., 2018). The corresponding adaptive immune functions, such as antigen-specific antibody responses and CD4 +T cell cytokine production, are also typically more enhanced in women (Gubbels Bupp, 2015;Gomez et al., 2018). A recent study using sequencing and flow cytometry data in blood mononuclear cells further elucidated the sexual dimorphism in immune aging by showing that male and female cells also significantly differ at the age when sex hormones decline (Márquez et al., 2020). Older women had higher genomic activity for adaptive immune cells, while older men had higher activity for monocytes and inflammation, indicating greater inflammaging in men (Márquez et al., 2020). In the same study, a life-course analysis of the timing of epigenomic regulation of chromatin accessibility showed that male immune cells are more strongly affected and that a decline in immune function occurs 5-6 years earlier in men than in women (Márquez et al., 2020). Although animal models cannot fully recapitulate human immunosenescence or inflammaging, findings on sex-related immune functions in animal studies have generally been in line with observations in humans. Sex differences are present in diverse species ranging from insects to mammals, with female individuals presenting stronger innate and adaptive immune responses than males (Klein and Flanagan, 2016). 
As in humans, the differences are largely attributable to the effects of sex hormones, with a contribution of genetic differences due to several immunoinflammatory genes that are encoded on the X chromosome (Klein and Flanagan, 2016). In summary, the above findings support the assertion that men experience faster and/or earlier aging-associated immunoinflammatory changes and that these changes may be attributed to both hormonal changes and other factors. Nutrient sensing Intracellular nutrient-sensing pathways and signaling systems mediate information on nutrient availability and energy levels in the extracellular milieu. The key pathways include the insulin/insulin-like growth factor 1 (IGF-1) signaling pathway, the mechanistic target of rapamycin (mTOR) pathway, and the adenosine monophosphate-activated protein kinase (AMPK) pathway (Pignatti et al., 2020). These pathways regulate a multitude of intracellular functions, such as cell cycle control, DNA replication and repair, autophagy, and antioxidant defenses, through which their effects on reproduction, growth, and aging are exerted (Pignatti et al., 2020). Deregulated nutrient sensing is also one of the hallmarks of aging (López-Otín et al., 2013). Each of the hallmarks of aging is associated with undesirable metabolic alterations (López-Otín et al., 2016), stressing the fact that nutrient sensing and metabolism are interlinked processes with broad effects on whole-organism functions. Over the past years, there has been intensive research on how nutrient-sensing pathways control lifespan and healthspan, with the most significant breakthroughs achieved in unraveling how different dietary restrictions improve aging outcomes and survival in several species, including humans (Templeman and Murphy, 2018). Of the different dietary restrictions, the most compelling evidence rests on caloric restriction (CR), in which the energy intake is reduced by ~30% relative to ad libitum-fed animals without reducing the intake of micronutrients (Templeman and Murphy, 2018). At the molecular level, CR triggers activation of stress response pathways that in turn reduce inflammation and increase repair and antioxidative functions. Interestingly, genetic polymorphisms in genes encoding proteins in the insulin/IGF and mTOR pathways are among those robustly associated with longevity, such that variants linked to lower basal activity of these pathways are associated with longevity (Pan and Finkel, 2017). Sex hormones regulate several key functions in nutrient sensing and in the metabolism of glucose, amino acids, and proteins, and it is not surprising that men and women differ in several metabolic characteristics. At the molecular level, women have lower fasting insulin and glucose levels, lower basal fat oxidation, and higher fat use but lower consumption of carbohydrates during physical activity (Comitato et al., 2015). At the phenotypic level, the most noticeable difference is fat distribution: men tend to have more visceral fat, whereas women have greater fat deposition in lower-body depots (Comitato et al., 2015). For healthspan, the above-described traits tend to favor women such that they have a lower risk of cardiometabolic diseases (before menopause). However, the higher basal insulin levels in men promote glycogen and lipid synthesis in muscle cells, resulting in higher muscle mass and strength (Comitato et al., 2015). Aging is, however, associated with a reduction in glucose tolerance in both sexes, increasing the risk of diabetes.
There is a complex interplay between sex hormones and body composition for which data from in vivo studies and clinical trials remain inconclusive (Allan, 2014). Future studies will hopefully shed light on possible sex differences in CR in humans; thus far, the available data do not support (or allow) inferences on sexual dimorphism. However, studies in rodents have suggested that males may have a more robust response to CR than females (Kane et al., 2018), but the mechanistic bases are not understood. Akin to epigenetic clocks (see Epigenetic alterations) that predict mortality independent of other risk factors, there have been attempts to create similar composite measures based on metabolites measured using different techniques (Jylhävä et al., 2017). For example, Hertel et al., 2016 created a 'metabolic age score' that was shown to be associated with mortality independent of chronological age and other risk factors. The score was robustly associated with chronological age in both sexes, the only significant sex difference being that the score was more strongly influenced by obesity in women than in men (Hertel et al., 2016). However, such studies on metabolomics scores have been much fewer than studies on epigenetic clocks, and the potential sex dimorphism in metabolic scores is less clear. In summary, the sexual dimorphism in nutrient sensing and metabolism is largely attributable to sex hormones and their downstream effects. The higher muscle mass coupled with a higher basal metabolic rate in men also aligns with the rate of living theory. Functional measures Functional measures relevant to aging and mortality are numerous. One of the most commonly used and strongest markers for human population-based estimation of death risk is a simple assessment of walking speed (Ganna and Ingelsson, 2015), yet other popular measures include grip strength, chair rise, lung function, vision, and an abundance of cognitive domains (Peiffer et al., 2010). Although it is well known that being physically fit translates to better health, maintaining higher muscle mass and strength requires spending more energy and a higher metabolic rate. Analogous to the Hayflick limit, the rate of aging theory posits that the total amount of energy expenditure per lifetime is finite and that excessive usage results in accelerated aging (Pearl, 2011). Although much debated (Lints, 1989), this theory is supported by the observations that long-lived mammals have low energy expenditure rates, while short-lived mammals have higher rates. Studies in aging humans have shown that those having higher basic metabolic rates are more likely to die than those with lower rates (Ruggiero et al., 2008). It is well established that men do better in physical capability, measured as grip strength, walking, and stair climb, even after adjusting for total body weight and lean body mass (Peiffer et al., 2010). Upon menopause, the withdrawal of sex hormones negatively affects bone and muscle health in women, where women experience a greater reduction in bone mineral density than men. However, men have a steady decline in bone function across life, but the interaction between load and bone strength is better maintained in older men, and this phenomenon may explain the reason for fewer fractures seen in men (Seeman, 2001). Women have less skeletal muscle mass than men, but men have greater loss with aging, although different parts of the body may show different sex-dimorphic effects, and menopause accelerates the loss in women (Doherty, 2003). 
Sarcopenia affects both sexes but is clinically more important in older women who may live longer with the disability (Doherty, 2003). For age-related visual impairment, women report more eye problems than men (Li et al., 2011), and overall, healthy adult men seem to perform better on visual perception than women (Shaqiri et al., 2018). In contrast, hearing loss is more frequent in men and may start as early as in the thirties (Shuster et al., 2019). Sexual dimorphism is also apparent in animal models, and women seem to be protected from age-related hearing decline before menopause, as estrogen levels are directly linked to the hearing threshold. Lung function is strongly associated with age, and a decline in spirometry-based measurements of dynamic flow starts soon after lung maturation in young adults (Sharma and Goodwin, 2006). Sex-specific differences are seen across almost all respiratory structures and functions; women have smaller and anatomically different lungs than men, perform worse in breathing exercises, and sex hormones interact with lung and airway function during early developmental processes and aging (LoMauro and Aliverti, 2018). However, anatomical changes during aging to other organs may be advantageous to women. Cardiac remodeling due to aging is universal, but the decline in myocytes and systolic function are greater in males, both in humans and rodents (Keller and Howlett, 2016). Kidney function declines with aging, and men have a greater decrease in glomerular filtration rate, where women are most likely protected due to estrogens before menopause (Baylis, 2009). A recent study created a composite measure, termed the functional aging index (FAI), to better capture the state and changes in various physical functions simultaneously. The FAI includes muscle strength (grip strength), movement (gait speed), sensory (vision and hearing), and lung function and is predictive of mortality in both sexes, yet the hazard ratio is greater in women (Finkel et al., 2019). However, while women had higher FAI scores than men, indicating poorer functioning, the rate of change did not differ between the sexes (Finkel et al., 2019). Hence, the better physical performance in men may be explained by evolutionary selection for physical fitness, which means better health in general, but it is unclear why this does not translate to a survival advantage. As men have higher muscle mass than women, some clues might be obtained from the observed associations between higher skeletal muscle mass and higher basal metabolic rate, that is energy expenditure that is higher in men than in women (Ruggiero et al., 2008). Perhaps, the sex specificity in functional measures best describes the complex interplay between fitness and aging in line with the rate of living theory in the senescence theory of aging, emphasizing the sex paradox in aging where women with worse physical function and health still outlive men, possibly due to a better cellular maintenance system and protections from estrogens. Frailty Frailty is defined as a state of increased vulnerability to stressors resulting from decreased physiological reserves to maintain homeostasis across multiple organ systems. Manifestations of frailty overlap with those of normative aging yet are more pronounced. When a certain threshold in frailty is reached, the risk of adverse outcomes, such as disability and death, increases. 
Although frailty often coexists with multimorbidity (and disability), the association between frailty and mortality is independent of multimorbidity (Hanlon et al., 2018), indicating that frailty captures health-related variation that is not attributed to diseases alone. There is currently no widely accepted consensus on how to measure frailty; however, the two most commonly used approaches are the Fried phenotypic model (FP) (Fried et al., 2001) and the Rockwood frailty index (FI) (Searle et al., 2008). The first views frailty as a physical syndrome with a discrete categorization of individuals into nonfrail, prefrail, and frail, whereas the latter considers frailty as a multidimensional construct based on the accumulation of deficits in physical, biological, and psychosocial domains. The FI is measured on a continuous scale, allowing for the detection of more subtle changes and making the FI suited for younger individuals. Although viewed more as a measure of fitness than biological age, frailty stands out as an exception in the wealth of research devoted to understanding the sex differences compared to the other markers. Women not only have a higher prevalence of frailty but also experience higher levels than men across the age range (Gordon et al., 2017). Women are nevertheless able to tolerate frailty better; men are more vulnerable to death at any given level of frailty than women of the same age (Gordon et al., 2017;Jiang et al., 2017). The above-described male-female health-survival paradox may thus also be conceptualized as a sex-frailty paradox. The sex-frailty paradox has been described using several frailty scales (Theou et al., 2014) and across different populations (Gordon et al., 2017), suggesting that it is likely independent of the specific scale used to measure frailty. The reasons for higher levels of frailty in women have been discussed previously, with various biological, social, and behavioral factors hypothesized to allow women to better tolerate frailty (Gordon and Hubbard, 2019;Hubbard, 2015). When conceptualizing frailty using the deficit accumulation model, that is the FI, it seems conceivable that women are evolutionarily 'calibrated' for late-life fitness. This theory aligns with the grandmother effect and increases in the population postreproductive lifespans when it benefits younger generations (Lahdenperä et al., 2004). Frailty also recapitulates characteristics of disposable soma theory that allow a certain amount of damage to the organism. However, another theory suggested underlying the sex differences is the chronic disease hypothesis by which women are more likely to experience nonlethal chronic conditions, while men tend to develop acute conditions associated with high mortality, such as stroke and myocardial infarction (Gladyshev, 2014;Bernabeu, 2020). Women may also be more prone to actively seek medical help for their conditions, resulting in better treatment balance of their (chronic) diseases. Last, variability in reporting behavior may contribute to the difference; when using self-reported data, a common conception is that men tend to underreport their morbidities and disability, while women are more likely to overreport. However, evidence supporting this conception is not conclusive (Merrill et al., 1997;Macintyre et al., 1999), and the underlying mechanisms for the sex-frailty paradox remain unresolved. 
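Because much of this literature, including the animal work discussed next, quantifies frailty with the deficit-accumulation FI, a minimal sketch of how such an index is typically computed may be useful. It follows the general approach of Searle et al., 2008; the deficit names and the minimum-item rule below are illustrative assumptions, not a fixed standard.

```python
# Minimal sketch of a deficit-accumulation frailty index (FI), in the spirit of
# Searle et al., 2008. The 30-item minimum is an illustrative rule of thumb.

def frailty_index(deficits, min_items=30):
    """Return the FI: the mean of deficit scores, each coded between 0 and 1.

    `deficits` maps item name -> score (0 = deficit absent, 1 = fully present;
    intermediate values are allowed for graded items). Items recorded as None
    (missing) are omitted from both numerator and denominator.
    """
    scores = [v for v in deficits.values() if v is not None]
    if len(scores) < min_items:
        raise ValueError(f"need at least {min_items} non-missing items, got {len(scores)}")
    if any(not 0.0 <= v <= 1.0 for v in scores):
        raise ValueError("each deficit must be scored between 0 and 1")
    return sum(scores) / len(scores)

# Example: 12 deficits fully present out of 40 assessed gives FI = 0.30.
example = {f"deficit_{i}": (1.0 if i < 12 else 0.0) for i in range(40)}
print(round(frailty_index(example), 2))  # 0.3
```

On this scale, the sex-frailty paradox appears as women carrying higher average FI values than men while showing lower mortality at any given FI.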
In recent years, animal models of frailty, building on both FI and FP, have become available, providing opportunities to untangle how and why frailty develops and the mechanisms behind the sex differences. However, evidence on sex differences in frailty in animal models is less conclusive than in human studies. A few studies have reported that aged female mice exhibit higher FI scores than males (Heinze-Milne et al., 2019). However, other studies have reported no difference between the sexes, and one study found that male mice had higher FI scores than females (Heinze-Milne et al., 2019). The paucity of animal studies available and the variety of mouse strains used in the studies nevertheless warrant more evidence before the mechanisms of the sex differences in frailty can be resolved.

Sex differences in age-related diseases

Due to global aging and improved health care, the leading causes of death worldwide have shifted remarkably over the last century. Noncommunicable diseases, which are considered chronic age-related illnesses, now constitute the three most common causes of death worldwide (ischemic heart disease, stroke, and chronic obstructive pulmonary disease) (World Health Organization, 2021). For the population older than 70 years, all but one (lower respiratory infection) of the top 10 leading causes of death in the world are noncommunicable age-related diseases (Table 1; World Health Organization, 2021). An age-related disease can be defined as a disease for which chronological age is a strong risk factor and whose incidence rate increases with increasing age. For a more comprehensive review on age-related diseases and the link to biological aging mechanisms, we refer to Franceschi et al., 2018. However, age-related diseases often present in a sex-specific manner. The top 10 leading causes of death by sex in those above 70 years reveal a change in the ranking of diseases, so that instead of colon and rectum cancers, prostate cancer emerges in men and communicable diarrheal diseases in women. Hence, we highlight the sexual dimorphism in age-related diseases below, further strengthening the evidence that biological aging is different in men and women. Although men and women present different disease-specific patterns and expression of risk factors, several leading age-related diseases are related to cardiovascular health in both sexes. It is well accepted that premenopausal women are relatively protected from the most common cardiometabolic manifestations, whereas postmenopausal women are not (Aggarwal et al., 2018). This observation has been attributed to estrogens' beneficial effects on CVD, metabolic syndrome, and diabetes. (For a more in-depth discussion on sex and gender aspects in aging diseases and treatment, we refer the reader to Mauvais-Jarvis et al., 2020 and Regitz-Zagrosek, 2012). In addition to looking at the sex hormones individually, several studies have shown that it may instead be the sex-specific testosterone/estradiol ratio that is more decisive for health outcomes than either of the hormones alone (Morselli et al., 2016). However, CVD is also tightly linked to inflammaging of the vasculature and to cellular senescence, which may be reflected in TL shortening and intrinsic epigenetic age acceleration (Ferrucci and Fabbri, 2018). Hence, aging and sexual dimorphism in cardiovascular health are delicately intertwined. Most cancers have apparent sex-differentiated effects, even after controlling for risk factors and lifestyle differences between sexes.
In general, men have higher incidence rates and higher death rates in most cancers that are not related to reproduction (Mauvais-Jarvis et al., 2020). The male predominance is seen already in children with cancer before puberty, indicating that genetic or early developmental processes going wrong likely determine these differences. All cancer tumors have mutations in their genome, and commonly mutated genes are referred to as oncogenes (Stewart and Wild, 2014). There are many oncogenes known across the genome, some with specific X-linked mutational differences in men and women, and others encoded by the Y chromosome. Recent evidence suggests that noncoding genomic regions also contribute to sexual dimorphisms in driving cancer mutations and signatures (Li et al., 2020d). Many oncogenes present specific epigenetic signatures used for cancer diagnostics (Stewart and Wild, 2014), and epigenetic outlier burden is associated with age and cancer diagnosis in a sex-specific manner . Longer telomeres and extrinsic epigenetic age acceleration are also features seen in cancerous tissues. Hence, genomic instability, including the accumulation of mutations, epigenetic alterations, and telomere attrition, are hallmarks of aging and provide a link between aging and sexual dimorphism mechanisms in cancer. There are also cancers related to hormonal secretion where androgens are stimulating and estrogens are protective (Hammes and Levin, 2019). Cancers are not a class of homogenous diseases but complex, age-and sex-dependent biological processes that may arise due to several different factors. AD and other dementias are perhaps the most established age-related diseases, and the prevalence continues to grow worldwide because of global aging. They are predominant in women, particularly in the oldest old, which may also be attributed to the female survival benefit (Mauvais-Jarvis et al., 2020; Mazure and Swendsen, 2016). There is evidence for sex-specific brain differences in early growth and development of the brain and adult structure and function, which may be of relevance to neurodegeneration. Cognitive aging in healthy adults demonstrates sex-differential effects, where men generally perform better in visuospatial ability and women better in verbal ability, but the speed of decline may be worse in men, although the literature is not consistent (Li et al., 2020b;McCarrey et al., 2016). In AD, women present worse clinical symptoms for comparable levels of brain atrophy in men, and interactions with hormones may be one explanation for the differences (Toro et al., 2019). Early natural or surgical menopause and late initiation of HRT is associated with increased risk of AD (Mauvais-Jarvis et al., 2020). However, sex-differential effects may also be related to sex chromosomes. A recent study using an AD model in mice, expressing the human amyloid precursor protein, showed that adding an extra X chromosome decreased mortality and clinical AD symptoms (Davis, 2020). It should also be noted that sex differences in dementia incidence may be partially explained by selective survival (Shaw et al., 2021). Sex differences in the age-related diseases, frailty and domains of physical functioning are summarized in Figure 2. Thus, all the above calls for more research to better understand how biological sex and its attributes shape health in aging. 
Moreover, as many age-related diseases, most prominently CVD, are associated with systemic manifestations, such as low-grade inflammation, there is likely a complex bidirectional interplay between the diseases and biological aging at the cellular level. Having longer telomeres, for example, is protective for CVD and AD but a risk factor for many cancers, likely explained by the fact that tumor cells have overcome the problem of telomere shortening by activating the telomerase enzyme (Jylhävä et al., 2017). Epigenetic age has been associated with both cardiovascular and cancer deaths, depending on whether the clock represents intrinsic or extrinsic biological aging (Jylhävä et al., 2017). Hence, there is a trade-off between biological mechanisms promoting longevity and good cardiovascular health versus those promoting cancer growth. Therefore, more interesting than looking at the diseases or biological markers in isolation would be to assess the temporal dynamics between disease progression and aging biomarkers, with rigorous sex-specific approaches included.

Summary and future directions

In this review, we have tried to disentangle the complex interactions between biological aging and sexual dimorphism and have provided evidence from the perspective of current theories thereof. There is overwhelming support for the fact that whenever sex is analyzed in biological research on aging, significant sex differences emerge, whether in human cohorts or in animals. Moreover, many of the biological and functional markers of aging under study, as well as the age-related diseases, are consistent with both the programmed theory of aging and the senescence theory at the same time (Table 1), and both chromosomal-linked mechanisms and hormones may explain the observed sexual disparities. Hence, there is no clear pattern of association within these interactions; rather, many intertwined mechanisms are in action.

Figure 2. Overview of the most significant sex differences in age-related diseases, functioning and frailty. Abbreviations: AD, Alzheimer's disease; COPD, chronic obstructive pulmonary disease.

However, it is clear that cellular and molecular mechanisms of aging are better maintained in women, although after menopause, women seem to catch up and, to some extent, reach the same levels of aging as men. For functional aging related to muscle strength, the pattern is the opposite: men generally are stronger and faster than women, explained by higher testosterone levels coupled with upregulated growth hormone, insulin, and IGF signaling, leading to a greater muscle mass. From an evolutionary perspective, the sex difference may be attributed to sexually antagonistic pleiotropy, where aging arises as a side effect of genes selected for their contribution to fertility, reproduction, and other essential components of an individual's fitness earlier in life (Maklakov and Lummaa, 2013). For men, natural selection may favor strength and physical fitness, while women benefit from babies that are not too large for the mother and child to survive the birth. These selection mechanisms may act against each other in the opposite sexes, leading to a longer lifespan in women (Maklakov and Lummaa, 2013). With the increasing body of evidence highlighting the importance of biological sex in the aging process, it is now more timely than ever to focus on understanding the sex-driven characteristics of aging. Entering the era of personalized medicine, the quest becomes even more important.
However, most preclinical and clinical studies have been performed in male subjects, animals, or cell lines, limiting our understanding of the impact of sex on the given research question. To overcome these issues, the National Institutes of Health now expects that sex as a biological variable to be factored into research designs, analyses, and reporting in vertebrate animal and human studies (Pinn, 2020). The Swedish Research Council, 2020 has set similar guidelines by asking that since 2020, applicants describe whether sex and gender perspectives are relevant in their research and, if so, in what way those perspectives are to be included in the project. Although great initiatives as such, it is yet to be seen how they translate into research practice and, above all, to a better understanding of biological sex differences. A suggestion could be that all biomedical journals should adhere to common practice and guidelines requiring authors to report sex-specific effects of their findings and put that into a research context whenever applicable. Similar suggestions were proposed at a workshop hosted by the Institute of Medicine (US) in 2011, where different stakeholders were present (Public Health, 2012). Although the progress has been slow, an increasing number of journals now adhere to these rules (Schiebinger et al., 2016), and reporting guidelines exist (Heidari et al., 2016), making the sex-specific reporting scheme possible. The need for sex-specific estimates is nevertheless apparent, especially for future meta-analyses and Mendelian randomization studies so that we can build a ground on solid sex-specific research questions. Reporting sex differences also comes with obvious caveats; when the sample is stratified by sex, the power may be limited to the extent that an absence of association in the other sex cannot be considered a lack of evidence. A sound approach also entails considering the extent to which sex explains the observed variation, not just reporting whether the sexes differ. Last, it should be kept in mind that when addressing the effect of sex conceptually, it is often impossible to pinpoint the true source of sex-related variation, whether it is hormonal, genetic, differences in karyotype, or something else, such as gender norm behaviors or sex-specific environmental exposures. The underpinnings of sex differences are extremely complex, multifactorial, and challenging to apprehend even with the most sophisticated (statistical) models. Nevertheless, it is of utmost importance to start filling in the missing pieces of the puzzle of sex differences in aging. We now know that one marker or measure alone cannot capture the complexity of biological aging, and with the various machine-learning methods becoming available, we should consider opting for more 'all-inclusive' approaches. Depending on the outcome of interest, factors across different domains should be considered as explanatory variables and assessed for their sex specificity and interactions. An important point worth noting is that, as there are now many longitudinal studies with repeated measurements of biological aging markers available, these resources should be used to revisit or reformulate some of the aging theories -or propose completely new ones. For example, the recently proposed geroscience hypothesis posits that biological aging at the cellular level drives organ system aging and gives rise to aging-associated diseases (Kennedy et al., 2014). 
Should we manage to slow biological aging, the risk of all aging diseases should decline. However, as we have observed, women have more favorable profiles than men in many cellular and molecular markers of aging, such as telomeres and epigenetic clocks. According to the geroscience hypothesis, this should manifest as a lower multimorbidity rate in women. However, as this seems not to be the case, there must be other factors in action as well, some of which we have highlighted in this paper, and that may interact with each other in a complicated manner. The geroscience hypothesis may still be valid but needs to be put in a sex-specific context, as we need to widen our thinking and search for more answers in the data. With the accumulating knowledge, the hope is that we will eventually be able to better tackle those negative aging outcomes that are preventable or reversible.

The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication. The figures have been created with Biorender (human shapes) and using icons from the Noun Project (Horst, JS, and Hout, MC, 2015).
Home dialysis in older adults: challenges and solutions

ABSTRACT

There is a rising demand for dialysis in the older population given the increased numbers of older adults living with chronic kidney disease (CKD) progressing to kidney failure. Home dialysis, i.e. peritoneal dialysis (PD) and home hemodialysis (HHD), has been available for decades, but more recently there has been a rapid increase in home dialysis utilization as patients and clinicians consider its practical and clinical advantages. For older adults, incident home dialysis utilization more than doubled and prevalent home dialysis use nearly doubled over the past decade. Whilst its advantages and recent rise in popularity are evident, there are numerous barriers and challenges that are important to consider prior to initiating older adults on home dialysis. Some nephrology healthcare professionals do not view home dialysis as an option for older adults. Successful delivery of home dialysis for older adults may be made even more difficult by physical or cognitive limitations, concerns around dialysis adequacy, and treatment-related complications, as well as challenges relating to caregiver burnout and patient frailty that are unique to home dialysis and older adults. Ultimately, it would be important for clinicians, patients and their caregivers to define what constitutes a 'successful therapy' to ensure treatment goals are aligned towards each individual's priorities of care, considering the complex challenges that surround an older adult receiving home dialysis. In this review, we evaluate some of the key challenges surrounding the delivery of home dialysis to older adults and propose potential solutions based on updated evidence to overcome these challenges.

INTRODUCTION

A greater proportion of older adults diagnosed with kidney failure are being initiated on dialysis as our global population continues to expand. In the most recent European Renal Association Registry Annual Report, the incidence and prevalence of kidney replacement therapy were highest among those who were ≥75 years of age (539 and 3154 per million age-related population, respectively) [1]. Whether this increase is related to changes in eligibility or access to dialysis, improvements in outcomes for older adults, better management of comorbidities or all of the above, it is clear that the cause of increased dialysis use is multifactorial [2]. Irrespective of the underpinning reasons, dialysis is uniquely challenging for older adults, and nowhere is this more apparent than in home dialysis. Home dialysis, i.e. peritoneal dialysis (PD) and home hemodialysis (HHD), has been available for decades. In some countries (e.g. the USA), there have been noticeable increases in home dialysis utilization for older adults, suggesting greater recognition by stakeholders that it is a viable treatment option in older people. In the latest United States Renal Data System Annual Report, incident home dialysis utilization more than doubled from 3.2% to 8.2% over the last decade for those aged 80-99 years [3]. Similarly, prevalent home dialysis use nearly doubled from 4.2% to 7.8% [3]. While this is a positive finding, there may be an upper limit beyond which further increases are not feasible, and any increase must be interpreted in the context of the baseline rate. The 2021 Australian and New Zealand Dialysis and Transplant Registry (ANZDATA) noted little to no increase in home dialysis utilization for patients aged 75 years or above between 2016 and 2020 (i.e.
PD incidence increased from 13.8% to 14.0% and prevalence increased from 20.3% to 20.6%), although the incidence and prevalence in 2016 were already high [4]. The growth of home dialysis uptake amongst older adults in some countries is likely due to its practical advantages. One important reason to favor home dialysis, especially in the older population, is the greater flexibility in conducting dialysis sessions, which allows for more options to schedule the patient's other lifestyle activities and caregiver support around their dialysis needs. Another key advantage of home dialysis to consider in older adults is related to the better control of hemodynamic balance. Due to the slow removal of molecules, PD may achieve better cardiovascular stability, whilst intensive HHD may achieve better blood pressure (BP) control during dialysis [5,6]. Whilst these advantages are evident, there are a number of challenges that are important when considering how best to manage older adults on home dialysis. The uptake of home dialysis across Europe in patients aged 65 years or above over recent years is variable, with a noticeable decline in the prevalence of patients receiving PD in several European countries [7][8][9][10][11]. These variations may be due to differences in the clinical experience of healthcare professionals, as well as organizational and financial factors [12,13]. Some nephrology healthcare professionals do not view home dialysis as an option for older adults, especially those with deficits in their health. The delivery of home dialysis to older adults may be made even more difficult by physical or cognitive limitations, concerns surrounding dialysis adequacy, treatment-related complications, as well as challenges relating to caregiver burnout and living with frailty that are unique to home dialysis and older adults. Finally, despite being an option that is largely associated with comparable or improved outcomes compared with in-center hemodialysis (HD), the cumulative risk of death among older adults receiving PD remains high, and in some studies higher than for those remaining on in-center HD. Compounding these issues, one of the bigger challenges relates to the fact that a variety of thresholds are used to define an 'older' person, leading to inconsistency and difficulties with comparisons. Chronological age does not equate to biological age, yet studies evaluating the feasibility and outcomes of home dialysis among older adults do not always consider frailty, fitness or functional status. Although these challenges are of importance, they are not insurmountable. Strategies to overcome these challenges include assisted home dialysis, individualized approaches to treatment initiation or a relaxation of treatment targets to avoid treatment-associated complications. Above all, simply redefining what constitutes a 'successful therapy' may help to better ensure that we are staying aligned with what is important to patients when considering home dialysis. In this narrative review, we will describe some of the important challenges in managing older patients on home dialysis, and subsequent strategies to overcome these challenges.

PERCEPTIONS AND TREATMENT-TEAM BIASES TOWARDS OLDER ADULTS PURSUING HOME DIALYSIS

Not all practitioners view home dialysis as an option for older adults despite its potential advantages, particularly older adults with impairments that tend to accumulate with age.
Nephrologists in the UK were previously surveyed about the feasibility of home dialysis and preferences for PD versus in-center HD when given a list of patient factors. Among the perceived barriers, only 9% favored PD for those >70 years of age, while 35% favored HD [14]. When considering factors that develop with age such as poor visual acuity, poor motor strength and impaired cognition, the proportion favoring PD was even lower. This perception of ineligibility is not unique to nephrologists. In an online survey of 89 nurses working across different areas in a large dialysis facility in Toronto (Canada), home dialysis nurses preferred home dialysis for those >70 years of age [15]. In contrast, nurses working at in-center HD units strongly preferred in-center HD for older patients. This suggests an unconscious bias towards eligibility which may be influenced by clinical experience and expertise. This survey was repeated more recently in a multidisciplinary group of nephrology practitioners. While there were some who were in favor of PD, in-center HD nurses continued to prefer in-center HD for older patients, and other allied health members appear to have a neutral view on this issue [16]. In contrast to these views which may be explicit, some perceptions may be implicit against the notion that home dialysis is an option for older adults. In a survey of nephrologists in France, 298 responders were asked to comment on their preferred dialysis modality if they had kidney failure [17]. Interestingly, there were differences in survey results based on the age of the responding nephrologist. Older nephrologists preferred HD and cited dialysis efficiency as a major factor [17]. Younger nephrologists felt the opposite and cited flexibility and professional freedom as the reason. Perceptions such as these may underlie the observation that older adults are less likely to initiate chronic dialysis with PD [1][2][3]18]. Whether explicit or implicit, how can bias be overcome? Providing education to patients around modality decisions that is standardized, that evaluates all options and that encourages and promotes home dialysis without preconceived bias (while focusing on patient-important values) is a valuable approach. In a prospective study of a structured pre-dialysis education program, the nephrologist's decision was but one component of the assessment [19]. Questionnaires that evaluate patient values (i.e. travel expenses, flexibility with treatment time and the ability to take up employment), the availability of an algorithmic approach to map patient values towards a dialysis modality and dialysis education programs tailored to patient needs accompanied the physician's impression [19]. Ultimately, this approach led to a higher proportion of patients expressing a preference for home dialysis and an increase in the proportion that received it. Education is helpful not just for patients, but for healthcare professionals working in nephrology as well. In a fellowship training survey, it was identified that training inadequacies existed for both PD and HHD amongst fellows. Almost 70% of fellows felt either unprepared or minimally prepared to deliver HHD and 27% for PD [20]. It is hard to be comfortable in delivering home dialysis to vulnerable groups including older adults with limited experience, and this emphasizes the importance of educating healthcare professionals. There is optimism that education initiatives may change opinions about the eligibility of older adults for home dialysis. 
In a survey of 89 nurses working in HD, 26% felt home dialysis could not be performed in adults >70 years of age [21]. After a continuing nurses education initiative (inclusive of a presentation on how to overcome barriers and a patient testimonial video), that proportion fell to 10% [21]. Other than these aforementioned factors, perceptions and treatment-team biases against home dialysis may also be influenced by the incumbent policy of reimbursement within a health system, with this being a major variable that limits home dialysis in many countries [22,23]. Reimbursement schemes usually make in-center HD a more attractive dialysis modality option for service providers [23]. Unless all stakeholders recognize the chance to prioritize patient-centered dialysis over profitability, it is likely many dialysis services will continue to be predominantly provider system-centered. For one, transport costs (per patient/per week) for patients to travel for in-center HD are an expenditure that could instead be allocated to set up home dialysis programs and make this option more financially attractive [24,25]. Other ways to encourage service providers to increase home dialysis utilization may include adjusting reimbursement of for-profit clinics according to strategic quality performance indicators, such as the percentage of home dialysis offered to patients [25].

Is prolonging survival the most important outcome for older adults?

One of the challenges that may influence provider perceptions towards home dialysis is that some studies have suggested outcomes are worse, especially amongst older adults receiving PD. It has been shown that older PD patients are at a higher risk of mortality (pooled relative risk 2.45, 95% confidence interval 1.36-4.40) versus younger PD patients [26]. When comparing across modalities, outcomes among older adults differ. In a large systematic review of incident Korean patients (≥65 years of age), the pooled hazard ratio for mortality was 1.10 (95% confidence interval 1.01-1.20) for PD versus HD and even higher for those of longer dialysis duration or with diabetes [27]. It is possible that outcomes have improved for older adults on PD in more contemporary eras. In a study from Australia and New Zealand, the adjusted risk of death was lower when comparing HHD with in-center HD for those ≥65 years of age [28]. In contrast, patients receiving PD had a higher risk of death compared with in-center HD in the earlier era (1998-2002) but a similar relative hazard for mortality in the more contemporary era (2013-17) [28]. While comparisons of mortality outcomes between dialysis modalities are frequently discussed in the literature, it is clear that basing modality choice solely on anticipated survival is the incorrect approach. In the Standardized Outcomes in Nephrology-Peritoneal Dialysis (SONG-PD) study of 126 patients/caregivers, mortality was viewed as having the second highest importance score [29]. However, when looking only at those ≥55 years of age, several other factors (PD infection, fatigue, ability to travel, flexibility with time and ability to work) were viewed as being more important (Fig. 1) [29]. While a similar study in HHD is an important consideration for future study, the results of the SONG-PD study emphasize that improving survival is not the sole objective when considering home versus in-center dialysis, particularly for older adults.

Defining dialysis adequacy and goals in older adults

In contrast to mortality, it is clear that quality of life is a more important outcome.
As such, dialysis adequacy should be an individualized process with both the updated 2020 International Society of Peritoneal Dialysis (ISPD) guidelines and Standardized Outcomes in Nephrology-Hemodialysis advocating for a personalized, goal-based approach to define PD and HHD adequacy in older adults [30,31]. Such an approach would ideally take into account comorbidity burden, clinical suitability for PD or HHD, how other treatments may interact in affecting patient outcomes, the patient's preferences for dialysis modality, available caregiver support during dialysis, and treatment end-goals [30,31]. HD delivered in the home setting can also provide additional benefits such as increased autonomy, individualization of therapy and elimination of transport (leading to increased well-being). Optimizing the initiation of home dialysis in older adults When an older adult decides to pursue home dialysis, it is important to consider how best to initiate therapy while taking into account the factors that both positively and negatively impact patient experience and outcomes. It has been shown that many patients have concerns and fears upon initiating HHD, mainly due to the illness intrusiveness and fears of being isolated from care [32]. Finding strategies to minimize these fears may be of benefit, including the provision of respite care for those initiating HHD and individualized ways to provide support to home dialysis patients during periods of care transitions such as starting dialysis [33]. It is well established that early initiation of chronic dialysis is not beneficial for patient outcomes [34]. Therefore, most guidelines suggest deferring dialysis initiation unless indicated, based on a patient's acute/subacute clinical progression [35]. Among older adults initiating on PD, similar findings have been noted. In a recent study, the risk of mortality was relatively higher for those initiating chronic dialysis at estimated glomerular filtration rate ≥7.5 versus <5 mL/min/1.73 m 2 [36]. While this may simply be a manifestation of indication bias (i.e. patients are started on chronic dialysis at early glomerular filtration rates because of worsening health which in turn is associated with a higher mortality risk), there may be an inherent desire to start patients early to avoid unplanned transitions through HD. To overcome this, a number of innovative strategies exist to facilitate timely home dialysis initiation in unplanned situations that may be extendable to older adults. Education initiatives directed to those in need of starting urgent in-hospital dialysis have been shown to be effective at enhancing the number of patients immediately transitioning to either PD or HHD [37,38]. Urgent start PD (i.e. placement of a catheter and initiation of PD within 2 weeks of placement on an inpatient or outpatient basis) has also been shown to be an effective approach to improving the uptake of chronic PD, with good 1-year survival outcomes and a low risk of complications [39]. Buried PD catheters may be a strategy to establish PD access well in advance of dialysis initiation, thereby increasing the probability of a direct PD start without significant concerns on having to start HD as a bridge to PD [40]. Finally, supporting patients with referrals to transitional care units to improve their experience of initiating home dialysis may enhance the uptake of PD after an unplanned in-center HD start [41]. In HHD, a particular consideration is that training time is correlated with age. 
Extended training time (with a median of 75 days) needs to be considered in planning HHD infrastructure and capacity for these individuals, and may help to support HHD utilization in older adults [42]. Among those who commence on either PD or HHD, there has been emerging support for 'starting slowly' (especially among those with residual kidney function) to minimize the burden placed on patients and caregivers. In a recent study, 175 patients started dialysis with an incremental PD prescription (continuous ambulatory PD or automated PD with assistance), with a daily PD fluid volume of ≤6 L/day and/or <7 days of PD/week. While this study was not specific to older adults, the mean age of this population was 60 ± 17 years, and outcomes were as expected for an incident cohort of PD patients initiating on full-dose PD [43]. Similarly, incremental HD has garnered more attention, with observational studies demonstrating comparable or improved outcomes for those receiving incremental prescriptions (provided there are no contraindications) [44]. Not surprisingly, there are a number of upcoming clinical trials evaluating the feasibility and benefits of incremental HD when initiating dialysis (NCT04360694 and NCT04932148). Therefore, starting in a less intense fashion may be an attractive approach that allows for a graded transition to dialysis, something that intuitively would be of benefit to older adults who are faced with the potentially daunting task of commencing on long-term dialysis.

Identifying and managing treatment-related complications in older adults

Both PD and HHD are generally feasible and safe in older patients. However, infections, circulatory volume overload, BP instability and malnutrition are commonly observed (Table 1). PD peritonitis is the most common cause of mortality amongst PD patients aged above 65 years, accounting for approximately half of all cases of PD-related mortality [45]. Risk factors for peritonitis in older patients are numerous [46][47][48][49]. Reduced hand dexterity and eyesight with aging, combined with a deterioration in cognitive function, could affect performance and adherence to the aseptic demands of PD treatment, contributing to an increased risk of contamination during each exchange. Constipation is common in the general older population, and it may increase the risk of PD peritonitis by elevating the activity of bacterial intestinal translocation [50]. Recent ISPD guidelines have provided recommendations aiming to address the risk of peritonitis in older adults [51]. Daily application of topical prophylactic antibiotics such as mupirocin ointments and intermittent nasal mupirocin are recommended by ISPD to prevent exit site infections and Staphylococcus aureus carriage in older patients receiving PD [51].
Table 1. Complications of home dialysis in older adults and suggested management strategies.

PD peritonitis [45-56]:
• Daily application of mupirocin ointment and intermittent nasal mupirocin as prophylactic treatment
• Regular re-assessment of patient and caregiver PD technique and advice on touch contamination risks
• Prompt recognition where independent PD is no longer suitable due to cognitive and functional decline
• Prevention of constipation by encouraging more fruits, vegetables and fiber intake (while monitoring for electrolyte complications)
• Consideration of early acute inpatient care, particularly for those who are frail and those with limited support at home
• Regular review of dose-related adverse effects of antibiotics (e.g. neurotoxicity from third-generation cephalosporins and carbapenems, Clostridium difficile infection) and antibiotic-resistant strains of bacteria
• Assistance with intraperitoneal antibiotic administration by trained community nurses or caregivers once discharged back to the community

HHD vascular access infection [57-60]:
• Intensive training programs prior to HHD initiation for patients and caregivers to ensure procedural technique for vascular access is of a satisfactory level
• Ensure availability of subsequent refresher training opportunities to maintain the level of procedural technique for vascular access
• Regular prophylactic antibiotic treatment administered intravenously may not always be in the best interests of an older patient receiving HHD

Circulatory volume overload [61-66]:
• Address dialysis-specific factors to reduce the risk of volume overload (i.e. for PD, optimize effluent drain volumes and membrane transport status; for HHD, ensure dialysis intensity and session frequency are adequate to optimize volume status)
• Ensure compliance with fluid intake restrictions and maintenance of a low-salt dietary intake pattern

Blood pressure instability [67-76]:
• BP management should include personalized targets, avoidance of symptomatic or overtly low BP, and consideration of goals of treatment, volume status, comorbidities and home environment
• Regular home BP monitoring should be encouraged, with assistance from caregivers to ensure BP measurements are done properly with automated devices

PEW, malnutrition and electrolyte abnormalities [77,78]:
• Holistic evaluation of an older patient's nutritional needs and electrolyte requirements through a combined evaluation of the patient's appetite, body weight, dietary intake and physical examination of muscle mass and body fat loss; ensure regular biochemical tests for electrolyte and vitamin levels are being completed to guide management
• Multidisciplinary care approach in the community to ensure appropriate nutrition and electrolyte supplementation when indicated, and caregivers to encourage appetite for adequate dietary intake

Regular re-training of PD technique for independent patients and caregivers, and frequent re-assessment of an older patient's ability to perform PD, were shown to be useful in reducing touch contamination [52]. Prompt recognition of scenarios where an older patient is not suited to perform PD independently, for example where acute cognitive and functional decline is apparent following acute stroke, is important [51].
In these circumstances, it is important to involve patients' families in the shared decision-making process, and identify means of caregiver support (and perhaps more regular community nursing assistance) during PD, or whether it would be in the patient's best interests to continue PD. Prevention of constipation should be encouraged via a healthy dietary pattern with more fruits, vegetables and fiber intake. For older patients with multiple comorbidities experiencing acute peritonitis, inpatient care until treatment response is usually advised [51]. One of the noticeable challenges related to antibiotic therapy in older patients with PD peritonitis is the susceptibility to dose-related adverse effects of antibiotics, especially neurotoxicity from third-generation cephalosporins and carbapenems [53,54]. Older adults are also more susceptible to PD peritonitis from antibiotic-resistant strains of bacteria [55]. It is recognized that many older patients are unable to independently administer intraperitoneal antibiotic injections following a PD peritonitis episode [56]. Intraperitoneal antibiotic administration requires a good degree of manual dexterity and aseptic technique in handling sharp needles and injecting into the PD bags. For older patients with PD peritonitis who require assistance, administration of intraperitoneal antibiotic injec-tions and PD exchange can be facilitated by a trained nurse or caregiver [56]. If intraperitoneal antibiotic treatment fails to eradicate PD peritonitis sufficiently, a shared decision-making process to consider catheter removal is needed, and the prognosis and treatment wishes of the older patient need to be considered when this step is taken. Infective complications for older patients receiving HHD are usually related to vascular access. In particular, infection rates were higher in patients receiving HHD through central venous catheters compared with arteriovenous accesses (although buttonhole cannulation-associated infection rates were comparably higher to rates seen with central venous catheters in some studies) [57,58]. This emphasizes the need for intensive education programs initially with subsequent refresher courses, to ensure patients and caregivers are aware of the procedural requirements in minimizing access-related infection [59]. When vascular access infection is identified, prompt confirmation with exit-site and blood cultures would be required, followed by removal of the infected access [60]. Other than infection, cardiovascular complications associated with volume overload, BP instability and myocardial stunning are major considerations for older adults receiving home dialysis. Circulatory volume overload is especially common and up to half of all patients receiving long-term home dialysis may exhibit circulatory congestion [61][62][63]. 
Numerous factors can contribute to volume overload, including PD-specific factors such as low effluent drain volumes and high membrane transport status, and HD-specific factors related to inadequate dialysis intensity and session frequency [64,65]. These factors should be closely monitored and addressed to optimize volume control. Non-compliance with fluid intake restrictions and a high-salt dietary intake are patient-specific factors of concern. For older patients, blunted taste acuity may lead to adoption of a higher salt content diet [66]. Therefore, improving salt literacy and awareness of foods with high salt contents is essential for better low-salt intake compliance, especially among older adults.

Figure 2. Challenges in delivering home dialysis to older adults and corresponding solutions.
• Challenge: Negative perceptions and preconceived biases from healthcare professionals regarding the eligibility of older adults for home dialysis. Solution: Education initiatives to improve awareness of the feasibility and benefits of home dialysis for older adults, emphasizing patient-important outcomes as opposed to survival as the best metric of 'success'.
• Challenge: Concerns about the burden of treatment upon initiation of home dialysis. Solution: Starting slowly with an incremental dose approach for HD and PD; planning for the unplanned through buried PD catheters, urgent start PD, catheter and single needle options where appropriate; and establishing transitional care units.
• Challenge: Home dialysis-associated complications could be more prevalent in the older population with increased frailty and co-morbidity status. Solution: Employing an individualized, goal-based approach in managing dialysis-related complications (likely more complex in older adults) and measuring clinical frailty scores routinely, to assess the degree of frailty against levels of motivation to remain on home therapy.
• Challenge: Caregiver stress and burnout sustained from the burden of caring for the older adult receiving dialysis at home. Solution: Development of a feasible long-term care plan to support caregivers with resources and personnel for regular respite care, community nursing assistance and re-education programs, as well as patient and caregiver financial incentives.

Optimizing BP control remains a challenging prospect in older patients receiving PD or HHD. The relationship between BP and mortality is complex in the dialysis population, in that either extreme is associated with a higher mortality risk [67][68][69]. Hypotension has been shown to have a stronger association with mortality over short-term follow-up [70][71][72]. Older individuals with low BP at baseline are more likely to have underlying heart failure and other cardiac disease, and cardiac comorbidities most likely explain early mortality [70][71][72]. Evidence remains inconclusive in relation to the longer-term risks associated with hypertension for older patients receiving dialysis, with no definitive guidance regarding strict BP targets [69]. A precise, standardized method to determine intradialytic and interdialytic BP is still under debate [73]. General hypertension guidelines do not account for differences in individual cardiovascular risks for patients receiving long-term PD or HHD, especially for older adults who may be prone to complications with hypotension [74,75]. Therefore, a universally applicable BP management strategy is not supported [69]. In contrast, BP management in older patients requiring PD or HHD should be individualized, with specific aims to avoid overly low BP along with considerations of an older individual's goals of treatment, volume status, comorbidities and home environment.
Regular home BP monitoring should be encouraged with assistance from caregivers to ensure BP measurements are done properly with automated devices [69,73,76]. Adherence to medications that control BP is important in older patients receiving home dialysis, and would require frequent, regular counselling and education from the multi-disciplinary team. A final complication of major importance to older patients receiving home dialysis is protein-energy wasting (PEW), malnutrition and electrolyte abnormalities. The mechanisms underlying why older patients are more susceptible to PEW, nutritional deficiency and electrolyte abnormalities are multifactorial, not limited to an individual's genetic and phenotypical features, but also contributed to by other environmental factors of aging and frailty: increased cellular mitochondrial dysfunction and oxidative stress, inflammation, reduced immunity, lifestyle, psychosocial condition, and invariably kidney failure and dialytic factors [77]. As recommended by the International Society of Renal Nutrition and Metabolism, monitoring and assessment of nutrition status is essential in the older dialysis population through a combined evaluation of the patient's appetite, body weight, dietary intake and physical examination of muscle mass and body fat loss [78]. This process should be supplemented by regular biochemical tests for electrolyte and vitamin levels to guide management [75]. Nutrition and electrolyte management in the older home dialysis population require a multidisciplinary care approach in the community and regular family support, if available, to encourage appetite and guide appropriate dietary requirements.

Caregiver dependence and assisted home dialysis

The importance of caregiver support for older patients receiving home dialysis is acknowledged. There is significant symptom burden in older patients receiving home dialysis, and the wide range of symptoms is complex, multifactorial, and difficult to assess and manage [79,80]. Older patients living at home with kidney failure usually experience multiple simultaneous symptoms, and the extent of these symptoms changes during dialysis treatment [79,81]. Whether the symptom burden is primarily physical or psychological, the presence of regular caregiver support has improved overall clinical and quality of life outcomes for older patients receiving home dialysis [82,83].

Table 2. Strategies to support and advance home dialysis in older adults, grouped by type of strategy.

Training:
• Broadly delivering educational initiatives for all home dialysis modalities to older adults, caregivers and healthcare professionals to overcome the preconceived bias that home dialysis is not an option
• Continued promotion and delivery of home dialysis training opportunities for healthcare professionals from low-uptake countries by working in collaboration with centers of excellence
• Developing individualized training programs for older adults; success in this is often driven by early involvement of caregivers, highly skilled trainers and extended HHD training time; consider retraining and refresher programs annually or as necessary

Managing:
• Encouraging evaluation of prioritized goals of home dialysis for each older adult at different phases of treatment, and ensuring resources and appropriately skilled multi-disciplinary personnel are available
• Continued work to improve assisted care models in PD and HHD
• Quality improvement initiatives aimed at minimizing symptom burden and ensuring early identification and intervention of home dialysis complications and comorbidities

Reducing attrition:
• Increasing collaboration with governments and industry to create financial and reimbursement schemes for robust support systems for eligible older adults and caregivers
• Transition from PD to HHD through early identification of PD technique failure due to complications or inadequacy; reappraise advance care plans on a regular basis
• Identifying areas for continued innovation and improvement of current telehealth platforms, such as virtual wards, digital rehabilitation programs and others, to reduce attrition and improve support to patients and caregivers

Assisted PD models have displayed successful results over the 20 years since they were first introduced, with improved clinical outcomes for older populations receiving PD in terms of PD-associated mortality, technique survival and symptom burden [82]. In an international retrospective cohort analysis, >50% of older patients on HHD required home assistance either by the partner or by a dialysis assistant [42]. Assisted HHD is increasingly promoted for older adults, including approaches that rely on family caregivers or those that use nursing staff to provide HD even for those with advanced comorbid conditions [83,84]. Nurse-assisted HHD is cost-effective, but for both patient- and nurse-assisted HHD, it is emphasized that success is dependent on the quality of training provided for nurses and caregivers and the extent of caregiver support during HHD [85][86][87]. Nevertheless, sustained caregiver dependence has emerged as a challenging problem. Build-up of caregiver stress and burnout from the repetitive 'wear-and-tear' caregiver tasks is an important concern [87][88][89][90][91]. The prevalence of caregiver overburden when taking care of an older adult receiving dialysis is variable across published observational studies, with this being reported as high as 85% [92,93]. Family members of older patients receiving home dialysis may require full-time employment for financial sustainability, and taking up caregiver roles and responsibilities simultaneously may lead to caregiver burnout [94]. Caregivers for older patients receiving home dialysis may also be older individuals themselves with chronic illness, and this responsibility could add further physical and psychological burden [94]. Qualitative studies reported low mood in a significant proportion of family members caring for patients on nocturnal PD and HHD [88][89][90][91]. Even if not real, perceived caregiver burnout may further impair a patient's well-being and quality of life [95]. To address these issues, it is helpful to develop a more feasible long-term care plan for both the older individual receiving home dialysis and their caregivers to reduce caregiver stress and burnout, such as having regular respite care and availability of community nursing support [91,96].
Availability of nursing support to provide re-education programs in performing assisted dialysis could be instrumental in improving caregiver confidence, reducing anxiety, and maintaining safety standards and the quality of home dialysis delivery. Initiatives to provide government-led financial support programs and digital health platforms where caregiver support networks are established could also prove useful.

Addressing frailty

The interaction of age and frailty is a natural but critically important dimension in self-care dialysis. Previous findings suggest frailty is a more representative measure of the capacity to withstand the demands of dialysis [97]. Home dialysis may help to mitigate geriatric syndromes, including frailty at the time of dialysis initiation, as it may reduce dialysis-related complications such as intra-dialytic hypotension, cerebral disturbances, cardiac events, malnutrition, infections, sleep disorders and psychological problems [98]. On the contrary, the presence of significant frailty may negatively affect incident uptake of home dialysis and increase dropout [99]. Focusing on measured frailty rather than age alone may help address some of the challenges and emerging solutions in this population [94,97,100,101]. Access to dedicated pre-habilitation programs, nutrition support and rehabilitation programs delivered either face-to-face or remotely via a digital interface may help prevent deterioration and dropouts and also positively impact health outcomes and well-being on home therapies [100].

SUMMARY AND FUTURE DIRECTIONS

Home dialysis, where possible, can offer a lot more to patients even in the presence of old age and comorbidities. Extended support through assisted home dialysis care models and robust support systems may be an attractive option for the older population, to help mitigate risks and address potential complications in a timely manner. A personalized approach to dialysis care in older adults is highly desirable and is best offered in the setting of home, where a range of options exist in flexibility, prescribing, support and degree of autonomy. There are many barriers and challenges to realizing this for all eligible and willing patients, and we hope our review of updated evidence provides potential solutions for tackling some of the key issues surrounding home dialysis care in older adults (Fig. 2). Ultimately, home dialysis for older adults is still emerging, and in need of advancement and innovation through enabling technology, robust pathways and supportive health policy and reimbursement strategies (Table 2). Continued efforts by the global nephrology community to identify unmet needs of older adults living with kidney failure would be instrumental to provide further directions in optimizing home dialysis care for this patient population.
2022-10-11T15:54:44.941Z
2022-10-07T00:00:00.000
{ "year": 2022, "sha1": "493ff45f8b254b3a5a715236a466be39fbf3632d", "oa_license": "CCBYNC", "oa_url": "https://academic.oup.com/ckj/advance-article-pdf/doi/10.1093/ckj/sfac220/46578894/sfac220.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "3c8163615b5ce543e39394243436d936f133f43f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
252019989
pes2o/s2orc
v3-fos-license
High-Intensity Ultrasound Processing Enhances the Bioactive Compounds, Antioxidant Capacity and Microbiological Quality of Melon (Cucumis melo) Juice The bioactive compounds, antioxidant capacity and microbiological quality of melon juice processed by high-intensity ultrasound (HIUS) were studied. Melon juice was processed at two ultrasound intensities (27 and 52 W/cm2) for two different processing times (10 and 30 min) using two duty cycles (30 and 75%). Unprocessed juice was taken as a control. Total carotenoids and total phenolic compounds (TPC) were the bioactive compounds analyzed while the antioxidant capacity was determined by DPPH, ABTS and FRAP assays. The microbiological quality was tested by counting the aerobic and coliforms count as well as molds and yeasts. Total carotenoids increased by up to 42% while TPC decreased by 33% as a consequence of HIUS processing regarding control juice (carotenoids: 23 μg/g, TPC: 1.1 mg GAE/g), gallic acid and syringic acid being the only phenolic compounds identified. The antioxidant capacity of melon juice was enhanced by HIUS, achieving values of 45% and 20% of DPPH and ABTS inhibition, respectively, while >120 mg TE/100 g was determined by FRAP assay. Further, the microbial load of melon juice was significantly reduced by HIUS processing, coliforms and molds being the most sensitive. Thus, the HIUS could be an excellent alternative supportive the deep-processing of melon products. Introduction Melon (Cucumis melo) is one of the most consumed fruits worldwide as it contains naturally occurring vitamins (~60 mg/100 g [1]), minerals (~260 mg/100 g [1]), and pigments (>2000 µg/100 g [2]) that provide taste, health benefits, high antioxidant, and anti-inflammatory properties. Melon is rich in important vitamins and also a good source of pro-vitamin A [3]. However, its high level of water together with low acidity makes it a very perishable fruit resulting in high postharvest losses. In general, postharvest loss can be controlled by postharvest treatments, preservation technologies and deep processing technologies, melon juice being one of the most important deep processing products of melon with a high nutritional value [4]. Fruit juices are commonly subjected to thermal processing to assure their safety and extend their shelf-life. Nevertheless, thermal procedures, such as pasteurization or sterilization, lead to the loss of heat-sensitive nutrients, bioactive components and compromise the fresh taste [5]. In fact, it has been found that heat-treated melon juice exhibits a strong cooked off-flavor [4], limiting the development of melon products. All this, together with the high consumer demand for high-quality and fresh-like fruit juices, has been encouraging food industries to look for mild preservation techniques that minimize the negative impact of thermal treatments [6]. Thus, non-thermal techniques have been the focus of many investigations in order not only to preserve the nutrients, but also to improve the quality of food processed products. Concerning melon juice processing, non-thermal technologies, such as ozone [7,8], ultraviolet (UV-C) irradiation [6,9] and ultrasound (US) [10], have been studied in order to improve or maintain the quality of juice. 
However, ozone and UV-C irradiation have been shown to be unsuitable for maintaining the overall quality of melon juice, since they promote the degradation of different biocompounds, modifying the color of the juice [7,8], or are unable to reduce the intrinsic microbial load of the juice [6], affecting its stability during storage. In contrast, US has been shown to be suitable for inactivating some degradative enzymes, such as polyphenol oxidase, peroxidase and ascorbate peroxidase, promoting better color retention of melon juice and high cloud stability during storage [10], although there is no evidence regarding possible alterations in the bioactive compounds and microbial quality of melon juice as a consequence of US. Interestingly, US has shown a strong antimicrobial capacity against a broad spectrum of microorganisms and has been recognized as a promising technique for meeting the FDA safety requirements for fruit and vegetable products [11]. Further, it has been reported that the application of US technology to food processing may increase the bioavailability and/or bioaccessibility of different biocompounds [12], such as phenolic compounds [13,14], carotenoids [15][16][17][18], anthocyanins [13,19,20] and flavonoids [15], among others [21], resulting in an increase in the functional properties associated with these biocompounds, i.e., antioxidant capacity [13,21], antimicrobial activity [12], antidiabetic activity [15], anticancer activity [12,13], etc. Thus, US processing could not only improve the quality and safety but also extend the shelf life of different fruit juices [10,22]. The extended shelf-life of juices promoted by US technology has been attributed, on the one hand, to the inactivation of degradative enzymes, such as polyphenol oxidase [23], and of native microorganisms [23][24][25] and, on the other hand, to the breakdown of cell walls, which increases the release of bioactive compounds with antioxidant properties into the juice [14,20,23], improving its quality. Based on all this, US technology could be a promising technological alternative that is able to retain the bioactive compounds and functional properties of melon juice while inactivating its intrinsic microbial load. Therefore, the main aim of this study was to assess the physicochemical properties, bioactive compounds (specifically phenolic compounds and carotenoids), antioxidant activity and microbial quality of melon juice processed by high-intensity ultrasound (HIUS). High-Intensity Ultrasound (HIUS) Processing The acoustic treatment of melon juice was performed using a Branson Sonifier SFX-550 (Branson Ultrasonics Corp, Danbury, CT, USA) equipped with a 1/2-inch tip horn, operating at 550 W and 20 kHz. Approximately 200 g of melon juice was placed into a double-jacketed vessel (250 mL). The HIUS treatments were carried out using two different acoustic intensities (27 and 52 W/cm2) for two different processing times (10 and 30 min) with two US duty cycles (30 and 75%). The temperature of the samples was controlled by recirculating water at 10 ± 2 °C through the jacketed vessel. Melon juice was heat pasteurized for comparison purposes. Thus, around 200 g of melon juice was heat-treated at 65 °C for 30 min in a double-jacketed vessel. Unprocessed melon juice was taken as a control sample. All experiments were performed in triplicate.
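As an illustration of the factorial design just described (two acoustic intensities, two processing times and two duty cycles, plus the pasteurized and untreated references), the following Python sketch simply enumerates the treatment conditions; the labels are hypothetical bookkeeping names, not identifiers used in the study.

```python
from itertools import product

# Factorial HIUS design described above: 2 intensities x 2 processing times x 2 duty cycles,
# plus the pasteurized (65 degC, 30 min) and untreated control references.
intensities_w_cm2 = (27, 52)
times_min = (10, 30)
duty_cycles_pct = (30, 75)

treatments = [
    {"label": f"HIUS_{i}Wcm2_{t}min_{d}pct", "intensity": i, "time_min": t, "duty_pct": d}
    for i, t, d in product(intensities_w_cm2, times_min, duty_cycles_pct)
]
treatments += [{"label": "pasteurized_65C_30min"}, {"label": "control_untreated"}]

for trt in treatments:           # 8 HIUS conditions + 2 references, each run in triplicate
    print(trt["label"])
```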
Finally, melon juice was freeze-dried in a FreeZone 6 Benchtop Lyophilizer (Labconco, Kansas City, MO, USA) operating at −40 • C and 0.07 mBar for posterior determination of total carotenoids, total phenolic compounds, individual phenolic compounds by HPLC-DAD, and antioxidant activity. Centrifugal Sedimentation The centrifugal sedimentation assay was carried out following the methodology previously described by Shen et al. [14] with slight modifications. Approximately 50 mL of melon juice were placed in a centrifuge tube and centrifuged at 3500× g for 15 min. The supernatant was discarded and the sediment was weighed. The centrifugal sedimentation rate was calculated as follows: where m 1 is the weight of the precipitate after centrifugation and m 2 is the weight of the juice before centrifugation. Color Difference The color difference of HIUS melon juice was determined using a CR-300 colorimeter (Konica Minolta Sensing, Ink., Tokyo, Japan). The L* (brightness), a* (greenness/redness) and b* (blueness/yellowness) values were measured, and the ∆E* (total color difference) was calculated as follows (Equation (2)) [14]: where L * 0 , a * 0 and b * 0 are the values of the control sample, and L * , a * and b * the measured values corresponding to a processed sample. All parameters were measured in triplicate for each treatment. Analysis of Total Carotenoids The determination of total carotenoids in melon juice was determined following the methodology described by Borghesi et al. [26] with some modifications. Approximately 200 mg of the freeze-dried sample of melon juice were mixed with 2.5 mL of methanol, 5 mL of trichloromethane and 2.5 mL of distilled water. The mixture was homogenized at 13,000 rpm for 1 min using an ultraturrax IKA T18D (IKA Works, Wilmington, NC, USA) and subsequently centrifuged at 4050× g for 10 min at 4 • C. The lipophilic phase was separated using a Pasteur pipette and the remainder was re-extracted three times using 5 mL of trichloromethane. All extracts were concentrated in a concentrator vacufuge ® plus (Eppendorf, São Paulo, Brazil) until 5 mL. The total carotenoids were measured in a spectrophotometer HACH at 465 nm. The concentration was determined using the equation (Equation (3)) [27]. where ABS 465 is the absorbance at 465 nm; V is the volume of solvent; m is the mass of the sample and ε is the molar absorbance of trichloromethane at 465 nm (2396 mol/L). Total Phenolic Compounds The total phenolic compounds were spectrophotometrically measured following the Folin-Ciocalteu method, using 96-well microplates, as previously described by Reyes-Avalos et al. [28]. Approximately 500 mg of lyophilized melon juice was suspended in 5 mL of methanol and continuously mixed at 4 • C for 24 h. The results were expressed as mg of gallic acid per g of dry matter (dm). All determinations were carried out in triplicate. Identification of Individual Phenolic Compounds by HPLC-DAD The phenolic compounds were extracted and analyzed by HPLC-DAD according to the method described by Reyes-Avalos et al. [28] with slight modifications. Approximately 500 mg of the freeze-dried melon juice were homogenized with 5 mL of methanol HPLC grade in an ultraturrax IKA T18D (IKA Works, Wilmington, NC, USA). The prepared samples were mixed at 4 • C for 24 h. Then, the samples were centrifuged at 4060× g for 20 min at 20 • C and the supernatant was filtered through a ∅ 0.45 µm PTFE filter before HPLC analysis. 
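The expressions referenced above as Equations (1)-(3) did not survive text extraction. The following minimal sketch assumes their standard forms, consistent with the variable definitions given: the sedimentation rate as the precipitate-to-juice mass ratio expressed as a percentage, the CIE76 color difference, and the usual spectrophotometric carotenoid formula; the exact carotenoid expression (in particular the 10^4 unit-conversion factor) is an assumption and should be checked against reference [27].

```python
import math

def sedimentation_rate(m1_g: float, m2_g: float) -> float:
    """Centrifugal sedimentation rate (%), assumed as precipitate mass / juice mass x 100."""
    return 100.0 * m1_g / m2_g

def delta_e(l, a, b, l0, a0, b0) -> float:
    """CIE76 total color difference between a processed sample (L*, a*, b*) and the control."""
    return math.sqrt((l - l0) ** 2 + (a - a0) ** 2 + (b - b0) ** 2)

def total_carotenoids_ug_per_g(abs465: float, v_ml: float, m_g: float,
                               eps: float = 2396.0) -> float:
    """Total carotenoids (ug/g dm), assuming the common form C = A * V * 10^4 / (eps * m),
    with eps the absorption coefficient quoted in the text (2396)."""
    return abs465 * v_ml * 1e4 / (eps * m_g)

# Illustrative values only (not data from the study)
print(round(sedimentation_rate(7.1, 50.0), 1))                 # ~14.2 %
print(round(delta_e(52.0, -1.8, 18.5, 53.1, -2.0, 17.2), 2))   # ~1.7
print(round(total_carotenoids_ug_per_g(0.35, 5.0, 0.2), 1))    # ~36.5 ug/g
```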
The chromatographic analysis was carried out using an HPLC Agilent 1200 (Agilent Technology, Palo Alto, Santa Clara, CA, USA) equipped with a diode array detector (DAD), a quaternary pump and two C 18 5-µm (250 mm × 4.6 mm) column connected in series. The temperature, flow rate, and injection loop were 20 • C, 0.5 mL/min y 20 µL, respectively. The mobile phase was comprised of (A) acetic acid 0.5% (B) methanol and (C) 80% acetonitrile. The mobile phase gradient was of 95% A, 2% B and 3% C at 2 min, 90% A, 4% B and 6% C at 8 min, 75% A, 10% B and 15% C at 15 min, 60% A, 10% B and 30% C at 30 min, and 95% A, 2% B and 3% C at 40 min. The different phenolic compounds were analyzed at three different wavelengths: 280, 316 and 365 nm. Thirteen individual phenolic standards (99%) were used to identify and quantify. Antioxidant Activity The samples used to determine the antioxidant activity were prepared as in total phenolic compounds. The prepared samples were continuously mixed at 4 • C for 24 h. A MultiSkan FC spectrophotometer and 96-wells plates were used for antioxidant activity determinations. Radical Scavenging by DPPH Assay The effect of high-intensity ultrasound on the DPPH radical scavenging capacity (RSC) of the melon juice was measured as previously described by Ge et al. [29] with slight modifications. An aliquot of 10 µL of melon juice (100 mg/mL) was mixed with 190 µL of 2.5 mM DPPH solution in methanol. The mixture was incubated at 25 • C for 30 min and the decrease in absorbance was measured at 520 nm. The RSC was expressed as the percentage of DPPH radical inhibition and calculated as follows: where A control and A sample refer to absorbance from control and sample, respectively. ABTS Free Radical Scavenging Assay The effect of high-intensity ultrasound on the ABTS free radical scavenging capacity of melon juice was assessed as previously described Ge et al. [29] with slight modifications. An aliquot of 10 µL of melon juice (100 mg/mL) was mixed with 190 µL of 7 mM ABTS solution in methanol containing 2.5 mM potassium persulfate and incubated at 25 • C for 30 min. The change in the absorbance was recorded at 740 nm. The ABTS free radical scavenging was expressed as the percentage of ABTS free radical inhibition and calculated as follows: where A control and A sample refer to absorbance from control and sample, respectively. FRAP Assay The FRAP assay was carried out according to Hernández-Rodríguez et al. [30]. An aliquot of 10 µL of the sample was mixed with 190 µL of the FRAP solution and incubated at 25 • C for 30 min. The absorbance was measured at 593 nm. The antioxidant capacity measured by FRAP was expressed as milligrams of Trolox equivalent per gram of sample (mg TE/g). Microbiological Quality The microbial analysis of melon juice was performed according to the methodology of AOAC for aerobic (990.12), total coliforms (990.14) and yeast and mold (997.02) count plate. Proper serial dilutions were prepared by mixing sterilized distilled water followed by further decimal dilutions up to obtain colonies in the countable range (10-100 CFU/mL). Thus, 3M TM Petrifilm TM aerobic count plate, 3M TM Petrifilm TM coliform count plate and 3M TM Petrifilm TM yeast and mold count plate (3M Company, Saint Paul, MN, USA) were used. The plates were incubated at 35 ± 1 • C for 48 h for the aerobic count, at 35 ± 1 • C for 24 h for total coliforms and at 25 ± 1 • C for 5 days for yeast and molds. 
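The percentage-inhibition formulas for the DPPH and ABTS assays, and the conversion of plate counts to log CFU/mL, reduce to simple arithmetic. The sketch below assumes the standard form (A_control - A_sample)/A_control x 100 implied by the variable definitions above; the numbers are illustrative only.

```python
import math

def percent_inhibition(a_control: float, a_sample: float) -> float:
    """Radical-scavenging capacity (%), assumed as (A_control - A_sample)/A_control x 100;
    applied identically to the DPPH and ABTS absorbance readings described above."""
    return 100.0 * (a_control - a_sample) / a_control

def log_cfu_per_ml(colonies: int, dilution_factor: float, plated_volume_ml: float = 1.0) -> float:
    """Convert a plate count in the countable range (10-100 colonies) to log10(CFU/mL)."""
    cfu_per_ml = colonies * dilution_factor / plated_volume_ml
    return math.log10(cfu_per_ml)

# Illustrative values only
print(round(percent_inhibition(0.82, 0.45), 1))   # ~45 % inhibition
print(round(log_cfu_per_ml(76, 1e2), 2))          # 76 colonies at a 10^-2 dilution -> ~3.88
```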
The yeast and molds were differentiated following the instructions of the 3M™ Petrifilm™ interpretation guide. Yeasts grow as small colonies with a defined edge, showing color ranges from pink-tan to blue-green color without a center focus (dark center), while molds grow as large colonies, with a diffuse edge and flat, exhibiting a wide variety of colors and center focus (dark colors). The number of colony-forming units (CFU) was expressed as logarithmic cycles (log (CFU/mL)). Statistical Analysis The results from centrifugal sedimentation, total carotenoids, total phenolic compounds, antioxidant capacity and microbiological inactivation were analyzed by one way ANOVA with a statistical significance level α = 0.05. The post-hoc analysis was performed using the Fisher's least significant difference (LSD) test. All analytical determinations were performed in triplicate. All statistical analyses were performed in MINITAB software version 19 (Minitab Inc., State College, PA, USA). Centrifugal Sedimentation The effect of HIUS processing of melon juice on the percentage of centrifugal sedimentation is shown in Figure 1. The HIUS processing promoted a significant decrease in the centrifugal sedimentation of melon juice compared with pasteurization processing or control juice (p > 0.05). In particular, centrifugal sedimentation from those HIUS juices ranged from 12.1% to 14.2% while for pasteurized and control juices were 14.6% and 14.5%, respectively. For longer times of HIUS processing (30 min), the centrifugal sedimentation is more efficiently reduced when working at lower duty cycles (30%). These results are concomitant with the study performed by Shen et al. [14], who evaluated the centrifugal sedimentation in apple juice treated with temperature-controlled ultrasound. These authors observed that the centrifugal precipitation of apple juice was reduced by around 35% with temperature-controlled ultrasound processing. Likewise, Rojas et al. [31] observed that the application of high-intensity ultrasound (>790 W/cm 2 ) for long times (>6 min) avoided the pulp sedimentation in peach juice. The decreasing of centrifugal sedimentation in fruit juices treated with HIUS has been associated with the reduction of the particle size of fibrous material from juice, as a consequence of the shear forces generated by the high-intensity US [14,31]. Thus, the HIUS processing could improve the physicochemical stability of melon juice during storage by the reduction of the centrifugal sedimentation, extending its shelf-life. Color Change Color attributes are considered an important standard by which to evaluate the quality of fruit juice or related products since it affects the consumers' acceptance [32]. Thus, the evaluation of possible color changes attributed to HIUS processing was carried out. Figure 2 shows the results of color change (∆E) of pasteurized and sonicated melon juice taking unprocessed melon juice as a reference. It can be seen that the processing, either pasteurization or sonication, promotes some difference in the color of the juices, denoting a higher ∆E in the sonicated samples. The ∆E ranged from 1.5 up to~3 in HIUS processed juice, whereas in pasteurized juices this value was~1.2. Interestingly, the lowest ∆E of sonicated juices (~1.5) was obtained in the juice processed at 27 W/cm 2 for 10 min at 75% of duty cycle. Further, the color of this HIUS juice was similar to pasteurized juice (65 • C-30 min) (p > 0.05). 
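The Statistical Analysis subsection above specifies a one-way ANOVA followed by Fisher's LSD post-hoc test, performed in Minitab. A rough Python equivalent is sketched below on hypothetical triplicates; Fisher's LSD is implemented here as unadjusted pairwise t-tests using the pooled within-group variance, applied only when the overall ANOVA is significant.

```python
import numpy as np
from scipy import stats

# Hypothetical triplicates for three treatments (not data from the study).
groups = {
    "control": np.array([14.5, 14.4, 14.6]),
    "pasteurized": np.array([14.7, 14.5, 14.6]),
    "HIUS_52Wcm2_30min_30pct": np.array([12.0, 12.3, 12.1]),
}

values = list(groups.values())
f_stat, p_anova = stats.f_oneway(*values)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Fisher's LSD: pairwise t-tests with the pooled within-group variance (MSE),
# performed only if the ANOVA itself is significant at alpha = 0.05.
if p_anova < 0.05:
    k = len(values)
    n_total = sum(len(v) for v in values)
    df_error = n_total - k
    mse = sum(((v - v.mean()) ** 2).sum() for v in values) / df_error
    names = list(groups)
    for i in range(k):
        for j in range(i + 1, k):
            a, b = values[i], values[j]
            t = (a.mean() - b.mean()) / np.sqrt(mse * (1 / len(a) + 1 / len(b)))
            p = 2 * stats.t.sf(abs(t), df_error)
            print(f"{names[i]} vs {names[j]}: t = {t:.2f}, p = {p:.4f}")
```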
It is important to point out that color change from those juices treated at 27 W/cm 2 for 10 min at 30% of duty cycle and 27 W/cm 2 for 30 min at 75% of duty cycle cannot be seen by the naked eye since ∆E was lower than 2 [33] while observable color changes (∆E ≥ 2) were obtained with the rest of HIUS conditions. The increase of ∆E by the effect of HIUS has also been observed in other fruit juices, such as kiwi juice [33] and apple juice [14], among others. Previously, Costa et al. [34] observed that high-intensity ultrasound promoted high color stability in pineapple juice which has been attributed to the lower polyphenol oxidase activity and the lower availability of oxygen in sonicated samples as a consequence of the liquid degasification process. Although the preservation of color characteristics as result of the inactivation of degradative enzymes by US has been widely reported [10,23,34], the minimal color changes observed in this study could also be associated with the release of antioxidant compounds into juice as a consequence of the breakdown of the cell walls caused by the cavitation phenomenon [35,36]. Total Carotenoids The effect of HIUS processing of melon juice on total carotenoid content is shown in Figure 3. The results show a significant increase in the concentration of total carotenoids in processed samples, either pasteurization or ultrasound, compared with the control juice (23.6 µg/g). The concentration of total carotenoids in pasteurized juice was 26.3 µg/g, while in HIUS processed juices varied from 26-32.5 µg/g. The highest concentration was obtained at 27 W/cm 2 for 10 min at 30% of duty cycle, whereas the lowest content was observed at 52 W/cm 2 at 75% of duty cycle during 10 min. Previously, Abid et al. [37] observed that carotenoids increased from~1.22 up tõ 1.55 µg/mL when apple juice was treated with ultrasound at 2 W/cm 2 for 60 min. Recently, Ordóñez-Santos et al. [38] used ultrasound technology as an alternative treatment to Cape gooseberry (Physalis peruviana L.) juice processing. These authors observed that the concentration of carotenoids increased up to 90% when juice was treated at 210 W for 40 min. It has been observed that carotenoid-rich juices exhibit high concentrations of carotenoids when they are processed using HIUS. This increase has been associated, on the one hand, with carotenoids' migration into juice as a consequence of the cell wall disruption promoted by the cavitation phenomenon, and, on the second hand, with the inactivation of degradative enzymes responsible for the browning in fruit juices, such as polyphenol oxidase [23,37,39,40], as a result of HIUS application. Figure 4 shows the results of the content of TPC of the melon juice treated with HIUS. As can be seen, the ultrasound processing promoted the significant reduction of the TPC of melon juice (p < 0.05), being most noticeable at higher intensities (52 W/cm 2 ). The TPC from the control juice was around 1.1 mg GAE/g dm while in sonicated juices varied from 0.75 up to 0.90 mg GAE/g dm. Interestingly, juices processed at 75% of duty cycle exhibited lower TPC than those juices processed at 30% of duty cycle. It is important to note that those juices processed at 27 W/cm 2 at 30% of duty cycle exhibited similar TPC content as pasteurized juice (0.9 mg GAE/g dm) (p > 0.05). The reduction of total phenolic compounds as a consequence of HIUS application has been previously observed by other authors [10,41,42]. For instance, Fonteles et al. 
[10] observed that the TPC from melon juice treated with US were reduced by between~15% and~35%. Likewise, Keenan et al. [42] reported that phenolic compounds from a fruit smoothie (banana, apple, strawberry and orange) underwent an important reduction (~25%) as a consequence of US processing. Overall, the degradation of phenolic compounds as a result of HIUS treatment has been associated, on the one hand, with the generation of free radicals, such as -OH by the cavitation phenomenon [43], and on the second hand, with the increase of temperature when long processing times (>10 min) are used [41]. In this study, the TPC decrease could be associated with the extended exposition of unbound polyphenols to acoustic energy since it has been observed that US processing facilities the release of bound phenolic compounds due to cell disruption, being more susceptible to acoustic degradation [20,23]. Identification of Individual Phenolic Compounds by HPLC-DAD In order to gain more insight into the phenolic compounds present in melon juice, methanolic extracts were subjected to HPLC-DAD analysis. Two bioactive compounds were identified: gallic acid and syringic acid. The effect of HIUS on the concentration of gallic acid (GA) and syringic acid (SA) in melon juice is shown in Figure 5. As can be seen, the HIUS promoted significant changes in the concentration of these bioactive compounds (p < 0.05). In particular, HIUS processing increased the GA content significantly (p < 0.05), reaching up to~180 µg/g dm whereas pasteurization reduced it by~25% regarding to control (~70 µg/g dm) (p < 0.05). Previously, Wang et al. [35] observed that GA content from strawberry juice increased from 0.12 mg/mL up to 0.29 mg/mL when juice was sonicated at 400 W for 16 min at 50% of duty cycle. These authors exposed that the increase in GA concentration might be the result of the attachment of hydroxyl radicals produced by US, to the aromatic ring [35] which could explain the increase of GA in this study. It is important to highlight that GA, one of the most important hydroxybenzoic acids and widely distributed in plants, has shown several biological activities in many diseases, including cardiovascular diseases, cancer, neurodegenerative disorders, and aging [44,45]. In fact, previous studies have revealed that GA exerts a protective role against kidney damage [46] and cerebral ischemic injury [47] due to its strong binding affinity on targeted noxious compounds which has been attributed to its antioxidant and anti-inflammatory activities [48]. On the other hand, SA was around 20 µg/g in control juice whereas in pasteurized juice it decreased to~10 µg/g (p < 0.05). Interestingly, SA increased significantly as a consequence of HIUS processing (p < 0.05), ranging from~27 up to~60 µg/g. Noticeably, the juice processed at 27 W/cm 2 for 30 min showed the lowest SA content, <35 µg/g of the samples treated with HIUS. On the other hand, in the other HIUS processed juices, SA content was higher than 40 µg/g (p < 0.05). Similar concentrations of SA have been previously observed in melon dehydrated, accounting for between~21 µg/g in melon infrared-dehydrated, up to 52 µg/g when the melon was oven-dehydrated [3]. Thus, these results suggest the release of this type of bioactive compounds from inner cell into juice, confirming the rupture of cell walls by the shear forces generated by HIUS [35,36]. 
In the last years, SA (4-hydroxy-3,5-dimethoxybenzoic acid), a phenolic compound from the dimethoxybenzene subfamily of benzoic acids, has been the object of several studies mainly due to the functional properties related to this bioactive compound, such as strong antioxidant [49], anti-microbial [50] anti-osteoporotic [51], and anticancer [52,53] activities. In fact, recent studies have demonstrated that SA reduces cell proliferation, induces apoptosis, and alters autophagy by the modulation of the oxidative stress and DNA damage resulting in an effective therapeutic strategy in the treatment of several diseases, such as cancer or Alzheimer's disease [52][53][54][55]. It is worthy to highlight that melon juice processed by HIUS could become considered a functional food due to bioactive compounds enrichment by the high concentrations of this phenolic acids and the carotenoids. Antioxidant Activity of Melon Juice The results of the effect of HIUS processing on the antioxidant capacity of melon juice are shown in Table 1. As can be seen, HIUS processing promoted a significant increase in the antioxidant capacity tested by DPPH, ABTS and FRAP assays (p < 0.05). In particular, HIUS processing increased the DPPH scavenging capacity of melon juice, reaching up tõ 46% in juice processed at 27 W/cm 2 for 30 min at 75% of duty cycle whereas in pasteurized and control juice was~39 and~29%, respectively. On the other hand, the ABTS free radical scavenging of melon juice processed by HIUS ranged from~20% up to~31% whereas in control and pasteurized juice was~15% and~9%, respectively (p < 0.05). Regarding to the antioxidant capacity by FRAP assay, the control and pasteurized juice exhibited values of~40 and~62, whereas in HIUS processed juices the antioxidant capacity reached values >110 mg TE/100 g dm (see Table 1). The increase in the antioxidant capacity of fruit juices as a consequence of the sonication process has been widely reported by several authors [36,40,56]. Santhirasegaram et al. [40] observed the increase of the antioxidant capacity determined by DPPH, from 84.1% to 91.15%, and FRAP, from 360.71 up to 437.14 µg Ascorbic acid equivalent/mL, when mango juice was processed by ultrasound (<75 W) for 30 min. Likewise, Wang et al. [36] observed that ultrasound processing increased the antioxidant and radical scavenging capacities of kiwifruit juice. They observed that antioxidant activity by FRAP assay increased from 185.6 µmol/100 mL to 307.45 µmol/100 mL whereas the radical scavenging capacity by DPPH inhibition, was 1.6-fold higher than control (28.45%). Most of these authors [14,32] have attributed the increase of antioxidant capacity to the addition of a second hydroxyl group to the ortho-or para-positions to the aromatic ring of phenolic compounds during sonochemical reactions. Nevertheless, the enhancement of the antioxidant capacity observed in this study could be explained by the increase of individual phenolic compounds which have exhibited exceptionally high antioxidant activity, such as syringic acid (IC 50 = 0.043 mM) [57] as well as its possible interaction with carotenoids [58]. It is important to point out that phenolic acids have become to exhibit better performance as free radical scavengers than antioxidant enzymes [57]. Microbiological Inactivation Fruit juices have been considered a good alternative to increasing the consumption of fruits and vegetables [59]. 
Nevertheless, fruit juices are highly susceptible to contamination mainly by bacteria and some yeasts, either from contamination of fruit in the field or during its processing [60,61]. Microbial contamination can lead to the generation of off-flavors as well as render the products unsafe for direct consumption, due to the production of bio-toxins [62]. Thus, the effectiveness of HIUS processing to reduce the initial microbial load of melon juice was assessed. The effect of HIUS processing on the aerobic bacterial count, coliform count, yeasts and molds count of melon juice is shown in Figure 6. As can be seen, the aerobic bacteria and coliforms accounted for around 3.88 log CFU/mL and 3.54 log CFU/mL, respectively, in the control juice. Interestingly, HIUS processing promoted a reduction between~10 and 50% for aerobic bacteria (Figure 6a) and 100% for coliforms (Figure 6b), whereas the pasteurization showed the total inactivation for both cultures. Particularly, the highest inactivation of aerobic mesophiles by HIUS was obtained when melon juice was processed at 52 W/cm 2 regardless duty cycle. On the other hand, yeasts were significantly reduced by HIUS processing, reaching a reduction between 12% and 20% comparing with the control juice (3.4 log CFU/mL) (p < 0.05) whereas the pasteurized juice exhibited a reduction around 38% (see Figure 6d). Regarding molds count, HIUS processing promoted significant reduction (p < 0.05) while no significant difference was observed between control and pasteurized juice (2.8 log CFU/mL) (p > 0.05). Interestingly, molds count decreased to 2.0 log CFU/mL when melon juice was processed at 52 W/cm 2 for 30 min and duty cycle of 30% (p < 0.05) (see Figure 6c). It is important to mention that FDA has recognized ultrasound as a potential innovative technology for microbial inactivation in the fruit juices industry [11,63]. In fact, the reduction of the microbial load of fruit juices processed with US has been reported by several authors [24,25,40,62,64]. It has been observed that ultrasound was able to reduce below 2 log CFU/mL the natural microbiota of strawberry juice [24,25] whereas in mango juice, reduction of 26% and 100% aerobic mesophiles and coliforms, respectively, have been reported [40]. The effectiveness of ultrasound treatments depends, on the one hand, on the physicochemical and composition of the juice ( • Bx, pH, titratable acidity), and on the other hand, on the resistance of microorganisms or the presence of bacteria or fungi spores [65]. In particular, fungi exhibit higher resistance compared with bacteria due to the cell wall composition [66]. Interestingly, the reduction observed by US processing has been attributed to the generation of mechanical shocks which lead to the destruction of the cell-wall yeasts by a depolymerization effect resulting in the lysis of cells and inactivation of certain enzymes [5]. Some authors have exposed that the lethal effect of ultrasound is increased when combined with other technologies, such as heat-treatment [24,25,39,64]. In fact, it has been reported that sonication at 20 • C reduces around 0.5 log CFU/mL in mesophiles, and up to 1 log CFU/mL in yeasts and molds whereas the complete inactivation of natural microbiota of apple juice was achieved when this was processed with US at 60 • C for 5 min (25 kHz and 70% power) [64]. Furthermore, Yildiz et al. 
[24,25] observed that ultrasound at mild temperature (55 • C for 3 min) kept below 2 log CFU/mL for 42 days at 4 • C the microbial load of strawberry juice as in the high hydrostatic pressure (300 MPa for 1 min) and thermal pasteurization (72 • C for 15 s) processing. However, the application of heat-treatments in the melon juice processing promotes the generation of cooked off-flavors components limiting the combination of US and heat-processing [4,67,68]. Conclusions The physical stability, the bioactive compounds and microbial load of melon juice processed by HIUS were evaluated. Overall, the application of HIUS to the processing of melon juice improved the physical appearance, the content of bioactive compounds and the microbial quality of the juice. Thus, the HIUS enhanced the physical appearance of melon juice by reducing the sedimentation pulp, minimizing color change. Further, melon juice processed by HIUS was enriched with bioactive compounds by increasing the total carotenoids and the occurrence of gallic and syringic acids, augmenting the antioxidant capacity of the juice. It is important to highlight that melon juice processed by HIUS might be considered a functional food since gallic and syringic acids-two phenolic acids-have been involved in the prevention of diverse pathologies. Additional to this, HIUS treatment resulted in an effective procedure for the inactivation of microorganisms, specifically for coliforms and mold strains. These results demonstrate that HIUS might be a good technological alternative for the processing of thermal-sensitive fruit juices. Nevertheless, further studies are required to evaluate the effect of HIUS processing on the stability of the bioactive compounds of melon juice during storage and the sensorial characteristics.
2022-09-03T15:08:32.747Z
2022-08-31T00:00:00.000
{ "year": 2022, "sha1": "41543a03db953031860eaffbfc5d147f94a30b8a", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2304-8158/11/17/2648/pdf?version=1661957511", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "596a147b9c90573207417a055e659c0cbab2cfef", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
53070019
pes2o/s2orc
v3-fos-license
Spatial mapping of the electron eigenfunctions in InAs self-assembled quantum dots by magnetotunneling We use magnetotunnelling spectroscopy as a non-invasive probe to produce two-dimensional spatial images of the probability density of an electron confined in a self-assembled semiconductor quantum dot. The images reveal clearly the elliptical symmetry of the ground state and the characteristic lobes of the higher energy states. I. INTRODUCTION Quantum dots (QDs) are characterised by a relatively small number of electrons confined within an island of nanometer dimensions. They can confine the motion of an electron in all three spatial dimensions [1]. The strong confinement in the QD gives rise to a set of discrete and narrow electronic energy levels similar to those in atomic physics. The epitaxial growth of lattice-mismatched InAs on GaAs or AlAs opens new possibilities for the simple fabrication of semiconductor nanostructures. InAs QDs are formed in situ during growth due to the relaxation of a strained InAs wetting layer on GaAs or AlAs [2]. The particular interest lies in their uniformity and small size: a lateral dimension of 10−20 nm and a height of 3−4 nm. Several theoretical approaches have been used to calculate the eigenstates of InAs QDs [3]. The results of these calculations depend strongly on the assumed shape and composition of the QDs. Experimentally, the quantized energy levels of a given potential can be probed using various spectroscopic techniques. The corresponding wave functions are much more difficult to measure. Information about the extent of the carrier wavefunction for the ground state of a QD was obtained from tunneling measurements in a magnetic field [4]. Also, the anisotropy of the electronic wave function in self-aligned InAs QDs was deduced from magnetic-field-dependent photoluminescence spectroscopy [5]. However, until recently there have been no reported measurements of the detailed spatial form of the wavefunctions of the ground and excited states of the QD. Recently, it has been demonstrated that magnetotunneling spectroscopy can be employed as a non-invasive probe to produce images of the probability density of an electron confined in a QD [6]. In this work, we use magnetotunneling spectroscopy (MTS) to investigate in detail the spatial form of the wavefunctions of the electron states of a double-barrier resonant tunnelling diode with InAs QDs embedded in the centre of a GaAs quantum well. We measure the resonant tunnelling current through the QD states as a function of magnetic field, B, applied perpendicular to the tunnelling direction. This allows us to map out the full spatial form of the probability density of the ground and excited states of the QDs. The electron wavefunctions have a biaxial symmetry in the growth plane, with axes corresponding quite closely (within a measurement error of 15°) to the main crystallographic directions X − [011] and Y − [233] for the (311)B substrate orientation. For a similar InAs QD structure grown on a (100) substrate we also obtained characteristic probability density maps of the ground and excited states. II. EXPERIMENTAL DETAILS The InAs QDs are embedded in an n−i−n resonant tunnelling diode. The samples were grown by molecular beam epitaxy on GaAs substrates with (100) and (311)B orientations.
A layer of InAs QDs, nominally 2.3 monolayer thick, was placed in the centre of a 9.6 nm wide GaAs quantum well (QW) with 8.3 nm Al 0.4 Ga 0.6 As confining barriers, sandwiched between two nominally-undoped 50 nm GaAs spacer layers. The intrinsic region is surrounded by graded n-type contact layers, with the doping concentration increasing from 2 × 10 17 cm 3 to 3 × 10 18 cm −3 . The layers were grown at 600 o C, and there was a growth interrupt before the QDs were grown at 480 o C. For comparison, we also studied two control samples grown with the same sequence of layers, except one has only a thin InAs two-dimensional wetting layer (i.e. containing no QDs) and the other has no InAs layer at all. The samples were processed into circular mesa structures of diameters between 50 µm and 200 µm, with ohmic contacts to the doped regions. Figure 1 shows a schematic energy band diagram for our device under bias voltage. X and Y define the two main crystallographic axis in the plane perpendicular to growth direction, Z (see inset). The layer of InAs QDs introduces a set of discrete electronic states below the GaAs conduction band edge. At zero bias voltage, equilibrium is established by electrons diffusing from the doped GaAs layers and filling some dot states. The resulting negative charge in the QW produces depletion layers in the regions beyond the (AlGa)As barriers. By applying a bias voltage to the emitter layer, V, the QD energy level is shifted in energy with respect to both contacts. When a particular dot state is resonant with an adjacent filled state in the biased electron emitter layer, electrons tunnel through the dot into the collector and a current flows as shown schematically in Fig.1. Therefore, as we adjust the voltage we can study different energy states of the QDs. At sufficiently high voltages we are able to observe two separate resonances in the current related to confined subbands of the QW states. [4,6,[8][9][10][11][12] and, although not fully understood, is probably related to the limited number of conducting channels in the emitter that can transmit electrons from the doping layer to the quantum dots at low bias. There is no reason to believe that the dots studied are atypical of the distribution as a whole. On increasing the temperature to 4.2 K, the main peaks are still prominent, but much weaker features, which may be related to density-of-state fluctuation in the emitter [13], are strongly suppressed. A key observation is that many peaks look similar so we cannot tell if the peaks are due to tunneling through the states of a single dot or several dots. In the following, we will concentrate on three voltage regions labelled ("A"), ("B") and ("C"). We will focus on the magnetic field dependence of the QD resonances and on how this provides detailed information about the form of the wavefunction associated with an electron in the QD state. . The amplitude of each resonance exhibits a strong dependence on the intensity of B. In particular, with increasing B, the low-voltage resonances "A" decrease steadily in amplitude, whereas the others, "B" and "C", have a nonmonotonic magnetic field dependence. The Figure 3(b) and (c) show clearly two characteristic types of magnetic field dependence: type "A" shows a maximum on G(B) at B = 0 T followed by a steady decay to zero at around 8T ; type "B" shows a broad maximum at ∼ 4.5 T , followed by a gradual decay to zero. We observe a clear anisotropy in the dependence of I(V ) on B for the two field orientations. 
As can be clearly seen in Fig.4(a) peaks "A" and "B" in the I(V ) plot reveal a strong anisotropy of about ρ ∼ 0.5. We have also determined angular dependence of the peaks. The results are plotted in Fig.4(b) for peaks "A" and "B". Note that all peaks, observed over the bias range (∼200 mV) have a maxima in current amplitude at orientation of a field B [011] or B [233]. The main effect to be noted from Fig.4 is the dependence of the current as a function of the in-plane magnetic field orientation. We can understand the magnetic field dependence of the features in terms of the effect of B on a tunnelling electron. Let α, β, and Z indicate, respectively, the direction of B, the direction normal to B in the growth plane (X, Y ), and the normal to the tunnel barrier, respectively (see Figure 1 (b)). When an electron tunnels from the emitter into the dot, it acquires an additional in-plane momentum given by [14] where ∆s is the effective distance tunnelled along Z. This effect can be understood semiclassically in terms of the increased momentum along β, which is acquired by the tunnelling electron due to the action of the Lorentz force. In terms of mapping out the spatial form of an electronic state, we can envisage the effect of this shift in as analogous to that of the displacement, in real space, of the atomic tip in a STM imaging measurement. The applied voltage allows us to tune resonantly to the energy of a particular QD state. Then, by measuring the variation of the tunnel current with B, we can determine the size of the matrix element that governs the quantum transition of an electron as it tunnels from a state in the emitter layer into a QD. In our experiment, the tunnelling matrix element is most conveniently expressed in terms of the Fourier transforms Φ i,f (k) of the conventional real space wavefunctions [14,15]. Here the subscripts i and f indicate the initial (emitter) and final (QD) states of the tunnel transition. Relative to the strong spatial confinement in the QD, the initial emitter state has only weak spatial confinement. Hence, in k-space corresponds to a sharply peaked function with a finite value only close to k = 0. Since the tunnel current is given by the square of the matrix element involving Φ i (k) and Φ QD (k), the narrow spread of k for Φ i (k) allows us to determine the form of Φ QD (k) by varying B and hence k according to (1). Thus by plotting G(B) for a particular direction of B we can measure the dependence of |Φ QD (k)| 2 along the k-direction perpendicular to B. Then, by rotating B in the plane (X, Y) and making a series of measurements of I(B) with B set at regular intervals ( ∆θ ∼ 5 o ) of the rotation angle θ, we obtain a full spatial profile of |Φ QD (k X , k Y )| 2 . This represents the projection in kspace of the probability density of a given electronic state confined in the QD. The model provides a simple explanation of the magnetic field dependence of the resonant current features "A-C". In particular, the forbidden nature of the tunnelling transition associated with "B" at B = 0 T is due to the odd parity of the final state wavefunction, which corresponds to the first excited state of a QD. The applied magnetic field (i.e. the Lorentz force) effectively breaks the mirror symmetry at B = 0 and thus makes the transition allowed. FIG. 5. Distribution in the plane (kX , kY ) of the differential conductance, G = dI/dV , for three representative states. 
This provides a spatial map of |ΦQD(kX, kY)|², the square of the Fourier transform, ΦQD(kX, kY), of the probability density of the electron confined in the dot. X and Y define the two main crystallographic axes, [011] and [233], respectively, in the (311)-oriented GaAs plane. Figure 5 shows the spatial form of G(B) ∼ |ΦQD(kX, kY)|² in the plane (kX, kY) for the two representative QD states corresponding to the labelled features in Fig. 3(b) and (c). The k-values are estimated from relation (1), assuming ∆s has a nominal value of 30 nm, which we estimate from capacitance measurements and from the doping profile and composition of the device. The contour plots reveal clearly the characteristic form of the probability density distribution of a ground-state orbital and the characteristic lobes of the higher-energy states of the QD. The electron wavefunctions have a biaxial symmetry in the growth plane, with axes corresponding quite closely (within a measurement error of 15°) to the main crystallographic directions X − [011] and Y − [233] for the (311)B substrate orientation. For a similar InAs QD structure grown on a (100) substrate we also obtained characteristic probability density maps of the ground and excited states. To summarise, we have observed features in I(V) corresponding to resonant tunnelling through a limited number of discrete states whose wavefunctions display the symmetry of the ground and excited states of quantum dots. With the simple device configuration we have used, it is not possible to tell whether an excited-state feature and a ground-state feature correspond to the same quantum dot. This question could be resolved by new experiments on structures with electrostatic gates. IV. CONCLUSION In conclusion, we have shown how magnetotunnelling spectroscopy provides a new means of probing the spatial form of the wavefunctions of electrons confined in quantum dots. The study revealed the biaxial symmetry of the QD states in the growth plane. We observed the elliptical shape of the ground state and the characteristic lobes of the higher-energy states.
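Relation (1) referred to in the text above was lost in extraction; in magnetotunnelling work of this kind the in-plane momentum acquired by the tunnelling electron is usually written as ∆k = eB∆s/ℏ. That form is assumed in the sketch below, together with the nominal ∆s = 30 nm quoted above, to indicate the k-scale probed at the fields used in the experiment.

```python
# Sketch of the k <-> B mapping assumed for relation (1): Delta_k = e * B * Delta_s / hbar,
# the semiclassical momentum kick imparted by the Lorentz force during tunnelling.
E_CHARGE = 1.602176634e-19   # C
HBAR = 1.054571817e-34       # J*s
DELTA_S = 30e-9              # m, nominal tunnelling distance quoted in the text

def k_from_field(b_tesla: float) -> float:
    """In-plane wavevector (per metre) acquired by an electron tunnelling a distance DELTA_S."""
    return E_CHARGE * b_tesla * DELTA_S / HBAR

for b in (1.0, 4.5, 8.0):
    k_per_nm = k_from_field(b) * 1e-9
    print(f"B = {b:4.1f} T  ->  k = {k_per_nm:.3f} nm^-1")
# At ~8 T the probed k (~0.4 nm^-1) is comparable to 2*pi / (dot size ~15 nm),
# consistent with the resonance amplitudes decaying on this field scale.
```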
2018-10-07T21:39:32.716Z
2001-06-29T00:00:00.000
{ "year": 2001, "sha1": "655e23cd45022d4d51a442182bdde6fbdc2ffadb", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "e77a46db43009df833cd635c160a13fe26f99a16", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
51638892
pes2o/s2orc
v3-fos-license
Caveolin-1 enhances RANKL-induced gastric cancer cell migration The classical pathway involving receptor activator of nuclear factor-κB (RANK) and its ligand (RANKL) induces the activation of osteoclasts and the migration of a variety of tumor cells, including breast and lung cancer. In our previous study, the expression of RANK was identified on the surface of gastric cancer cells, however, whether the RANKL/RANK pathway is involved in the regulation of gastric cancer cell migration remains to be fully elucidated. Lipid rafts represent a major platform for the regulation of cancer signaling; however, their involvement in RANKL-induced migration remains to be elucidated. To investigate the potential roles and mechanism of RANKL/RANK in gastric cancer migration and metastasis, the present study examined the expression of RANK by western blot analysis and the expression of caveolin-1 (Cav-1) in gastric cancer tissues by immunohistochemistry, in addition to cell migration which is measured by Transwell migration assay. The aggregation of lipid reft was observed by fluorescence microscopy and western blotting was used to measure signaling changes in associated pathways. The results showed that RANKL induced gastric cancer cell migration, accompanied by the activation of Cav-1 and aggregation of lipid rafts. Nystatin, a lipid raft inhibitor, inhibited the activation of Cav-1 and markedly reversed RANKL-induced gastric cancer cell migration. The RANKL-induced activation of Cav-1 has been shown to occur with the activation of proto-oncogene tyrosine-protein kinase Src (c-Src). The c-Src inhibitor, PP2, inhibited the activation of Cav-1 and lipid raft aggregation, and reversed RANKL-induced gastric cancer cell migration. Furthermore, it was demonstrated that Cav-1 was involved in RANKL-induced cell migration in lung, renal and breast cancer cells. These results suggested that RANKL induced gastric cancer cell migration, likely through mechanisms involving the c-Src/Cav-1 pathway and lipid raft aggregation. Introduction Tumor metastasis significantly affects the prognosis of patients with gastric cancer, and is the primary cause of treatment failure (1). Mechanisms of tumor metastasis are complex and the tumor microenvironment, enriched in cytokines, growth factors and tumor cell-derived vesicles, is key in its pathophysiology. Receptor activator of nuclear factor-κB ligand (RANKL), an important cytokine belonging to the tumor necrosis factor (TNF) family, promotes osteoclast maturation and migration. In addition to being secreted by osteoclast cells, previous studies have revealed that RANKL is secreted by infiltrating T cells; whereas RANK is expressed on the surface of various cancer cells, including breast, renal and lung cancer cells (2)(3)(4)(5)(6). According to our previous study, RANK is also expressed in gastric cancer cells (7), and infiltrating T cells have been found to be abundant in gastric cancer tissues (8,9). Collectively, these studies indicate that RANKL may also promote gastric cancer cell migration, although there is no supporting data at present. Lipid rafts, comprised of assemblies of cholesterol, sphingolipids and certain types of proteins, form sorting platforms for targeted proteins (10) and are essential in a variety of signaling processes, including cell migration, through the regulation of proteins located in the cell membrane (11,12). 
Lipid rafts are reported to be able to control human melanoma cell migration by regulating focal adhesion disassembly (13), and promote breast cancer cell migration by restricting interactions between CD44 and ezrin (14). A previous study showed lipid rafts to be critical for RANK functions in osteoclasts (15). Based on this, it was hypothesized that lipid rafts may be involved in RANKL-induced cancer cell migration. Caveolin-1 (Cav-1), a pivotal component of lipid rafts, is a membrane-bound scaffolding protein that regulates signal transduction (16). The role of Cav-1 in cancer remains controversial; it can regulate a number of metastatic cancer cells, either negatively or positively. Cav-1 reportedly inhibits cell migration and invasion via the suppression of epithelial-mesenchymal transition in pancreatic cancer cells (17), and has been shown to reduce the metastatic capacity of colon cancer cells (18). By contrast, the expression of Cav-1 appears to be increased in prostate tumors, lung cancer, melanoma cells and renal cell carcinoma (18)(19)(20)(21), thereby favoring tumor progression and migration (22). RANKL induces the expression of Cav-1, which is immediately conveyed to lipid rafts to promote osteoclastogenesis (23). As there has been no previous study reporting the effect of Cav-1 on RANKL-induced cell migration, the present study aimed to identify the potential roles and mechanisms of RANKL/RANK in gastric cancer cell migration and metastasis. The results indicated that the proto-oncogene tyrosine-protein kinase Src (c-Src)/Cav-1 pathway and lipid raft aggregation may be the primary mechanisms involved in RANKL-induced gastric cancer cell migration. Transwell assay. The cells were pretreated with appropriate solvent control (dimethyl sulfoxide) or various concentrations of inhibitors (PP2: 10 µM; Nystatin: 50 µg/ml) for 60 min in serum-free media. The treated cells were plated in the upper insert of a 24-well chemotaxis chamber (2x10 4 cells/well; 8-µm pore size; Corning Inc., Corning, NY, USA) in serum-free medium. Medium containing 2.5% serum (0.5 ml) and recombinant RANKL (1 µ1), with DMSO or inhibitors, was added to the bottom well and incubated for 24 h. The porous inserts were carefully removed, and the cells was stained and counted at x200 magnification (Olympus Corp., Tokyo, Japan) in at least five different fields of each filter. Fluorescence microscopy. The MGC803 cells were first treated with PP2 or nystatin for 1 h, and then RANKL was added at a final concentration of 1 µg/ml for 10 min. The cells were fixed in 4.4% paraformaldehyde for 20 min, permeabilized with 0.2% Triton X-100 for 15 min, and then blocked with 5% bovine serum albumin (BSA; Sigma-Aldrich, Merck KGaA) for 1 h. The slides were incubated with CTXB antibody or anti-RANK antibody for 1 h and then with FITC-conjugated goat anti-mouse or anti-rabbit IgG were added for 1 h. Images were captured with a fluorescence microscope (Olympus Corp.). Surface RANK expression analysis. Surface RANK expression was determined by flow cytometry as previously described (24). The following antibodies were used: Mouse anti-RANK (1:500; mouse monoclonal; cat. no. MAB683; R&D Systems, Minneapolis, MN, USA) or isotype control (R&D Systems), FITC-conjugated anti-mouse secondary antibody (1:200; mouse monoclonal; cat. no. sc-2356; Santa Cruz Biotechnology). Immunohistochemistry. Formalin-fixed paraffin-embedded tumor specimens were collected from the Department of Pathology at the First Hospital of China Medical University. 
The immunohistochemical staining observed with Olympus microscope (Olympus Corp.) was performed using the biotinstreptavidin method (UltraSensitive S-P kit; MaixinBio, Shanghai, China) as previously described (26). Two observers, who had no prior information of the clinical or pathological parameters, performed the evaluation of results independently. The immunoreactivity was scored based on the intensity of staining (negative, 0; weak, 1; moderate, 2; strong, 3). Statistical analysis. The experimental data are summarized and presented as the mean ± standard deviation. The significance of differences was analyzed statistically using Student's two-tailed t-test, P<0.05 was considered to indicate a statistically significant difference. Each experiment was repeated at least three times. Statistical analyses were performed using the SPSS statistical package software (SPSS for Windows, version 20.0; IBM Corp., Armonk, NY, USA). RANKL induces the migration of gastric cancer cells via phosphoinositide 3-kinase (PI3K)/Akt and ERK pathways. The western blot analysis revealed the expression of RANK in MGC803, BGC823 and SGC7901 cell lines. Stimulation of the MGC803 and SGC7901 cells with 1.0 µg/ml RANKL significantly increased cell migration by 63.8 and 56.3%, respectively (Fig. 1B). As RANKL had no effect on the proliferation of MGC803 or SGC7901 cells (data not shown), the increased number of MGC803 and SGC7901 cells traversing the filter may have resulted from increased migratory abilities. The downstream signaling of RANKL/RANK was also examined in BGC803 cells; Akt and ERK were markedly increased in response to RANKL treatment (Fig. 1C). Therefore, the RANKL/RANK pathway appeared to be significantly involved in the migration of gastric cancer cells. Lipid rafts are involved in RANKL-induced migration. Lipid rafts represent a major platform for signaling regulation in cancer. To examine the involvement of lipid rafts in RANKL-induced gastric cancer cell migration, the MGC803 cells were pretreated with nystatin, a lipid raft inhibitor, for 1 h, followed by RANKL treatment for 10 min. The immunofluorescence indicated that RANKL significantly induced lipid raft aggregation, which was reversed by nystatin ( Fig. 2A). Downstream signals, including the activation of Akt, were also markedly promoted by RANKL, but were decreased by pretreatment with nystatin (Fig. 2B). Nystatin also decreased RANKL-induced gastric cancer cell migration from 168.8 to 75.6% (Fig. 2C). These results suggested that the aggregation of lipid rafts was associated with RANKL-induced gastric cancer cell migration. Cav-1 promotes the migration of RANKL-induced gastric cancer cells via interactions with RANK. To investigate the effect of Cav-1 on gastric cancer cell migration, the activation of Cav-1 was examined. The results showed that RANKL not only activated Cav-1 in a time-dependent manner (Fig. 3A), but also triggered an interaction between RANK and Cav-1 (Fig. 3B). The knockdown of Cav-1 by siRNA suppressed RANKL-induced lipid raft aggregation, accompanied by a decrease in the activation of Akt and ERK in MGC803 cells ( Fig. 3C and D). Cav-1 knockdown also significantly reduced RANKL-induced gastric cancer cell migration from 176.2 to 18.5% (Fig. 3E). These results suggested that Cav-1 promoted RANKL-induced gastric cancer cell migration via interactions with RANK. RANKL induces the activity of caveolin-1 via c-Src. 
To characterize the downstream mechanisms occurring due to the activation of Cav-1, the cells were incubated with RANKL over different periods of time and examined for the activation of c-Src. As shown in Fig. 4A, c-Src was rapidly activated and reached a peak at 10 min. The c-Src inhibitor PP2 inhibited the activation of Cav-1 and Akt/ERK (Fig. 4A). The immunofluorescence and Transwell experiments revealed that PP2 significantly suppressed lipid raft aggregation and RANKL-induced migration ( Fig. 4B and C). Collectively, these results suggested that the c-Src-mediated activation of Cav-1 promoted RANKL-induced gastric cancer cell migration. RA NKL-induced migration is suppressed by Cav-1 knockdown. The expression of RANK was examined in a variety of cancer cells by flow cytometry. The results showed that H460 (lung cancer), ACHN (renal cancer) and MDA-MB-231 (breast cancer) cells expressed RANK on their surface (Fig. 5A). The knockdown of Cav-1 by siRNA significantly suppressed RANKL-induced migration of the cancer cells ( Fig. 5B and C). Cav-1 is independently a poor predictive factor for the overall survival rate of patients with gastric cancer. To examine the association between RANK and Cav-1, 228 histologically confirmed gastric cancer samples were selected for investigation. The follow-up time ranged between 3 and 83 months, with a mean follow-up time of 38 months. The immunostaining confirmed that Cav-1 was expressed in 56.5% of patients (Table I), whereas 47.4% were positive for RANK. The correlation between the expression of RANK or Cav-1 and patient characteristics is shown in Table I. The expression of RANK, observed in 58.3% of the diffuse patients, was correlated with Lauren classification. The prognostic value of Cav-1 in patients with RANK-positive cells was also analyzed. Within this population, a higher expression of Cav-1 was correlated with poor survival rate (P=0.025), as the mean overall survival rate of patients was 45 months in the Cav-1-positive arm, compared with 64 months in the Cav-1-negative arm (Fig. 6). In patients with RANK-positive cells, univariate analysis revealed that the positive expression of Cav-1, T stage, N stage and pTNM stage indicated poor prognosis. The multivariate analysis indicated that Cav-1, T stage and N stage were independent predictors for patients with RANK-positive cells (Table II). These results demonstrated that the expression of Cav-1 was predictive of poor prognosis in patients with RANK-positive gastric cancer cells. Discussion The RANKL/RANK pathway is a classical pathway for osteoclast maturation and activation, whereby RANKL interacts with RANK to recruit TNF-receptor associated factor, resulting in the activation of nuclear factor-FB, c-Jun N-terminal kinase, p38, ERK and Akt (27)(28)(29). In breast, lung and prostate cancer cells, the inhibition of PI3K and mitogen-activated protein kinase kinase 1/2 can reduce RANKL-induced migration (30)(31)(32). According to the results of the present study, RANK was expressed in gastric cancer cells. Furthermore, RANKL significantly increased the migration ability of gastric cancer cells, accompanied by the activation of Akt and ERK. As gastric cancer tissues are enriched in infiltrating T cells capable of secreting RANKL, RANKL-induced migration may represent a pivotal mechanism for gastric cancer metastasis. 
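The survival comparison reported above (Cav-1-positive versus Cav-1-negative patients among RANK-positive cases, with univariate and multivariate analyses) was carried out in SPSS. As a rough sketch only, an equivalent analysis in Python could use the lifelines package as below; the patient table, column names and numbers are entirely hypothetical and are not data from the study.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical patient-level data (months of follow-up, death event, covariates);
# all values and column names are illustrative.
df = pd.DataFrame({
    "months":   [12, 45, 60, 33, 80, 25, 70, 18, 55, 40],
    "event":    [1, 0, 0, 1, 0, 1, 1, 1, 0, 1],
    "cav1_pos": [1, 1, 0, 1, 0, 1, 0, 1, 0, 1],
    "t_stage":  [3, 2, 2, 4, 1, 3, 3, 4, 2, 3],
    "n_stage":  [2, 1, 0, 3, 0, 2, 1, 2, 1, 2],
})

# Kaplan-Meier curve and log-rank test: Cav-1 positive vs negative (univariate comparison).
pos, neg = df[df.cav1_pos == 1], df[df.cav1_pos == 0]
kmf = KaplanMeierFitter()
kmf.fit(pos["months"], pos["event"], label="Cav-1 positive")   # kmf.plot_survival_function() draws the curve
result = logrank_test(pos["months"], neg["months"], pos["event"], neg["event"])
print(f"log-rank p = {result.p_value:.3f}")

# Multivariable Cox proportional-hazards model, analogous to the multivariate analysis above.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()
```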
Drugs, including denosumab, which target the RANKL/RANK pathway, likely inhibit this process and can be potentially used as novel therapeutic intervention for treating metastatic gastric cancer. Previous studies have provided evidence in support of the involvement of lipid rafts in cancer cell invasion and metastasis (33)(34)(35). Yamaguchi et al reported the requirement of lipid rafts for invadopodia formation and extracellular matrix degradation in human breast cancer cells (36). Chinni et al showed that C-X-C motif chemokine ligand 12/C-X-C chemokine receptor type 4 transactivates human epidermal growth factor receptor 2 in lipid rafts to promote prostate cancer cell migration (37). In the present study, the finding that RANKL triggered lipid raft aggregation, which was reversed by nystatin, and reduced RANKL-induced migration in gastric cancer cells indicated the importance of lipid rafts in gastric cancer cell migration. Lipid rafts are known to be regulated by other important factors, including Cav-1. Cav-1 can also result in further clustering of lipid rafts mediated by the activation of several downstream signaling pathways (36,38). In the present study, Cav-1 was shown to be involved in RANKL-induced lipid raft aggregation and cell migration. It was confirmed that certain RANK-expressing gastric cancer cells also express Cav-1, which was significantly correlated with the poor prognosis in individuals with RANK-positive cells. Univariate and multivariate analyses demonstrated that the expression of Cav-1 was an independent predictor of poor overall survival rate in these patients. Furthermore, the involvement of Cav-1 in RANKL-induced cell migration was confirmed in several cancer cell lines. These findings indicated that Cav-1 is essential not only for appropriate RANK-localization within the lipid raft, but also for RANKL-induced lipid raft aggregation and cancer cell migration. Although the data obtained in the present study revealed that Cav-1 was rapidly activated by RANKL, the question regarding the key mediator remains unanswered. The tyrosine protein kinase c-Src is known to be involved in the regulation of cellular metabolism, survival and proliferation. In cancer cells, the activation of c-Src results in increased tumor progression, invasion and metastasis (39)(40)(41)(42). Furthermore, RANKL has shown potential in activating c-Src in breast cancer cells (30). Previous reports have suggested that the interaction between Cav-1 and Rho-GTPases promotes metastasis by controlling the activation of c-Src, Ras and Erk (43). In the present study, the activation of Cav-1 accompanied that of c-Src. In addition, the activation of Cav-1, lipid raft aggregation and cell migration were almost completely reversed by the PP2-mediated inhibition of c-Src function, which is an important regulator in several signaling pathways (44). These results suggested that the c-Src-mediated activation of Cav-1 promoted RANKL-induced gastric cancer cell migration. In conclusion, RANKL-induced gastric cancer cell migration is at least partially dependent on lipid rafts and its main component, Cav-1, and is promoted by the activation of c-Src and Cav-1. These findings demonstrate a detailed mechanism underlying the effect of RANK on gastric cancer cell migration. This may shed light on the potential drug targets for novel treatment of metastatic gastric cancer. Acknowledgements Not applicable. 
Availability of data and materials The datasets used during the present study are available from the corresponding author upon reasonable request. Authors' contributions YW, YL and XQ conceived and designed the study. YW, QW, XZ, LZ, JQ, ZL, LX, YZ, KH, YF and XC performed the experiments. YS provided the samples and collected the patient information. XC and YW contributed in the statistical analysis. YW wrote the manuscript. XQ, YL and XC reviewed and edited the manuscript. All authors read and approved the manuscript and agree to be accountable for all aspects of the research in ensuring that the accuracy or integrity of any part of the work are appropriately investigated and resolved. Ethics approval and consent to participate The First Hospital of China Medical University Ethical Committee approved the study. No consent was required due to the retrospective nature of the study. Patient consent for publication No consent was required due to the retrospective nature of the study.
2018-07-21T00:43:42.526Z
2018-07-05T00:00:00.000
{ "year": 2018, "sha1": "1df51af41cfcc4282a15e55f16a29da06b89465d", "oa_license": "CCBYNCND", "oa_url": "https://www.spandidos-publications.com/10.3892/or.2018.6550/download", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "1df51af41cfcc4282a15e55f16a29da06b89465d", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
263827807
pes2o/s2orc
v3-fos-license
Prophylactic antibiotics in septoplasty with intranasal septal splints: A comparative analysis

Postoperative antibiotic therapy is a common practice following septoplasty with intranasal septal splint (ISS) placement, even though there is a lack of evidence to support it. We sought to investigate the role of antibiotic therapy in septal surgeries with the placement of ISS.

Intranasal septal splints (ISS) are commonly used in septoplasty to stabilise the reconstructed septum and prevent the formation of synechia during early mucosal healing.1,2 The effectiveness of ISS, when compared to nasal packing, has been well-established in improving nasal airflow and enhancing patient comfort, thereby driving its widespread acceptance and implementation in clinical practice.3 Nonetheless, the introduction of nasal foreign bodies has raised concerns regarding the potential risk of developing local nasal infections and even toxic shock syndrome.4,5 To minimise the risk of postoperative infections, the practice of prescribing postoperative antibiotic therapy following septoplasty with ISS placement has become increasingly common.6 However, the available data on the efficacy of prophylactic antibiotic therapy in the presence of ISS are limited, leaving uncertainty about its true benefits.

The risk of post-septoplasty infection is estimated at less than 5%.7,8 Notably, even in cases involving nasal foreign bodies such as ISS, the incidence of postoperative infection remains remarkably low.9,10 These findings raise important considerations about the need for routine antibiotic administration in all cases. Furthermore, research studies have revealed that prophylactic antibiotics often fail to effectively prevent bacterial colonisation and biofilm formation on ISS.11,12 Biofilm formation, in particular, poses a significant challenge in the treatment of infections, as it enhances bacterial resistance to antibiotics and contributes to the persistence of the infection.13 Therefore, the indiscriminate use of antibiotics may not only be unnecessary but could also contribute to the development of bacterial resistance, which is a growing concern in the medical field.15,16 Studies investigating ISS cultures in patients treated with prophylactic antibiotics have revealed the growth of bacterial isolates with acquired antibiotic resistance, including multidrug-resistant gram-negative bacteria.10 These findings highlight the potential risks associated with antibiotic therapy and suggest that the practice of routine prophylactic antibiotic administration in septal surgeries with ISS placement may have unintended consequences.

Considering the limited evidence on the efficacy of prophylactic antibiotics, the potential for bacterial resistance development, and the increasing prevalence of multidrug-resistant infections, it is crucial to conduct further research to investigate the role of antibiotic therapy in septal surgeries with the placement of ISS. The aim of the present study was to address this gap in knowledge and provide valuable insights into the appropriate use of antibiotics in this specific clinical scenario.
Key points

• Patients undergoing septoplasty with the placement of ISS are at increased risk of gram-negative bacterial colonisation and development of postoperative nasal infection.
• A single preoperative dose of IV antibiotic therapy should be considered the prophylactic treatment of choice for septoplasty with ISS.
• Diabetes is associated with an increased risk of postoperative infection following septoplasty, regardless of antibiotic regimen.
• The detection of Klebsiella pneumoniae before surgery was associated with an increased rate of postoperative infection.
• An increase in the rates of bacterial resistance was recorded post-operatively, mainly in patients who were treated with antibiotics for 7 days after surgery.

| Study design and subjects

The study protocol was approved by the Institutional Research Ethics Committee. Inclusion criteria for the study were adult patients (≥18 years) who had septoplasty with the placement of ISS, with or without turbinate reduction, and a minimum follow-up period of 1 month. Patients were excluded from the study in the following cases: evidence of chronic rhinosinusitis, immunosuppression, or autoimmune diseases, and when septoplasty was performed in addition to sinus surgery or rhinoplasty. The electronic records of all patients who had septoplasty between March 2015 and April 2020 were screened.

| Surgical procedure

All surgeries were carried out under general anaesthesia and performed via an endonasal approach. The deviated nasal septum was treated via submucosal resection, with or without bilateral inferior turbinate reduction. The initial surgical incision was a standard hemitransfixation or Killian incision, chosen according to the septal pathology and location of maximal deviation. Polyglactin 910 suture (Vicryl Rapide™; Ethicon, CA, United States) was used to close the mucosal incision lines. Intranasal silicone splints (Mackay/Grimaldi Nasal Splint; Exmoor, United Kingdom) were inserted bilaterally in the nasal cavity at the end of surgery. Transseptal silk suture 2-0 (Perma-Hand Silk®; Ethicon, CA, United States) was used to attach the splints to the septum. Nasal packing (Merocel®; Medtronic, MN, United States) was inserted at the discretion of the operating surgeon.

| Prophylactic antibiotic therapy and postoperative care

Prophylactic antibiotic therapy was administered at the discretion of the operating surgeon. Three antibiotic treatment groups were identified: (1) preoperative single-dose IV prophylaxis (group AB-1), given as cefazolin (Cefamezin®, 1000 mg once; Teva Pharmaceutical Industries, Israel), with clindamycin used in patients with suspected beta-lactam allergy; (2) daily oral antibiotics until removal of the ISS on postoperative day (POD) 8 (group AB-7), given as cephalexin (Ceforal®, 500 mg TID; Teva Pharmaceutical Industries, Israel) or amoxicillin-clavulanate (Augmentin™, 875/125 mg BID; GlaxoSmithKline, United Kingdom), according to the surgeon's preference; and (3) no antibiotic therapy (group AB-0). Nasal packing with Merocel®, when inserted, was removed on POD 1. ISS were removed on POD 8. All patients were instructed to perform nasal irrigations using a 10 cc syringe, rinsing each nostril with 0.9% saline solution three times daily for a duration of 3 weeks in the postoperative period. During the initial post-operative visit for splint removal, patients were asked to adhere to the treatment regimen meticulously.
Postoperative local infection was characterised by the presence of nasal cellulitis, vestibulitis, vestibular abscess, septal cellulitis, or septal abscess.

| Bacterial cultures

Bacterial cultures were taken routinely as part of the department's protocol for monitoring infectious diseases. Septal swabs were taken before surgery under sterile conditions. Samples were cultivated on chocolate agar, MacConkey agar, TSA with 5% sheep blood agar and in thioglycolate medium, and were incubated at 37°C and 5% CO2 under aerobic conditions. Cultures were evaluated after 24 h. In cases of microbial growth only in thioglycolate medium, subcultures were transferred to agar plates for further incubation under aerobic and anaerobic conditions. Bacterial identification and susceptibility testing were performed for positive cultures using conventional methods. Positive cultures did not alter prophylactic antibiotic therapy unless clinical signs of infection were observed. Silicone splints were examined for the presence of bacteria after their removal. The splints were removed under sterile conditions and were incubated in thioglycolate medium for 24 h. Subcultures were transferred to agar plates at 37°C under aerobic and anaerobic conditions and were evaluated after 24 h.

| Statistical analysis

All statistical analyses were conducted using IBM SPSS Statistics for Windows, Version 27.0 (IBM Corp, Armonk, NY, USA). Associations between nominal variables were assessed using Pearson chi-square (χ2), McNemar, and Fisher's exact tests. Odds ratios with 95% confidence intervals were reported to estimate the strength of the associations. Effect size (φ) was calculated using Cramer's V test. Associations between continuous and quantitative variables were examined using the Mann-Whitney U-test. Univariate analysis was performed to test potential confounding variables, and multivariate logistic regression was used to assess their associations. A two-sided p-value of <.05 was considered statistically significant for all analyses.

An a priori power analysis was conducted using G*Power version 3.1.9.7 (Faul et al.17) for sample size estimation. The analysis was based on previous studies that reported post-septoplasty infection rates as low as 0.5% in patients treated with antibiotics and up to 13% in non-treated patients.18,19 With a significance criterion of α = .05 and power = 0.80, a minimum sample size of 147 patients was calculated: 49 patients in the AB-0 group and 98 patients in the antibiotic-treatment groups (AB-1 and AB-7).
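The odds ratios with 95% confidence intervals mentioned above can be reproduced with a few lines of standard statistical code. The sketch below is illustrative only: it assumes a Wald-type interval, uses Python with SciPy rather than the SPSS workflow actually used, and the 2x2 counts are inferred from the group sizes and infection rates reported later in the Results (roughly 7 of 48 infections in group AB-0 versus 2 of 98 in the antibiotic-treated groups) rather than taken from the study tables.

```python
import math
from scipy.stats import fisher_exact

def odds_ratio_wald_ci(a, b, c, d):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed with event, b = exposed without event,
    c = unexposed with event, d = unexposed without event."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    z = 1.96  # ~97.5th percentile of the standard normal, for a 95% CI
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts consistent with the reported group sizes and rates:
# group AB-0 (no antibiotics): 7 infections / 48 patients
# groups AB-1 + AB-7 (antibiotics): 2 infections / 98 patients
a, b = 7, 48 - 7   # no-antibiotic group: infected, not infected
c, d = 2, 98 - 2   # antibiotic-treated: infected, not infected

or_, lo, hi = odds_ratio_wald_ci(a, b, c, d)
_, p = fisher_exact([[a, b], [c, d]])
print(f"OR = {or_:.1f}, 95% CI {lo:.2f}-{hi:.2f}, Fisher p = {p:.3f}")
# With these assumed counts the result lands close to the reported
# OR = 8.2 (95% CI 1.63-41.1).
```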
| RESULTS

The study included 146 patients. The main indication for surgery in all patients was nasal airway obstruction due to septal deviation. Group AB-0 (no antibiotic treatment) included 48 patients, group AB-1 (single-dose IV prophylaxis) included 43 patients, and group AB-7 (daily oral antibiotics until removal of the ISS) included 55 patients. Patient characteristics and details of the three treatment groups are presented in Table 1.

Group AB-0 patients were more likely to develop postoperative infection compared to antibiotic-treated patients (OR = 8.2, 95% CI: 1.63-41.1; p = .01, φ = 0.04). Infection rates did not differ significantly between the two antibiotic treatment groups AB-1 and AB-7. An analysis of the association between different demographic and clinical variables and postoperative infection was performed. Diabetes was associated with an increased risk of postoperative infection in all groups (OR = 5.2, 95% CI: 1.15-23.5; p = .032, φ = 0.04). Smoking status, obstructive sleep apnea, allergies, and history of sinusitis were not associated with infection. Nasal packing, which was placed in 15% of patients (all in group AB-7), was also not associated with postoperative infection. A post hoc analysis was conducted to determine the achieved power. With the given group sizes and infection rates, and a significance criterion of α = .05, an adequate power of 0.80 was determined.

The detection of Klebsiella pneumoniae before surgery was associated with an increased postoperative infection rate (p = .017). Seven patients (5%) were found to carry K. pneumoniae in their nasal cavities before surgery. Three of these patients developed postoperative infection (43%), compared with a 3% infection rate in Klebsiella-negative patients (OR = 16.6, 95% CI: 3.02-91.54; p = .001, φ = 0.12). None of the K. pneumoniae strains detected before surgery was antibiotic-resistant. Detection of other bacterial species, as well as bacterial resistance before and after surgery, and identification of multiple bacterial species, were not associated with postoperative infection. Antibiotic treatment (IV prophylaxis or one-week oral therapy) was independently associated with a reduced likelihood of developing postoperative infection (p = .02). Two patients had their ISS removed. Postoperative cultures from the nasal cavity and the ISS identified Staphylococcus aureus in 4 patients, and various types of Enterobacteriaceae in 5 patients (Table 2). All patients with infection were treated with a 7-10-day course of antibiotics. No systemic complications, including sepsis, toxic shock syndrome, or meningitis, were observed in any of the patients.

Data on bacterial cultures and resistance were analysed and are presented in Table 3 and Figure 1. Overall, gram-positive bacteria (52% of patients) were more prevalent before surgery than gram-negative bacteria (38%), and S. aureus was the most common isolate (30%). Postoperative cultures grew 182 isolates in 132 patients (90%). Interestingly, the bacterial detection rate increased postoperatively in groups AB-0 and AB-7 by 13% and 26%, respectively, but did not change in group AB-1. The detection rate of gram-positive bacteria decreased from 52% before surgery to 40% postoperatively. A corresponding increase in the rate of gram-negative bacteria (a 'Gram-negative shift') was documented, from 38% before surgery to 63% postoperatively. This trend was observed in all three treatment groups.

Resistant bacterial strains, excluding coagulase-negative Staphylococcus and Streptococcus viridans, which were considered contaminants, were identified in 8 patients (5%) before surgery. None of the patients had clinical signs of nasal infection prior to surgery. Acquired antimicrobial resistance was documented postoperatively in 14 patients (10%). Cultures from these patients grew gram-negative bacteria with increased resistance to beta-lactam antibiotics; three strains acquired ESBL or AmpC (plasmid-mediated AmpC enzymes) resistance. The rate of acquired resistance was highest among patients who were treated with one-week oral antibiotics, although this difference did not reach statistical significance (+10% in group AB-7, +5% in group AB-1, −2% in group AB-0) (Table 3).

| DISCUSSION

Septoplasty in general is defined as a clean-contaminated procedure, in which prophylactic antibiotics should be considered before first incision.20,21
However, due to the low risk of postoperative infection, a panel of experts of the American Academy of Otolaryngology-Head and Neck Surgery (AAO-HNS) reconsidered the role of routine perioperative antibiotics in septoplasty. In a clinical statement published in 2015, the panel reached a consensus that antibiotics have no benefit in routine septoplasty in patients without nasal packing or splint placement.22 However, no conclusion was reached concerning prophylactic antibiotic treatment following septoplasty with ISS. Our study provides a comprehensive analysis of the effects of prophylactic antibiotic therapy following septoplasty with ISS. In the present study, the rate of infection in patients after septoplasty with ISS was low at 6%, which is consistent with previously published rates of 0.5%-12%.18,23 All cases of infection were resolved after a 7 to 10-day course of antibiotics without further complications.

| Antibiotic prophylaxis for septoplasty with ISS

Thus far, large prospective studies have reported no significant differences in post-septoplasty infection rates between patients given prophylactic antibiotic therapy and placebo groups. In these studies, Ricci et al.8 did not use ISS, and Lilja et al.24 reported using ISS only in a minority of cases (57/188 subjects). Other studies have examined the rate of postoperative infection in the presence of septal splints. Two prospective studies reported no infections in patients with ISS who were treated with prophylactic antibiotics.13,25 Other studies reported low rates of postoperative infection (1%-2%) following septoplasty with ISS without prophylactic treatment.10 None of these studies compared patients who received antibiotic therapy with those who did not. The present study, to the best of our knowledge, is the first to compare different antibiotic treatment groups.

The rate of postoperative infection was significantly higher in patients who were not treated with antibiotics (group AB-0). Patients in this group were up to 8 times more likely to develop post-septoplasty infection compared to patients treated with antibiotics. The rate of infection in the AB-7 and AB-1 antibiotic treatment groups was 4% and 0%, respectively. These findings suggest a potential role for prophylactic antibiotics in the prevention of infections in patients undergoing septoplasty with ISS placement.

FIGURE 1 Alteration in bacterial growth and resistance following septoplasty.

| Additional risk factors for postoperative infection

Diabetes was associated with an increased risk of postoperative infection regardless of prophylactic antibiotic therapy. Diabetes in general is considered a risk factor for the development of surgical site infections, presumably due to the effects of hyperglycemia on immune system function.26 A recently published population-based Taiwanese study reported an association between type-2 diabetes mellitus and the development of septal abscess after septoplasty.27 These findings support the need for close monitoring and further research on prophylactic antibiotic treatment in diabetic patients undergoing nasal surgery.

Interestingly, non-resistant K. pneumoniae carriers were up to 16.5 times more likely to develop postoperative infection, regardless of antibiotic prophylaxis. K. pneumoniae has previously been described as a pathogen involved in infections after nasal surgeries.28
However, to our knowledge, this is the first report of a significant association between the detection of the bacteria preoperatively and a higher risk of infection in patients after septoplasty with ISS. The significantly higher infection rate in carriers suggests a potential role for nasal screening for K. pneumoniae before surgery. The reported susceptibility of K. pneumoniae to cefazolin is >80%,29 and patients with non-resistant strains are expected to respond well to antibiotic prophylaxis given before incision.

| Gram-negative shift and bacterial resistance

The overall rate of positive cultures increased after surgery by 13%. Interestingly, the increase in bacterial growth was limited to the oral-treatment and no-treatment groups only (by 39% and 16%, respectively). Conversely, the rate of bacterial growth in patients treated with antibiotic prophylaxis before first incision did not change after surgery. The increase in bacterial colonisation after surgery may be explained by several mechanisms, including disruption of the mucosal lining with migration of pathogens,30 and formation of biofilm on ISS.13,31 Thus, providing antibiotic protection before disrupting the mucosal lining in the clean-contaminated nasal environment is deemed necessary to decrease the rate of bacterial colonisation during and after surgery, and reduce the risk of postoperative infection.

As expected, gram-positive bacteria were more commonly identified before surgery (52%), and S. aureus was the most frequently isolated bacterial species (30%), in agreement with previous studies.25 The proportion of gram-negative bacteria increased substantially after surgery in all three treatment groups (overall from 38% to 63%), indicating a disruption of the normal nasal flora regardless of antibiotic treatment. Not surprisingly, gram-negative pathogens were involved in 78% of postoperative infections.

In contrast to the increase in gram-negative bacteria in all treatment groups, an increase in the rate of bacterial resistance was recorded after surgery mainly in patients treated with antibiotics (group AB-7 by 10% and group AB-1 by 5%). Conversely, the rate of resistant strains decreased postoperatively in patients not treated with antibiotics (group AB-0, by 2%). These findings support the theories according to which the emergence of antibiotic resistance is a direct result of selective pressure exerted by antimicrobial agents.11 The emergence of resistance, particularly ESBL or AmpC, puts patients at risk of difficult-to-treat infections. According to a recent Centers for Disease Control and Prevention report, E. coli, K. pneumoniae, and S. aureus are among the leading causes of resistance-related mortality.32 These pathogens were detected in the present study in approximately 80% of postoperative infections.

| Final considerations

Direct comparisons between transseptal sutures and ISS, independent of other packing methods, remain limited.34,35
Undoubtedly, adopting techniques that eliminate foreign bodies from the nasal cavity holds the potential to significantly reduce the risk of postoperative infection and eliminate the need for antibiotic prophylaxis. However, additional evidence is necessary to substantiate this assertion. Moreover, given the ongoing prevalent use of ISS, whether routinely or selectively, particularly in cases like septal perforations, our study's findings offer valuable insights into the risk of ISS-related infections, the impact of ISS on microbiology, and the potential advantages of prophylactic therapy.

| Study limitations

The major limitation of this study stems from its retrospective design and potential selection bias. Specifically, the decision to administer antibiotics, as determined by the operating surgeon, may have been influenced by the severity of the septal disease and the complexity of the surgical procedure. Additionally, the disturbance of the microbiome could potentially vary depending on the extent of the disease and the scope of the surgery. Different surgical techniques and surgeon preferences should also be considered potential sources of bias. In addition, the study was limited to patients with ISS, and may not apply to septoplasty without the placement of septal splints. Nevertheless, the findings of the study provide an insight into the effects of prophylactic antibiotic therapy on bacterial colonisation and resistance patterns after septoplasty with ISS. The ability to compare between treatment and control groups allowed us to demonstrate a significant association between different treatment protocols and the risk of postoperative infections.

| CONCLUSIONS

Septoplasty with ISS increases the risk of gram-negative bacterial colonization and postoperative nasal infection. Antibiotic therapy administered at the time of induction of anesthesia may be more effective in reducing bacterial load and antibiotic resistance than a one-week oral regimen. The study suggests that preoperative single-dose IV prophylaxis may also be effective in reducing the risk of postoperative infection, although larger studies are necessary to reinforce the findings and guide clinical decisions. Finally, special consideration should be given to diabetic patients undergoing nasal surgery, and further research is recommended to determine the role of preoperative nasal screening for K. pneumoniae.
2023-10-12T06:18:02.725Z
2023-10-10T00:00:00.000
{ "year": 2023, "sha1": "1742deaa2e0caa004dfbb47be371cbd49c21c91d", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/coa.14104", "oa_status": "HYBRID", "pdf_src": "Wiley", "pdf_hash": "aee771f6fc47d4983e471ded7999a0d2b5a6c4ab", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
269727746
pes2o/s2orc
v3-fos-license
Malignant glomus tumor of the breast: a case report

Malignant glomus tumor (MGT) is a rare mesenchymal neoplasm. It is rarely located in the breast. We present the case of a 57-year-old female patient presenting with a progressively growing mass in her left breast. Although multiple imaging examinations were performed, especially multimodal ultrasound examinations, an accurate diagnosis could not be determined. Finally, the lesion was confirmed to be an MGT of the breast by postoperative pathological diagnosis. In conclusion, MGT originating from the breast is extremely rare; no such case has been described before. This study demonstrates the imaging characteristics of a patient with MGT of the breast in order to provide broader insight into the differential diagnosis of breast lesions.

Introduction

Glomus tumors (GTs) are rare mesenchymal neoplasms, accounting for about 2% of soft tissue tumors and occurring most frequently in the subungual region of the distal extremities (1). Due to their extremely low incidence and lack of characteristic imaging features, the diagnosis of GTs mainly relies on histopathology and immunohistochemistry (2). Most GTs are regarded as benign tumors, while malignant glomus tumors (MGTs) are extremely rare, constituting less than 1% of GTs (3). So far, only five cases of GTs occurring in the breast have been reported, and all of them were benign. To our knowledge, this is the first case of MGT originating from the breast.

Case presentation

Physical examination revealed a fixed, lobulated, moderately hard mass of approximately 70 mm in diameter in the left breast, with no nipple discharge or nipple retraction. No enlarged axillary lymph node was palpable. Meanwhile, tumor markers such as carcinoembryonic antigen (CEA) and carbohydrate antigen 19-9 (CA 19-9) were within the normal ranges. A mixed solid and cystic lump (measuring 64 mm × 37 mm × 49 mm) with an unclear boundary was found in the upper outer quadrant of the left breast, and punctate echogenic foci were not detected on conventional ultrasonography (Figure 1A). According to the American College of Radiology Breast Imaging Reporting and Data System (ACR BI-RADS), this nodule was assigned to the BI-RADS 4b category. Color Doppler blood flow imaging showed no significant blood flow signal within the nodule, with several blood flow signals observed in the periphery (Figure 1B). Spectral Doppler examination showed the resistance index to be 0.74. Moreover, point shear wave elastography (SWE) assessment revealed that the lesion was of low stiffness (Figure 1C). Furthermore, contrast-enhanced ultrasound (CE-US) was performed with the injection of SonoVue (Bracco, Milan, Italy). The lesion showed homogeneous hyperenhancement, and its enhancement pattern was centripetal, filling from the periphery toward the center (Figures 1D-F). Mammography indicated that the left breast exhibited heterogeneous density. A high-density mass (Figure 2A) measuring approximately 6.5 × 4.5 cm was identified in the upper outer quadrant of the left breast, displaying irregular morphology with clear margins. No abnormal calcifications were observed within the lesion. According to the Breast Imaging Reporting and Data System (BI-RADS), this lesion was classified as BI-RADS 4b. Remarkably, no abnormal axillary lymph nodes were found in ultrasound, X-ray examination, or lymph node scintigraphy. The artificial intelligence analysis of the left breast mammography indicated a high risk of
malignancy in the lesion detected in the left breast. In addition, contrast-enhanced computed tomography (CE-CT) not only revealed a poorly defined soft tissue mass (Figure 2B) on the outer side of the left breast but also identified numerous lesions (maximum diameter, 30 mm) in multiple organs, including the lungs, liver, left adrenal gland, both kidneys, pancreas, spleen, and colon. Contrast-enhanced magnetic resonance imaging (CE-MRI) revealed multiple abnormal enhancing lesions (maximum diameter, 13 mm) in the intracranial and head soft tissues. The radiologist strongly suspected that the aforementioned enhancing lesions outside the breast were metastatic lesions.

Ultimately, the patient underwent resection of the lesion of the left breast for further diagnosis. Axillary lymph node dissection was not performed because no abnormal lymph nodes were found by the different imaging modalities. This surgery did not remove the lesions outside of the breast. The histologic examination of the tumor showed that it consisted of abundant, regular, oval tumor cells with clear boundaries (Figure 3A). Prominently pleomorphic nuclei were observed in the cells of the tumor. On immunohistochemistry, the tumor had strong collagen IV (Figure 3B) and strong smooth muscle actin expression (Figure 3C), while desmin (Figure 3D), STAT6 (Figure 3E), epithelial membrane antigen, S-100 protein, CD31, and CD34 were negative. The expression of Ki-67 (Figure 3F) was 60%. Finally, the lesion was confirmed to be an MGT of the breast by histopathology.

After surgery, owing to the patient's refusal of radiation and chemotherapy, targeted therapy with anlotinib was administered. Unfortunately, the patient passed away after 3 months, possibly due to multiple organ failure caused by metastases.

Discussion

MGT is a rare malignant tumor with a much lower incidence than its benign counterpart. The majority of MGTs occur in the subungual region of the distal extremities, while some can also occur in extracutaneous areas, such as the gastrointestinal tract, lungs, kidneys, and thyroid. To the best of our knowledge, only five benign glomus tumors occurring in the breast have been reported in the literature, and we report the first case of MGT arising from the breast (4)(5)(6)(7)(8).
MGT has a very high tendency for distant metastasis, and the most common sites of metastases are the brain, liver, lung, and lymph nodes (9, 10). In our case, a large number of abnormal lesions were also found in extra-mammary sites, such as the brain, liver, lung, and other organs. Regrettably, we could not obtain pathological findings because the patient refused surgery or biopsy of the extra-mammary lesions. However, metastases were still considered the most likely diagnosis according to the imaging findings of those lesions on CE-CT and CE-MRI. Most of the patients with malignant glomus tumor died soon after the diagnosis because of tumor progression and distant metastases (2). The patient in our case also died 3 months after diagnosis. Therefore, we speculate that the extra-mammary lesions were metastases originating from the MGT of the breast. Notably, no abnormal lymph nodes were found on ultrasound, CT, or lymph node scintigraphy, which is why lymph node dissection was not performed when the lesion of the left breast was removed. We hypothesized that the principal type of tumor metastasis in our case was hematogenous rather than lymphatic. The absence of abnormal lymph nodes also made it difficult to judge whether the tumor was benign or malignant when imaging was performed.

The initial preoperative radiographic diagnosis of MGT can be difficult and error-prone. MGT usually manifests as a hypoechoic solid or mixed solid-cystic tumor on conventional ultrasound (11). Those features are consistent with our case. Previous studies have reported that MGTs usually show abundant blood flow signals on color Doppler, which has some diagnostic value (12). However, our case showed a low blood flow signal. This suggests that the features of MGT on color Doppler may be variable. Our study also describes imaging findings on CE-US and elastography, which are almost absent from the previous literature. The MGT in our case showed homogeneous hyperenhancement on CE-US, and its enhancement pattern was centripetal, filling from the periphery toward the center. The CE-US findings of MGT are similar, to some extent, to those of some cavernous hemangiomas. Previous studies have concluded that MGT and hemangioma have comparable imaging characteristics on conventional ultrasound and MRI (13). Therefore, we speculate that the overlap between the CE-US manifestations of the two is reasonable, and this CE-US feature may have potential diagnostic significance for MGT. The MGT in our case was soft on ultrasound elastography. This feature makes the lesion more likely to be misdiagnosed as a benign lesion of the breast. However, there are few available reports that describe the characteristic imaging features of extradigital MGT, and even fewer provide a comprehensive analysis of ultrasonographic features in detail (8). Therefore, further cases are needed to confirm our findings.
Mammography is one of the most commonly used imaging methods to detect breast masses owing to its convenience, affordability, and high sensitivity to calcifications, making it highly favored by clinicians (14). Nonetheless, due to its relatively low sensitivity and the associated risk of ionizing radiation, mammography is primarily utilized for screening purposes (15). Both mammography and traditional ultrasound primarily concentrate on morphological alterations in breast masses, which can potentially lead to misdiagnosis and overlooked cases (16). Tumor angiogenesis is intricately linked with tumor progression, infiltration, and metastasis. By focusing on the distinctive features of the tumor microvasculature, the accuracy of disease detection can be significantly enhanced (17). Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) stands out as the predominant imaging modality in this domain (18). The extent of enhancement and kinetic parameters derived from DCE-MRI have been shown to correlate closely with the histopathological changes associated with angiogenesis. DCE-MRI exhibits notable sensitivity, surpassing that of mammography and ultrasound imaging, particularly in detecting invasive cancer, with sensitivity levels nearing 100% (17). Moreover, DCE-MRI remains unaffected by factors such as breast tissue density, scar tissue, prior radiotherapy, or breast implants (19). Apart from its diagnostic utility in identifying breast masses, DCE-MRI also serves as a valuable tool for the early prognosis and evaluation of neoadjuvant chemotherapy response in breast cancer, thanks to its ability to assess tumor microvascular perfusion (20). Presently, the comprehensive evaluation of tumor vascularity via DCE-MRI has emerged as a pivotal aspect of diagnosing and managing malignant breast tumors. Unfortunately, in the present case, DCE-MRI was not conducted, thereby precluding assessment of the MGT presentation on DCE-MRI.

In addition to DCE-MRI, superb microvascular imaging (SMI) has emerged in recent years as a notable technique to evaluate the microvascular supply in breast tumors. SMI, utilizing innovative Doppler technology, employs multidimensional filtering to segregate blood flow signals from clutter, thus eliminating unwanted artifacts while preserving slow intravascular signals, all without the need for contrast agents (21). Studies have demonstrated that SMI offers superior resolution in depicting microvascular blood flow patterns and the angiogenesis characteristic of malignant breast tumors compared to conventional color Doppler flow imaging and power Doppler imaging (22). Some research findings suggest that SMI can highlight more penetrating vessels in breast cancer cases, aiding in distinguishing between benign and malignant lesions in breasts without detectable abnormalities, especially those categorized as BI-RADS category 4 breast lesions (21,23). Nevertheless, further investigations are warranted to ascertain whether the diagnostic efficacy of SMI is comparable to that of CE-US.
MGT is an exceedingly rare stromal tumor. Histologically, MGT typically originates from glomus body cells (3). Glomus bodies, normally found in the dermis and subcutaneous tissue, are contractile neuromyoarterial receptors that regulate blood flow, primarily located in areas such as the palms, wrists, forearms, and beneath the toenails (8). However, reports exist of glomus tumors occurring in atypical locations, such as bone, the respiratory tract, cheeks, earlobes, tongue, stomach, sacrum, and buttocks (8). The underlying mechanism remains unclear, contributing to the frequent misdiagnosis of MGT in such locations (10). MGT typically comprises numerous round tumor cells with enlarged nuclei and prominent nuclear division, and the tumor cells surround blood vessels as they grow (4). Although breast stromal tumors are relatively rare, they encompass a diverse range of entities. From a pathological standpoint, the differential diagnosis of MGT includes glomus tumor, cellular or cavernous hemangioma, and paraganglioma. Glomus tumors are benign, with minimal tumor cell atypia and rare mitotic figures. While focal areas resembling cavernous hemangiomas may be observed in some malignant glomus tumors, hemangiomas typically express thrombomodulin, CD31, and CD34, with negative SMA, aiding in the differential diagnosis. Paraganglioma and malignant glomus tumor share histological similarities, but paragangliomas specifically express neuroendocrine markers such as chromogranin A and synaptophysin while lacking SMA expression. Additionally, unlike MGT, solitary fibrous tumors express STAT6, melanomas express S100, and neuroblastomas lack SMA expression (4,24).

The imaging differential diagnosis for MGT in our case mainly includes breast carcinoma, breast phyllodes tumor, and breast hemangioma. Breast cancer is the most common malignancy among women (25). It usually presents as a painless, firm, fast-growing mass. As for ultrasonic features, breast cancer usually shows irregular morphology and indistinct borders with microcalcification and a high aspect ratio on conventional ultrasound, non-homogeneous enhancement on CE-US, and a stiff mass on SWE (26). These findings differed from our case. Phyllodes tumors of the breast are common fibroepithelial neoplasms, classified as benign, borderline, or malignant phyllodes tumors (27). The main sonographic appearance of a phyllodes tumor is a lobulated mass. No significant difference has been observed in lesion boundary, orientation, posterior acoustic features, or echo pattern between benign and borderline or malignant phyllodes tumors at sonography (28). The shape of our case is similar to a phyllodes tumor to a certain extent, so the lesion and a phyllodes tumor cannot be distinguished by sonographic appearance. Hemangioma is a rare benign vascular tumor of the breast (29). It typically presents as a hypoechoic, well-circumscribed oval mass and is located more superficially in the papillary dermis or epidermis (13). Color Doppler usually reveals a rich blood flow signal in hemangioma (4). These findings also differed from our case.

Conclusion

We report an extremely rare case of MGT originating from the breast, which has never been described before. Due to the low incidence and deep location, the ultrasonic manifestations of MGT are rarely reported. Although pathologic confirmation is required for the final diagnosis of MGT, we have described the appearance of MGT on multiple ultrasound modalities, hoping this will be useful in the diagnosis of MGT.
FIGURE 1 Multimodal ultrasound performance of malignant glomus tumor in the breast. (A) Conventional gray-scale sonography revealed a mixed solid and cystic lump (arrow) in the breast. (B) Color Doppler flow imaging showed several blood flow signals that were observed in the periphery of the tumor (arrow). (C) Shear wave elastography showed a soft nodule (arrow) of the breast. (D) The contrast-enhanced ultrasound image captured 18 s after the injection of the contrast agent. (E) The contrast-enhanced ultrasound image captured 34 s after the injection of the contrast agent. (F) The contrast-enhanced ultrasound image captured 118 s after the injection of the contrast agent.

FIGURE 2 Mammography and contrast-enhanced CT (CE-CT) findings of malignant glomus tumor in the breast. (A) Axial mammography image showing a high-density lesion (arrow). (B) CE-CT revealed a poorly defined soft tissue mass (arrow).
2024-05-12T15:19:53.479Z
2024-05-09T00:00:00.000
{ "year": 2024, "sha1": "c0980399e984663d2baeaa6cbd2c986a24d85e40", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/journals/oncology/articles/10.3389/fonc.2024.1393430/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "27c2742008d5e3a8a8c2c61bc475c8c81eaa2c32", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
53484612
pes2o/s2orc
v3-fos-license
Information Retrieval in Domain Specific Databases: An Analysis to Improve the User Interface of the Alcohol Studies Database

Introduction

Academic libraries are becoming more directly involved in the design and publishing of electronic information resources, including bibliographic databases, electronic journals, and digital archives. These new roles represent a challenging future for librarians who want to utilize their technology and design skills. 1 The Scholarly Communication Center of Rutgers University Libraries (RUL) and the Center of Alcohol Studies (CAS) at Rutgers have collaborated to provide Web access to the Alcohol Studies Database (ASDB). The ASDB contains more than 60,000 citations of documents indexed by the CAS since 1987. 2 The primary focus of the database is on research and professional materials dealing with beverage alcohol and its use and related consequences. Although a growing amount of literature on other drug use/abuse has been added in recent years, this material represents only a small percentage of the database and is not indexed to the same depth as the alcohol literature. In addition to the research and professional material, the database includes a small collection of educational and prevention materials, including audiovisuals suitable for students and educators K-12, parents, community workers, and the general public.

At the outset, the author, working with librarians at the CAS, wanted to quickly develop a Web-based user interface for the ASDB. The database was originally non-networked; a controlled vocabulary of indexing terms had been developed and each article was extensively indexed with these terms. This vocabulary became the key component of the online search interface. The initial Web-accessible database and user interface were completed late in 1999 using the approach and technology described in an article by the author in 2001. 3 As a result, users at Rutgers University and throughout the world gained access to this important and freely available collection of medical and scientific research dealing with the use of alcohol and the related consequences.

Subsequent to this introduction on the Web, the author designed a statistical gathering and reporting subsystem that was implemented in October 2000. The transaction logs now contain more than two full years of search statistics that have assisted researchers in making decisions about how to improve the user interface. Based on the transaction log analysis and extensive ASDB team discussions, an improved user interface was launched in February 2002. This article summarizes the data from the transaction logs and compares search results from the initial user interface and the improved user interface.

User Interface Overview

A partial image of the initial user interface showing the controlled vocabulary pick-lists is shown in figure 1. This partial image shows three primary subject-related pick-lists labeled as follows: physiological aspects, social aspects, and drug terms.
Each pick-list has some thirty or more controlled vocabulary terms that can be selected by the user to form a query. In addition to these lists, a user can select items from two additional pick-lists, format and special populations (not shown), that will further constrain the query. Finally, author and title word or phrase searching also is available. Online help instructions are available on the top navigation bar and "example" links to screen images are provided for each type of search box to demonstrate clearly how one would specify a query.

Complex Queries

The user can form simple or quite complex Boolean queries with the ASDB interface. For example, one could simply do a search for a specific author or a search on a word or phrase that might be found in the title of an article. However, the user also has the capability to form complex Boolean operations by selecting multiple items from any one of the three primary pick-lists. Multiple items selected within a pick-list default to a Boolean "or" and the user also can use the toggle switch between the major pick-lists to select either an "and" or an "or" between these major categories. The default Boolean operation between search boxes is an "and." The example in figure 1 illustrates a more complex Boolean search, ([AIDS: HIV and Alcohol] AND [Aggression and Alcohol]) OR (AIDS: HIV and Drugs), with the properly parenthesized result shown at the top of the figure. After forming a query, the user can then select "search" at the bottom of the screen, which will yield a set of summary results, each of which can then be selected to view the full bibliographic citation.

FIGURE 1 Controlled Vocabulary in the Initial User Interface

Results Display

After a user selects the "search" button, each resulting bibliographic record is displayed in summary form, ordered by publication date with the most recent first. Within publication year, there is a secondary ordering by author. Note that relevance orderings are not appropriate because there are no abstracts or full-text content that can be used to make relevance decisions.

Approach and Methodology

The ASDB is a domain-specific database that contains bibliographic records of more than 60,000 citations, primarily to journal articles and books relating to beverage alcohol and its use and related consequences. The use of transaction logs is one primary method of improving user interfaces and, thereby, also improving the information retrieval performance for users. Transaction logs have been used successfully to improve user interfaces of traditional OPACs in libraries. 4 This article discusses the use of transaction logs to improve the user interface for the ASDB. The logs analyzed herein represent usage from October 2000 through September 2001 for the initial user interface and usage from February 2002 through April 2002 for the improved user interface. The ASDB is a research-oriented database and, by Web standards, it is not heavily used; however, the transaction log contains a significant statistical representation and usage continues to grow as more people discover the availability of the ASDB. At the writing of this article, the author and colleagues were seeing between 1,300 and 1,500 searches a month during the standard academic fall and spring semesters. The objectives of this analysis were to understand user behavior, analyze failure rates, and identify improvement areas for the user interface.
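Before turning to the analysis methodology, the query-construction logic described above under Complex Queries can be sketched in a few lines. The sketch is purely illustrative and assumed, since the article does not describe the ASDB implementation at this level of detail; it shows how terms selected within a pick-list are OR'ed and how the AND/OR toggles between the primary pick-lists combine the groups into the kind of parenthesized expression shown in figure 1.

```python
# terms selected in each primary pick-list (terms within a list are OR'ed)
selections = {
    "physiological": ["AIDS: HIV and Alcohol"],
    "social": ["Aggression and Alcohol"],
    "drug": ["AIDS: HIV and Drugs"],
}
# user-chosen connectives between consecutive primary pick-lists
between = ["AND", "OR"]  # physiological AND social, then OR drug

def group_expr(terms):
    """OR together the terms chosen within one pick-list."""
    joined = " OR ".join(f"[{t}]" for t in terms)
    return joined if len(terms) == 1 else f"({joined})"

def build_query(selections, between):
    """Fold the per-list groups left to right with the chosen connectives,
    parenthesizing as we go so the precedence is explicit."""
    groups = [group_expr(selections[a])
              for a in ("physiological", "social", "drug") if selections.get(a)]
    expr = groups[0]
    for connective, group in zip(between, groups[1:]):
        expr = f"({expr} {connective} {group})"
    return expr

print(build_query(selections, between))
# (([AIDS: HIV and Alcohol] AND [Aggression and Alcohol]) OR [AIDS: HIV and Drugs])
```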
The analysis methodology used in this article is similar to that described by Jansen and Spink. 5 Although many improvement areas were discovered through the analysis, a specific objective was to reduce the number of searches that resulted in either zero hits or greater than 100 hits. These types of outcomes were considered potential failures of the user interface. Based on identified improvement areas, an improved user interface was launched in February 2002. Data from the initial and the improved user interfaces are compared to determine how the changes have improved the ability of searchers to use the ASDB. The following levels of analysis as reported by Jansen and Spink will be used.

Session

The session is the entire sequence of queries entered by the user. Heuristics will be used to define a session because the ASDB does not employ any type of "log-in" scenario that would accurately register each user. For the purposes of this article, the session will be defined as those queries submitted consecutively by a single IP address and not separated by more than twenty minutes. The twenty-minute interval was arrived at by visual inspection of the intervals that occur in the transaction log. Although it is conceivable that another user may have started another session with the same IP address and within a twenty-minute time frame, this condition is highly unlikely. It should be noted that a session can have a single query.

Query

Sessions are composed of queries. A query within the context of the ASDB is defined when a user selects the "search" button and an entry is written into the transaction log. For the purposes of this article, the concepts of initial query and modified query will be used. The initial query is the first query in a session, and the modified query is a subsequent query in a session that is different from the initial query. Query length is measured by the number of terms used, and query complexity is determined by the use or absence of Boolean expressions.

Term

Within the ASDB, a term is defined as any controlled vocabulary term that is selected from one of the five pick-lists in the user interface (i.e., physiological aspects, social aspects, drug aspects, special populations, and special format). A term also can be an author's name or words/phrases entered into the "title phrase" search box and which might be separated by the Boolean operators of AND/OR.

The Statistical Gathering and Reporting Subsystem

The author designed the statistical subsystem to capture as many data as possible about the user search behavior. Every aspect of the user query is captured, including search terms and how the user has toggled the AND/OR selection between major subject areas in order to create a Boolean expression. Each search is associated with a unique user identification, although users always remain anonymous. In addition, the results of each search are recorded, including the number of results generated and a time-date stamp. Because users do not register to search the ASDB, some mechanism was needed to identify a user session. The time-date stamp in conjunction with the IP address is used to track the concept of a "session" as discussed above. It should be noted that the statistical system only records data from users who conduct a search of the ASDB. Any data regarding users who are just visiting and who do not conduct a search, sometimes referred to as "tourists," is not recorded. 6
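The session heuristic just described (consecutive queries from the same IP address, separated by no more than twenty minutes) is straightforward to apply to a transaction log. The following is a minimal sketch, assuming the log has already been parsed into (IP address, timestamp) pairs; the use of Python and the field layout are illustrative assumptions, not a description of the actual ASDB reporting subsystem.

```python
from datetime import datetime, timedelta

SESSION_GAP = timedelta(minutes=20)

def sessionize(log_entries):
    """Group log entries into sessions: consecutive queries from the same
    IP address with no more than a 20-minute gap between them.
    log_entries: iterable of (ip, timestamp) tuples, assumed time-ordered."""
    last_seen = {}   # ip -> timestamp of that ip's previous query
    session_of = {}  # ip -> index of that ip's current session
    sessions = []    # list of lists of (ip, timestamp)

    for ip, ts in log_entries:
        prev = last_seen.get(ip)
        if prev is None or ts - prev > SESSION_GAP:
            session_of[ip] = len(sessions)  # start a new session
            sessions.append([])
        sessions[session_of[ip]].append((ip, ts))
        last_seen[ip] = ts
    return sessions

# toy example: three queries from one address, the last after a long gap
log = [
    ("128.6.1.1", datetime(2001, 3, 1, 10, 0)),
    ("128.6.1.1", datetime(2001, 3, 1, 10, 5)),
    ("128.6.1.1", datetime(2001, 3, 1, 11, 0)),
]
print([len(s) for s in sessionize(log)])  # [2, 1]: two sessions
```

Note that a session containing a single query falls out of this grouping naturally, consistent with the definition above.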
The reporting subsystem provides several types of summary reports, including total number of searches, searches with zero results, searches with more than 100 results, searches by month, and searches by major domain. In addition, the database administrator can select a more detailed report to see all the fields and options for a particular search. Table 1 shows the demographics by domain for the users of the ASDB. Although usage is predominantly from the United States, users are coming to the ASDB from all over the world.

Zero-hit Outcomes

Many studies have analyzed user difficulties with the syntax and semantics of Web searching. One paper has reported that more than 30 percent of searches of a university Web site resulted in zero-hit outcomes. 7 In an earlier paper, T. Peters reported that 40 percent zero-hit outcomes are common in his specific academic library OPAC. 8 Table 2 shows the distribution of hits in four different ranges, including zero-hit outcomes. The table indicates that the initial user interface of the ASDB is incurring 33.6 percent zero-hit outcomes, where N = 10,267 is the total number of searches. The improved user interface has a marked reduction in zero-hit outcomes at 27.8 percent. Table 3 provides summary-level statistics for sessions and queries for both the initial (N = 10,267) and improved user interfaces (N = 3,375). From these summary statistics, it is obvious that the sessions are relatively short (e.g., 2.45 queries in the initial UI). From an examination of session length, it is apparent that 71.1 percent of all sessions in the initial UI have either one or two queries and 80.6 percent have one or two queries in the improved user interface. In other analyses, researchers have found similar results, speculating that users are either unwilling or unable to expend the effort to develop effective search strategies. 9

Analysis: Zero-hit Outcomes

The zero-hit outcomes are a fruitful area for examination and will generally reveal a wealth of information regarding the effectiveness of a user interface. This analysis will proceed by examining the zero-hit outcomes of the initial user interface in more detail. Table 2 shows that 33.6 percent of the searches (3,454 out of 10,267) using the initial user interface resulted in zero hits. In the improved user interface, zero-hit outcomes have been significantly reduced to 27.8 percent. Of the zero-hit outcomes in the initial user interface (N = 3,454), 595 searches attempted some type of author search.

Author Searching

There were obvious syntactical and semantic errors with author searching. Generally, the semantic errors will be more difficult to detect and correct. For example, there were a few users who confused the author search field with a keyword search field and searched on phrases such as "Advocacy 1992" or used a term that was obviously subject related rather than an author. These errors were uncovered by visual inspection of the logs and are reported in table 4 as "incorrect AU semantics." In addition, a number of author search syntax errors were evident in the transaction log that could clearly be eliminated or minimized by improving the user interface. The following illustrate some specific examples that do not follow the conventions that are documented as part of the ASDB user interface (a sketch of simple checks for these patterns follows the list):

1. typing in the first name "first" (e.g., "Bill Wilson");
2. typing initials with no blanks (e.g., "Epstein, J. A.");
3. omitting comma delimiters that separate the last name from the first initial (e.g., "borg s").
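The documented author-search convention, as inferred from these examples (last name, a comma, then space-separated initials, e.g. "borg, s"), lends itself to a simple validation step that catches the three error patterns above and asks the user to reformulate. The sketch below is a hypothetical illustration: the expected format is an inference from the examples in the text, and the actual checks added to the improved ASDB interface are not published.

```python
import re

# Inferred convention: "lastname, x y" -- last name, a comma, then
# one or more single-letter initials separated by blanks.
AUTHOR_PATTERN = re.compile(r"^[A-Za-z'\-]+,( [A-Za-z]\.?)+$")

def check_author_query(query):
    """Return None if the query follows the convention, otherwise a
    message asking the user to reformulate the author search."""
    q = query.strip()
    if AUTHOR_PATTERN.match(q):
        return None
    if "," not in q:
        return "Missing comma: enter the last name first, e.g. 'borg, s'."
    if re.search(r",\s*[A-Za-z]\.?[A-Za-z]", q):
        return "Separate initials with blanks, e.g. 'epstein, j a'."
    return "Author searches should look like 'lastname, initials'."

for example in ["Bill Wilson", "Epstein, J.A.", "borg s", "borg, s"]:
    print(example, "->", check_author_query(example))
```

A check of this kind cannot repair semantic errors (a subject term typed into the author box), but it flags the mechanical syntax problems before the query reaches the database.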
Errors of this type comprise a significant percentage (32.6%) of the author search zero-hit outcomes as shown in table 4 as "incorrect AU Syntax." In the improved user interface, more restrictive syntax checking was implemented with a request to the user to reformulate the author search according to the required syntax conventions for author searching. Although some types of incorrect syntax still have not been detected, the improved user interface has significantly reduced the zero-hits due to incorrect author syntax to only 12.8 percent (table 4). Title Searching In the initial ASDB user interface, phrase searching was implemented; however, quoted phrases and the use of an asterisk as a truncation symbol were not permitted. Given the prevalence of these conventions in existing Web search engines, many ASDB users tried using these symbols. Of the zerohit outcomes (N = 3,454), 2,343 searches attempted some type of title search and 3.5 percent used special conventions that were not supported by the ASDB. (See "incorrect Title Syntax" in table 4.) In the improved user interface, both quoted phrases and use of the asterisk are flagged with a message to the user to reformulate the query using approved syntax conventions. This checking has virtually eliminated zero-hits due to the use of these conventions. Keyword Searching Perhaps one of the most confusing parts of the user interface was allowing keyword and phrase searching only in the title field. The rationale behind this decision was the assumption that users would use the controlled vocabulary in lieu of general keyword/phrase searching and some users would want to search the title field only. However, it is clear from a visual inspection of the title phrase searching that users continue to use the title phrase search as a general keyword search box. One simple example illustrating that users are not obtaining the proper results is evident in the search using the phrase "military and alcohol," which returned eighteen results in the initial UI. For the improved user interface, keyword searching has been offered across all fields in the database and the title-specific search option eliminated. If one reruns the search "military and alcohol" in the improved UI, 130 results are returned. It is probably a reasonable assumption that the user in this case did not want those citations in which the terms "military" and "alcohol" appeared only in the titles of the citations. Hence, the change in keyword searching for ASDB has likely improved recall for the great majority of users. Frequency of Use In the ASDB, there are more than 150 controlled vocabulary terms across the three primary areas of physiological aspects, social aspects, and drug terms. Table 5 shows the fifty most frequently selected terms from the 10, 267 searches using the initial user interface. Although the data shown in table 5 are not used quantitatively in this study, the qualitative assess-ment was that use of this highly technical vocabulary could not be abandoned, thereby leaving users with only a freetext searching capability. Many users of the initial and improved user interfaces took advantage of the controlled vocabulary; however, some unexpected results were encountered, as discussed in the following sections. Use of Subject Terms As shown in table 6, data have been collected and summarized for the initial and the improved user interfaces that illustrate the percentage of users who did not use the controlled vocabulary. 
The "No Subjects" row includes the percentage of users who did not use any of the controlled vocabulary from the three main areas of physiological aspects, social aspects, and drug aspects; however, they may have used some of the other special pick-lists or the free-text search. The "Free Text Only" row shows the percentage of users who did not use the subject vocabulary and other special pick-lists such as those for population and format. These queries (23.6% in the initial UI and 39.3% in the improved UI) used only the freetext searching fields. It should be noted that the two rows in table 6 are not mutually exclusive. For example, a free-textonly search would be counted in both rows; hence, summing to greater than 100 percent in a column is possible. It is obvious from table 6 that a significant number of users do not use the controlled vocabulary in either the initial or improved user interfaces. However, the dramatic result is the increased number of users who did not use the controlled vocabulary in the improved user interface. The author suggests that this result stems from two major changes in the user interface as we moved from the format of the initial UI to that of the improved UI. First, general keyword searching was introduced in contrast to only allowing keyword searching in the title field. In all like- lihood, users were more inclined to use the more familiar keyword searching rather than the less familiar method of selecting items from the controlled vocabulary list. Second, it was known that there would be trade-offs in the presentation styles of the two user interfaces. In the initial UI (figure 1), pick-lists were chosen in which the user could only see a small subset of the terms without scrolling. In the improved UI, the user is first presented with major subject areas (figure 2) and then must do a mouse click to see the terms of the controlled vocabulary in a checkbox format ( figure 3). The advantage of this approach is that the user can see all the subject terms whereas she or he could only see a limited subject list (without scrolling) in the initial UI. Thus, the improved UI approach requires more mouse clicks for the user to select the subject terms. Perhaps more important, in the improved UI, users do not see any terms on the first search page and it is suspected that they more naturally gravitated to the use of the obvious keyword searching capability rather than take the time to explore the subject terms available. Summary The improved UI has significantly reduced the number of zero-hits that users incur from 33.6 percent to 27.8 percent. This result is due primarily to the improved error checking for author searching and the checking for special syntax conventions that users might have seen on the Web, but which are not available in the ASDB. However, when one examines the distribution of outcomes with non-zero hits, there are 41.9 percent with greater than 100 hits in the initial UI and 48.0 percent with greater than 100 hits in the improved UI. One phenomenon that is occurring in the improved UI is that users are selecting many more controlled vocabulary terms to OR together, which is resulting in searches with more hits. In all probability, this search behavior stems from users being able to see the complete selection of controlled vocabulary terms in the checkbox format. It is difficult to put a value judgment on these outcomes, although it is unlikely that users are examining results beyond their first 100 hits. 
With the change in subject term display format from a pick-list to checkbox style, users have dramatically reduced the use of the ASDB controlled vocabulary from 33.7 percent not using any of the controlled vocabulary terms to 62.6 percent. This result was clearly unexpected by the author and the CAS librarians and not an altogether desired result. Although there is less usage of the controlled vocabulary, users who use the keyword FIGURE 3 Checkbox for Physiological Aspects searching also are searching all controlled vocabulary terms. Although this approach is yielding many relevant results, we are still struggling with the classic information retrieval problem of the difference between the user vocabulary and the indexer's vocabulary. 10 The other observation here is that user overhead in terms of mouse clicks appears to be more of an issue than originally expected. It appears that initial impressions from the first search page had a very strong impact on user behavior leading them to keywords when they did not see any controlled vocabulary on the first page. Although only one mouse click away, the controlled vocabulary has become more inaccessible for a great many users. A related phenomenon is that of session length between the two user interfaces. In the initial UI, the percent of single query sessions was 48.3 percent whereas this statistic jumped to 60.7 percent in the improved UI. There are possibly two factors that could account for this behavior. First, the reduced number of zero-hit outcomes in the improved UI suggests that more users have received appropriate results in a single-session query. The other factor is that many users may have considered the improved UI more complex than the initial UI and thus did not continue to explore how to use the ASDB effectively. In addition to the major user interface changes made above, it should be noted that the improved UI of the ASDB also includes the ability to e-mail citations, to select a "print-friendly" interface, and to page results in order to limit the size of the html page returned to the user's browser. Steps also are under way to implement the linking to the full text of journals that RUL has licensed. Conclusions For most Web database projects, it is unclear who the user community will be and frequently all the designer can assume is that users are all those "out there" on the Web. With the ASDB, the domain demographics suggest a very diverse user community. It is unlikely that one will have the luxury of understanding the user's search behavior prior to developing the user interface. Therefore, it is very important to capture usage statistics via transaction logs. These logs enable the designer to learn more about the user and to make incremental changes to improve the user interface. Users will frequently make assumptions about the user interface syntax given their experience with Web search engines or other database products. They frequently assume these conventions are standard and universal. In designing user interfaces, the librarian must either support a variety of conventions or provide error checking and feedback to assist the user in learning the syntax of the specific search engine. Rarely will one get the user interface "right" on the first iteration. The designer should plan on making improvements after the transactions logs have been reviewed and there is more information on the types of users and their search behavior. 
With respect to the specific ASDB analysis, a heuristic of zero-hits and greater than 100 hits has been used as an indicator that the user interface can be improved. Although certain of the searches that fall under this general classification are legitimate, this indicator can serve as a useful, low-cost tool for identifying and making user interface improvements. Certainly the reduction of zero-hit outcomes in the ASDB is an improvement. However, it appears that the improved user interface might be more complex for our user community given the decrease in usage of the controlled vocabulary, the increase in single-query sessions, and significantly more outcomes with greater than 1000 hits. This conclusion suggests that there are some possible future improvements that would help users. The researchers suspect that many of the users are casual information searchers who are accustomed to a basic keyword search interface and that the professional information specialists would find the controlled vocabulary most useful. Thus, offering a "basic" and "advanced" user interface is likely to help considerably in meeting the needs of two quite different user populations. However, with the basic keyword search, the researchers are still left with the problem of bridging the user vocabulary and the controlled vocabulary. In these small specialized databases, linking the keyword search terms with the controlled vocabulary is a further improvement that is likely to help considerably and is one area of continuing investigation. Many librarians are entering the information profession with technology skills or are acquiring and using technology skills while on the job. As a result, these librarians will likely be confronted with user interface design issues and the resulting questions of effective information retrieval. Analysis of transaction logs is an excellent method for better understanding user search behavior and also an effective tool for identifying improvement areas in the user interface.
2018-10-24T08:09:02.074Z
2003-05-01T00:00:00.000
{ "year": 2003, "sha1": "76392ace0ff26ffd6bb89c52b4789f1f7b4ed09c", "oa_license": "CCBYNC", "oa_url": "https://crl.acrl.org/index.php/crl/article/download/15598/17044", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "51e8b512e3d74576ad7753c5f95899d77b8668ab", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
232133640
pes2o/s2orc
v3-fos-license
RNA-Sequencing Reveals Differentially Expressed Rice Genes Functionally Associated with Defense against BPH and WBPH in RILs Derived from a Cross between RP2068 and TN1 Background Rice is staple food for over two billion people. Planthoppers like BPH and WBPH occur together in most of rice growing regions across Asia and cause extensive yield loss by feeding and transmission of disease-causing viruses. Chemical control of the pest is expensive and ecologically disastrous; breeding resistant varieties is an acceptable option. But most of such efforts are focused on BPH with an assumption that these varieties will also be effective against WBPH. No critical studies are available to understand rice resistance, common or otherwise, against these two planthoppers. Results Our studies aimed to understand the defense mechanisms in rice line RP2068 against BPH and WBPH through RNA sequencing analysis of a RIL line TR3RR derived from the cross TN1 (susceptible) and RP2068 (resistant) after infestation with BPH or WBPH. Results revealed higher number of differentially expressed genes (DEGs) in BPH infested plants than in WBPH infested plants when compared with the uninfested plants. These DEGs could be grouped into UPUP, DNDN, UPDN and DNUP groups based on whether the DEGs were up (UP) or down (DN) regulated against BPH and WBPH, respectively. Gene ontology analysis, specially of members of the last two groups, revealed differences in plant response to the two planthoppers. Abundance of miRNAs and detection of their target genes also indicated that separate sets of genes were suppressed or induced against BPH and WBPH. These results were validated through the analysis of expression of 27 genes through semi-quantitative and quantitative real-time RT-PCR using a set of five RILs that were genetically identical but with different reaction against the two planthoppers. Coupled with data obtained through pathway analysis involving these 27 genes, expression studies revealed common and differential response of rice RP2068 against BPH and WBPH. Trehalose biosynthesis, proline transport, methylation were key pathways commonly upregulated; glucosinolate biosynthesis, response to oxidative stress, proteolysis, cytokinesis pathways were commonly down regulated; photosynthesis, regulation of transcription, expression and transport of peptides and defense related pathways were exclusively upregulated against WBPH; MYB transcription factor mediated defense induction was exclusive to BPH. Conclusion Rice defense against the two sympatric planthoppers: BPH and WBPH has distinct features in RP2068. Hence, a conscious combination of resistance to these two pests is essential for effective field management. Supplementary Information The online version contains supplementary material available at 10.1186/s12284-021-00470-3. Background Rice (Oryza sativa L.) is one of the most important staple food crops of the world. Significant amount of the rice production is lost annually due to biotic stresses, of which about 25% is attributed to the insect pests (Savary et al. 2000). Planthoppers, such as brown planthopper (BPH) [Nilaparvata lugens (Stål)] and whitebacked planthopper (WBPH) [Sogatella furcifera (Horváth)] have again attained peak pest status in Asia since the beginning of this century (Bentur and Viraktamath 2008;Bottrell and Schoenly 2012). These insect pests inflict losses not only by direct sap sucking from rice but also by acting as vectors of disease-causing plant viruses (Zhou et al. 2008). 
Currently, these pests are being managed by the farmers largely through heavy use of environmentally harmful synthetic insecticides (Heong and Hardy 2009). However, breeding rice varieties resistant to these pests would be an environmentally safe and ecologically acceptable alternative approach to manage these pests (Brar et al. 2009). Studies were initiated during 1960s to identify, characterize and utilize rice land races with resistance to BPH and WBPH (Pathak et al. 1969;IRRI 1979;Heinrichs et al. 1985). So far about 40 major genes and 72 QTLs conferring resistance to BPH and 19 major genes and 75 QTLs conferring resistance to WBPH have been reported from cultivated rice and its wild relatives (Fujita et al. 2013, Ling andWeilin 2016;Du et al. 2020;Haliru et al. 2020). Using some of these genes or combinations thereof, several rice varieties possessing BPH resistance have been developed and released for cultivation since 1980s (Khush and Brar 1991;Brar et al. 2009). With advent of molecular marker technology almost all the BPH R genes have been mapped with linked markers (Du et al. 2020, Haliru et al. 2020. It has been observed that 30 R genes are located in six clusters on four chromosomes i.e. 3, 4, 6 and 12. The cluster on chromosome 12 is reported to be the most dense with 8 genes located within a 5 MB region (Du et al. 2020). So far, 14 of these R genes have been cloned through map-based approach (Zhao et al. 2016;). Through sequencing of R genes from different reported sources, Zhao et al. (2016) reported Bph9, Bph1, bph7, Bph10 and Bph21 to be different alleles of the same gene. Characterization of these cloned R genes suggested 10 of the genes to belong to NBS-LRR family of resistance genes while two (Bph3 and Bph15) represented lectin receptor kinases; bph29 has a B3 DNA binding domain and Bph32 is novel with an unknown SCR domain (Du et al. 2020). Thus, a fair degree of diversity, in terms of functional domains, is represented by the different BPH R genes and consequently in their mode of action. This information coupled with availability of linked molecular markers for many BPH R genes (Hu et al. 2016) has led to a spurt in molecular breeding aimed at pyramiding BPH R genes to provide durable resistance Wang et al. 2015;Fan et al. 2017;Wang et al. 2018;Han et al. 2018;Jiang et al. 2018). In contrast, less intense work is reported on WBPH resistance genes. Of the 12 major WBPH resistance genes reported, two being introgressed from the wild rice -Oryza officinalis, nine gene have been mapped with linked markers (Ramesh et al. 2014;Du et al. 2020). However, none of the genes has been cloned and characterized. BPH and WBPH are sympatric species occurring together in almost all rice ecologies (Horgan et al. 2020). Both are phloem feeders and transmit rice viruses. While BPH is reported to transmit rice ragged stunt virus and grassy stunt virus, WBPH has been reported to transmit southern rice black streaked dwarf virus (Pu et al. 2012). Both are capable of long-distance migration (Otuka et al. 2008). Phenologically, WBPH colonizes the crop early during vegetative stage while BPH appears at late active tillering stage. Interspecific competition generally results in dominance of BPH as the crop grows. While BPH is monophagous confined to rice (Oryza), having shifted its host from Leersia about 0.25 million years ago (Jones et al. 1996;Sezer and Butlin 1998), WBPH is oligophagous capable of feeding and surviving on plants from several genera of the family Poaceae (Zhou et al. 2013). 
Genomes of both insects have been sequenced (Xue et al. 2014;Wang et al. 2017) and of the two, BPH is reported to possess a larger genome (1141 MB) and the size of the WBPH genome has been estimated to be 720 MB. Despite apparent similarities between these two planthoppers, host-plant resistance is not common across these. It has been reported that BPH R genes reported thus far are not effective against WBPH, though Bph3 and Bph6 (Guo et al. 2018) have been claimed to be effective against WBPH also. Further, several land races and breeding lines have been reported to be resistant to both BPH and WBPH (Heinrichs et al. 1985;Bentur et al. 2011). Genetic analyses of some of these land races and breeding lines e.g. Sinna Sivappu (Ramesh et al. 2014) and ADR52 (Srinivasan et al. 2015) indicate that loci conferring resistance to BPH in these donor parents are different from those responsible for WBPH resistance. From the point of view of pest management, cultivating rice varieties resistant to BPH alone may prove disastrous as WBPH, in the absence of BPH, could fill in the vacuum and pose a more severe threat than what it would in the concomitant presence of BPH (Bottrell and Schoenly 2012). Therefore, from a scientific standpoint, it is important to ascertain how defense pathways that operate against BPH are different from those operating against WBPH. Omics tools have been successfully employed to investigate molecular pathways triggered or suppressed in rice genotypes when challenged by BPH. Zhang et al. (2004) studied expression profiles of a limited number of defense genes in a susceptible (MH63) and a resistant (B5) rice lines following BPH infestation and observed that genes related to signaling pathways, oxidative stress/apoptosis, wound-response, drought-inducible and pathogen-related proteins were up-regulated in the resistant genotype. Similarly, employing suppressive subtraction hybridization (SSH) tools, Yuan et al. (2005) identified 25 differentially accumulated cDNAs of genes representing several functional categories such as cell wall proteins, protein folding and degradation, proteinprotein interactions and/or signal transduction, lipid metabolism, stress response, and transport facilitation in a BPH susceptible cultivar Minghui63 at 32 h after infestation. Again, using an SSH approach, Wang et al. (2005), also identified 21 differentially expressed genes related to wound and stress tolerance in B5. Wang et al. (2012) employed microarray analysis to demonstrate differential expression of transcription factor (TF) genes in the resistant rice landrace Rathu Heenati and the susceptible TN1 at 24 h upon infestation with BPH. They identified 13 TF genes induced in the resistant cultivar. Li et al. (2017) also studied BPH resistance in Rathu Heenati through microarray analysis and concluded that salicylic acid plays a key role in the resistance process. Recent studies have focused on the role of miRNAs in conferring BPH resistance in rice (Wu et al. 2017;Ge et al. 2018;Dai et al. 2019). Using an approach combining both microRNA and transcriptome analyses Tan et al. (2020) identified 29 key miRNAs and 20 candidate genes regulating Bph6-mediated resistance in a transgenic rice line. Wei et al. (2009) adopted a proteomics approach to understand BPH resistance in a rice line with Bph15 gene and concluded that it has a different defense mechanism which involves Gns5 and the glycine cleavage system H-protein. Kang et al. 
(2019) studied the metabolite profiles in three rice varieties TN1, IR36 and IR56 following feeding by BPH to get an understanding of the metabolic mechanism of rice resistance. The only report, thus far, on global expression profiling of resistance pathway genes in rice against WBPH identified four key genes, located on chromosome 6, to be involved in ovicidal response of the rice variety CJ06 to WBPH infestation (Yang et al. 2014). Li et al. (2020) made a comparative transcriptome analysis of defense response of rice to BPH and striped stem borer infestation. However, to the best of our knowledge, no study has been reported that aims to understand differential resistance mechanism in genetically related rice genotypes against BPH and WBPH through expression profiling. In this study, we used F 14 RILs derived from a cross between the rice breeding line RP2068-18-3-5 (RP2068; resistant to both BPH and WBPH), and TN1 (susceptible to both the planthoppers). Using an RNA-seq approach, specific transcripts displaying differential induction were identified in the RILs and subsequently, gene expression of 27 selected target genes was validated to understand and obtain useful insights into the molecular process of rice resistance against two of its major pests i.e. BPH and WBPH. The results of the current study indicated that rice defense against WBPH is of lower order with emphasis on tolerance as against BPH where the emphasis is on antibiosis. Performance of RILs against BPH and WBPH All the 180 RILs were subjected to four phenotypic tests against BPH and WBPH, separately. In each of the tests, RILs could be grouped into RR with resistance to both the planthoppers; RS with resistance to BPH only; SR with resistance to WBPH only and SS with no resistance against both the planthoppers (Table 1). Thus, resistance in RP2068 against BPH was independent of resistance against WBPH. Under standard seedbox screening test (SSST) nine RILs displayed RR reaction; 16 were RS; 28 were SR and the remaining were SS. While resistance to BPH among RILs segregated in 25R:155S suggesting possible involvement of three genes (χ 2 = 0.343; P = 0.558), resistance to WBPH among RILs segregated in 37R:143S suggesting two gene involvement (χ 2 = 1.896; P = 0.168). Other phenotypic tests such as nymphal survival (antibiosis component of resistance), days to wilt (tolerance component) and nymphal preference (antixenosis component) also revealed similar trend of segregation of resistance among RILs against BPH and WBPH. Based on the overall performance ( Fig. 1) and in all the four phenotypic tests (Fig. 2), five RILs: TR3RR, TR94RR, TR145RS, TR152SR and TR24SS were selected for the further studies. These selected RILs had a genetic similarity ranging from 44 to 84% based on screening results involving 137 molecular markers polymorphic between the parents (Supplementary Table S1). Further, based on marker polymorphism in respect of the two linked markers RM488 and RM11522 (Naik et al. 2018), four of the five RILs: TR3RR, TR94RR, TR145RS and TR152SR are likely to carry Bph33 gene. RNA-Sequencing RNA-seq data generated from the RIL TR3RR, an individual line representing F 14 generation of a mapping population derived from a cross between TN1 and RP2068, produced 272 million raw reads from nine samples (Supplementary Table S2). Between 22 and 41 million raw reads were generated from individual samples. High quality clean reads of leaf sheath tissue samples were mapped to the reference genome of O. 
sativa indica (Oryza sativa indica; ASM465 build genome downloaded from the Ensembl Plants database) and the gene expression level was estimated. The variability among the biological replicates indicated R 2 value to be 0.759 for BPH, 0.785 for WBPH and when both BPH and WBPH data were pooled R 2 was 0.345 (Supplementary Fig. S1 A, B, C), indicating slight variability within the biological replicates for BPH and WBPH and moderate variability among the pooled data. A higher number of differentially expressed transcripts/genes (DEGs) was observed in the BPH infested plant tissue in comparison with the uninfested plant tissue than in the WBPH infested plant tissue in comparison with the uninfested plant tissue (BPH = 9361 vs WBPH = 8498). Among these, 3821 transcripts showed upregulation while 5540 showed downregulation against BPH in comparison with uninfested control tissue samples (Supplementary Data File No. 1). In the case of WBPH, 4096 transcripts showed upregulation and 4402 transcripts showed down regulation in comparison with uninfested control tissue samples (Supplementary Data File No. 2). These data when taken together (Fig. 3), a total of 1560 DEGs were found to be common among Fig. 1 Performance of RILs derived from a cross between TN1 and RP2068-18-3-5 against BPH and WBPH in standard seedbox screening test in the greenhouse. RILs in the trays from L to R are TN1(S parent), TR24SS, TR152SR, TR173SR, TR94RR, RP2068 (R parent), TR3RR, TR145RS, TR21RS, TR7SS, TN1 (S parent). Left tray was exposed to BPH while right tray was exposed to WBPH (Table 2) which, subsequently, formed the basis for selection of genes for validation study using qRT-PCR. Significantly, several of the transcripts represented the smaller subunit of rRNA of endophytic bacterial flora. More studies are planned to note role of endophytic bacterial flora in planthopper-rice interactions. Gene Ontology-Based Functional Annotation Gene Ontology (GO) enrichment was performed for all the DEGs in leaf sheath tissues of TR3RR plants when challenged by either BPH or WBPH. The DEGs were first grouped into UPUP, DNDN, UPDN and DNUP categories based on their expression (whether up or down regulated) against infestation by BPH or WBPH. The DEGs were further categorized into three major categories viz. biological process (BP), cellular component (CC) and molecular function (MF). However, several of the DEGs were annotated in two or all the three categories. Among the DEGs classified under biological process, 16 clusters were represented in both UPUP and DNDN groups; 6 clusters were exclusive to UPUP while 14 clusters were represented only in DNDN group ( Fig. 4a, b, c). While 12 clusters belonged to the UPDN group, only three were in DNUP group (Fig. 4d, e). There were a higher number of transcripts commonly down regulated compared with the number of commonly upregulated transcript in several of the clusters except under the broad biological process cluster category. Though 12 clusters belonging to the UPDN group related to cell cycle, regulation of cell cycle related biological processes category, these transcripts were observed to be of low abundance. The three clusters under DNUP group were related to photosynthesis and primary metabolism. Under cellular component, 14 clusters of transcripts were common under UPUP and DNDN group (Fig. 5a). Five of these were exclusively upregulated against both the planthoppers, while six of clusters were exclusively down-regulated (Fig. 5b, c). 
Nine of the clusters categorized under the UPDN group were cell divisionassociated while five of the clusters under DNUP were associated with chloroplast and photosystem II (Fig. 5d, e). Under the molecular function category also (Fig. 6), more number of the transcripts were down regulated in 11 of the 17 of the clusters represented in both UPUP and DNDN groups (Fig. 6a). Of this category, 10 and 4 clusters were exclusive to UPUP and DNDN groups, respectively ( Fig. 6b, c). Six clusters were in UPDN and three in DNUP groups. Genes in clusters relating to peroxidase, peptidase and transporter activity were down regulated against both BPH and WBPH. In contrast, genes pertaining to DNA helicase, ATPase, serine threonine kinase, and microtubular motor activities were upregulated against BPH but down-regulated against WBPH, and structural constituents of ribosome, calcium ion binding and electron carrier activity related genes were down-regulated against BPH but upregulated against WBPH. Structural constituents of ribosomes mostly represented the SSU-rRNA of endophytic bacterial genomes. As observed for the other two GO categories, unique transcripts associated with either of the planthopper's challenge were low in abundance. Identification of miRNA Profiles To identify the miRNAs, present in the RNA-seq data, all the transcript sequences in the size range of 20 to 24 nt were blasted against the miRbase. In all, 180,887 transcripts could be aligned with known rice miRNAs representing 713 families. Based on relative abundance of miRNA transcripts in plants subjected to infestation with either BPH or WBPH, 27 miRNAs were shortlisted. These have been recognized to have 121 target genes. Of these, expression of 37 genes was observed to be modulated in the plants following planthopper Fig. 4 Abundance of transcripts of pathways under biological processes in differential expression groups in TR3RR plants infested with either BPH or WBPH. UPUPupregulated against both BPH and WBPH; DNDNdown regulated against both BPH and WBPH; UPDN-upregulated against BPH but down regulated against WBPH; DNUPdown regulated against BPH but upregulated against WBPH feeding and these were identified to be modulated by 14 miRNAs (Table 3). Of the 37 genes targeted by 14 miR-NAs (Supplementary Table S6), five genes were observed to have ≥2-fold higher expression against BPH with no change in expression level against WBPH. Two of these genes are associated with regulation of DNA replication and proteasome assembly. One of the genes (BGIOSGA006222 -LOC_Os02g35900 -thioredoxin, putative) displayed > 2-fold down-regulation against BPH with no change against WBPH. On the other hand, four of the genes had ≥2-fold increased expression against WBPH with no change against BPH. These genes were identified as thylakoid formation1-chloroplast precursor, CBS domain containing protein; phosphoglycerate mutase and a gene of unknown function, BGIOSGA023232. In addition, of the remaining two genes (BGIOSGA027084 and BGIOSGA011796), one is proposed to be a patatin, recorded ≥2-fold downregulation against WBPH with no change against BPH. Thus, it is evident from the current results that miRNAs are targeting different sets of genes depending on whether the plants were challenged by BPH or WBPH. In all, 27 DEGs were validated in five of the RILs following infestation with either BPH or WBPH (Table 2). In the first set, 20 genes were analyzed through semi-quantitative RT-PCR (Fig. 7). 
In the second set, 14 genes including seven covered in the first set were validated through real time qRT-PCR in these genotypes (Fig. 8). Five of the RILs selected for the study included TR3RR, TR94RR, TR145RS, TR152SR and TR24SS based on their performance against the two planthoppers (Fig. 2). Expression of Of the 20 genes of the first set, 8 genes were selected from the UPUP group of DEGs (Fig. 7, panels 1 to 8); five genes each were from DNDN (Fig. 7, panels 9 to 13) and DNUP groups (Fig. 7, panels 14 to 18); and two genes from the UPDN group (Fig. 7, panels 19 & 20). The real-time PCR results were generally in agreement with RNA-seq data. Of the 8 genes in UPUP group, five genes (B3 DNA binding domain containing protein, HSP DnaJ protein-putative, aminotransferase domain containing protein, emp24/gp25L/p24 family protein and CutA, chloroplast precursor-putative) displayed distinct upregulation (≥ 2 fold) in both TR3RR and TR94RR against both BPH and WBPH; against BPH in TR145RS and against WBPH in TR152SR at one or both the time points (Fig. 7, panels 1 to 5). A similar trend was not distinctly observed for the remaining three genes (two undefined protein expressing genes on chromosomes 6 & 9 of rice and a plant-specific domain TIGR01589 containing protein coding gene), though upregulation was noted in TR3RR line against both BPH and WBPH. All the five genes from the DNDN group displayed either down regulation or low levels (< 2-fold) of induction in four of the RILs against both BPH and WBPH. Interestingly, two of the genes (abscisic stress and ripening, and glutathione S-transferase registered significant induction (≥2-fold) in the susceptible TR24SS in 3 of 4 instances against both BPH and WBPH suggesting their possible role in susceptibility against the planthoppers. Five genes representing the DNUP group showed upregulation (≥ 2-fold) only against WBPH in three of the resistant test RILs (TR3RR, TR94RR and TR152SR), but not in the susceptible TR145RS and TR24SS. These genes displayed either down regulation or mild induction (< 2-fold) in all the RILs when infested with BPH. Only two genes (growth regulating factor protein and centromere protein coding genes) displayed threshold level of upregulation only against BPH in three of the resistant RILs (TR3RR, TR94RR and TR145RS). Thus, it was evident that despite 13 of the genes showing common response against BPH and WBPH with either induction or suppression, seven of the genes displayed differential response against BPH as versus WBPH. Fourteen of the DEGs, as revealed by the RNA-seq analysis, were re-validated through quantitative real time RT-PCR. Three genes from UPUP group were reanalysed. Two of the genes (B3 DNA binding domain and heat shock protein DnaJ) showed similar expression as that observed in the first set (Fig. 8, panel 1 & 2). The third gene, aminotransferase domain containing protein did not show induction in one of the RILs, TR145RS, against BPH (Fig. 8, panel 3). Thus, these three genes represented common resistance pathways in rice RP2068 against both BPH and WBPH. The next set of six genes represented DNDN response group against the planthoppers. Three of these on re-analysis displayed identical response pattern as that observed in the first analysis using RT-PCR with either downregulation or poor induction (< 2-fold) against both the planthoppers (Fig. 8, panels 4 to 6). Unlike in the first set, glutathione S-transferase did not show induction in the susceptible TR24SS. 
But peroxidase precursor gene displayed some instances of induction in this line as also observed in the first set of RT-PCR analyses. The other three genes of the group (cytochrome P450-putative, Os9bglu30 -betaglucosidase and SCP-like extracellular protein) also displayed either down-regulation or poor induction in all the test RILs with some instances of induction in TR24SS (Fig. 8, panels 7 to 9). Response of one of the two isoflavone reductase genes (LOC_Os01g01650.1) was re-tested in DNUP group and the results obtained were similar to those observed in the first set. Two more genes tested under this group (OsSAUR19 -Auxinresponsive SAUR gene family member-expressed and MYB-like DNA-binding domain containing protein -putative) showed distinct induction against WBPH but low-level induction (< 2-fold) or down-regulation against BPH in the resistant RILs (Fig. 8, panels 10-12). Significantly, these two genes were also noted to be induced under compatible interaction between the susceptible TR142RS and WBPH suggesting induction to be not directly related to resistance among the lines tested. Only one of the genes tested (MYB family transcription factor-putative) displayed distinct up-regulation under incompatible interactions between resistant RILs (TR3RR, TR94RR and TR145RS) and BPH but not against WBPH. Overall results of RT-PCR further highlighted differential resistance mechanism in RP2068 against BPH and WBPH. Further, pathway analysis of these key 27 genes under four groups revealed major processes involving two or more of them ( Supplementary Fig. S2). Trehalose biosynthetic pathway and oxidation-reduction pathway were significantly upregulated in plants under attack by either of the planthoppers and these were mediated by three of the key genes. Proline transport, methylation and protein processing and transport process were enhanced by two of the key genes. In contrast, pathways responsible for unsaturated fatty acid and glucosinolate biosynthetic process, proteolysis, cytokinesis by cell plate formation, response to oxidative stress and induced systemic resistance were down-regulated through the involvement of expression of two to four of the eight key genes in DNDN group. Significantly, pathways related to defense against bacteria and fungi, response to UV, oxidation-reduction, photosynthesis, regulation of transcription, oligo-peptide transport were the principal processes up-regulated by two to four of the eight key genes (DNUP group) in plants following infestation with WBPH, but not BPH. One of the two isoflavone reductase genes (i.e. LOC_Os01g01650.1) was part of the pathway responsible for photosynthesis and expression of proteins. Only the MYB transcription factor family gene (LOC_Os12g13570.1) was involved exclusively in defense against BPH, but not against WBPH. Thus, it is obvious from the results presented here that RP2068derived plants respond differently against the challenge by BPH and WBPH. Discussion The insect pest complex of rice has representations from several feeding guilds, and these include defoliators, tissue borers, gall formers, phloem feeders and several others. Each of these categories is represented by multiple species which are largely sympatric, occurring together at the same time and place, with exception being the Asian and African gall midges (Orseolia oryzae; O. oryzivora); rice green leafhopper and green rice leafhoppers (Nephottetix virescens; N. cincticeps) which are parapatric, with discrete geographic distribution. 
Among the sympatric species, members differ in their host range adaptation. Monophagous yellow stem borer (Scirpophaga incertulas) coexists with oligophagous striped stem borer (Chilo suppressalis) and polyphagous pink stem borer (Sesamia inferens). Monophagous BPH shares space with oligophagous WBPH and polyphagous small brown planthopper-(Laodelphax striatellus; https://www.plantwise.org/KnowledgeBank). Many studies have been conducted to understand interspecific interactions that results in mutual survival of the sympatric species (Cheng et al. 2001;Horgan et al. 2020). However, focused studies on how rice plant deploys different defense strategies against these sympatric planthopper species are not available. Information gathered through such studies will have valuable implications in pest management. There have been several studies on understanding morphological, anatomical, physiological, biochemical and molecular basis of insect resistance in rice (see Du et al. 2020). One approach for these studies is based on forward genetics -with genetic characterization of resistance (R) genes, map-based cloning and understanding the function of these cloned genes. The other approach is reverse genetics-with analysis of genome wide gene expression of a set of resistant and susceptible varieties with appropriate controls and then observing and following the response of key genes to insect infestation. However, these two approaches are yet to converge and provide a vivid and comprehensive understanding of the diverse facets of insect resistance. At the outset, it is clear now that resistance in plants against chewing and tissue feeders involves JA (jasmonic acid)-mediated molecular pathways as compared to SA (salicylic acid)-mediated resistance that is deployed against sap sucking and gall forming pests (Baldwin and Preston 1999;Howe and Jander 2008;Wu and Baldwin 2010;Bentur et al. 2016). In the current study, omics tools have been employed to investigate molecular pathways triggered or suppressed in rice genotypes when challenged by BPH and WBPH to identify differences and commonality in defense pathways in rice against these planthoppers. Rice line RP2068-18-3-5 (RP2068), derived from the cross between an elite cultivar Swarnadhan and the land race Velluthachera, is resistant to gall midge, BPH and WBPH like the parent land race (Bentur et al. 2011). A mapping population consisting of advanced generation RILs from TN1 X RP2068 cross has been used to tag and map the gall midge resistance gene gm3 (Sama et al. 2014) and a BPH resistance gene Bph33 (Naik et al. 2018). Evaluation of this mapping population against both BPH and WBPH indicated that resistance to the two planthoppers is independent of each other (Table 1) and resistance level against WBPH is slightly lower than that against BPH (Fig. 1). Based on the performance in four phenotypic tests, five RILs with varying degrees of resistance against BPH and WBPH (Fig. 2) but with a high level of genetic similarity (Supplementary Table S1) were selected for further studies. RNA-seq technique was adopted to follow differentially expressed genes (DEGs) in one of the selected RILs TR3RR following infestation with either BPH or WBPH with RNA samples pooled for two time points 6 and 12 hai. Analysis of data revealed larger number of DEGs to be down-regulated as compared to the number of DEGs upregulated (Fig. 3). A similar trend was noted in BPH susceptible WT (Nipponbare) plants during early stage of BPH feeding (Tan et al. 2020). 
But in a transgenic line of the same cultivar with Bph6 gene (BPH6G) this trend was reversed. This difference is attributed to switching off of primary metabolism related activities in the plant to switch on defense related secondary metabolism and energy partitioning between the two (Guo et al. 2018). Significantly, higher number of DEGs were down regulated (5540) during interaction with BPH than with WBPH (4402) in TR3RR plant suggesting lower metabolic stress load. Further, Gene Ontology (GO) analysis of the DEGs pooled under four groups (UPUP, DNDN, UPDN, DNUP) also showed a predominance of down-regulated transcripts (Figs. 4a, 5a, 6a). Significantly, a small number of GO clusters represented differential response of the plant to the two planthopper infestations. While cell cycle, DNA repair related clusters were upregulated only against BPH, photosynthesis, protein synthesis and transport related genes were up-regulated only against WBPH under three GO groups (Figs. 4, 5). Thus, it appears that the plant is responding to BPH infestation at a higher order of resistance reaction where it is attempting to shut down several of the metabolic pathways compared with the situation when it is infested with WBPH where far fewer genes are down-regulated (Fig. 3) and in a manner more akin to a compensatory or tolerance mode of resistance reaction. The role of micro RNA (miRNA) in modulation of BPH resistance in rice has become the focus of recent studies (Dai et al. 2019;Tan et al. 2020). Negative control of a growth regulation factor gene (OsGRF8) by OsmiR396 resulting in reduced expression of a key gene, OsF3H, in the flavonoid pathway led to BPH susceptibility in rice ZH11 (Dai et al. 2019). In the current study, two of the five DEGs selectively upregulated in TR3RR against BPH, modulated respectively by OsmiR399k and OsmiR399g, were involved in DNA regulation and replication, and in transport and proteasome assembly. Of the four target genes selectively upregulated against WBPH two were modulated by OsmiR5791. The CBS domain containing gene Os09g26190 is reported to be associated with zinc deficiency tolerance and is a member of the network of genes related to photosynthesis, chlorophyll biosynthesis and proteins of thylakoid membrane (Lee et al. 2018). Another related gene Os07g37250 was modulated by OsmiR413. Thus, emphasis here against WBPH is on enhancement in photosynthesis to compensate for the loss through insect feeding. The only gene selectively down-regulated against BPH and modulated by OsmiR7692-5p was coding a thioredoxin, putative, representing a major protein in phloem sap (Ishiwatari et al. 1995) on which planthoppers feed. Similarly, two uncharacterized genes were selectively down-regulated against WBPH and modulated by OsmiR6345 and OsmiR413. One of these is suggested to code for Patatin, a member of phosphor lipase A family and considered inhibitory for insect development and growth (Strickland et al. 1995). These three uniquely down-regulated genes suggest their negative role in expression of plant resistance; enhancing antibiosis effects against BPH through lowering levels of thioredoxin in phloem sap; and suppressing such effects against WBPH through lowering levels of Patatin. Importantly, the diversity in resistance mechanisms against the two planthoppers, is obvious as evidenced by the likely different roles of miRNAs play in gene regulation during the interaction of rice and these pests. 
Based on the validation (using qRT-PCR) of expression of 27 shortlisted genes under four groups (UPUP, DNDN, UPDN, DNUP) three genes were identified that were upregulated against both the hoppers ( . Resistance conferred by this gene has been shown to arise due to loss of function. In contrast, LOC_Os03g06850 was significantly upregulated in the resistant RILs against both BPH and WBPH. Heat shock proteins (HSPs) are a family of proteins produced in response to stress, some of them like DnaJ act as chaperons. HSP genes have been often implicated in BPH resistance in rice (Wang et al. 2005;Wei et al. 2009;Naik et al. 2018). In the present study another HSP DnaJ gene responded against both the planthoppers. Other commonly upregulated genes included aminotransferase domain, emp24/gp25L/p24 protein associated with Golgi function; CUTA protein associated with chloroplast; plant specific TIGR01589 domain protein and two undefined expressed proteins. All these genes have not been earlier reported to be involved in plant defense against insects or pathogens. Among the members of the DNDN group, two genes cytochrome P450-putative and Peroxidase precursor displayed unique expression profiles in that both were down-regulated in resistant RILs while these were upregulated in susceptible RILs against both the planthoppers. The rice genome is rich in members of the cytochrome P450 super family (of 326 genes) with diverse regulatory role , and some of which are implicated in BPH resistance in rice (Wei et al. 2009;Tan et al. 2020). An increased activity of P450 74A2 gene was noted in both compatible and incompatible interactions of resistant and susceptible NILs with BPH15 gene (Wei et al. 2009). Of the two cytochrome P450 genes, one was found to be induced during early BPH feeding in transgenic line BPH6G while the other was repressed (Tan et al. 2020). Three genes representing peroxidase 12 precursor and two of peroxidase 2 precursor were found up-regulated during both compatible and incompatible interactions (Wei et al. 2009). But the magnitude of induction was more in compatible interaction. Peroxidase precursor gene (Os01g73200) noted in the present study was found to be induced during compatible interaction (Fig. 8). Four of the five POX genes were induced in susceptible lines against BPH (Wei et al. 2009). These genes are involved in Jasmonic Acid (JA) biosynthesis. Induction of JA pathway genes during compatible interaction by the planthoppers may be a counter defense strategy to suppress salicylic acid (SA) pathway defense as reported for the pathogenic fungi (Okada et al. 2015) and bacteria (Nomura et al. 2005). Glutathione S-transferase is another large family in rice with 59 genes that have detoxifying function against xenobiotics (Soranzo et al. 2004). Downregulation of two members of this gene family was noted in rice lines with or without Bph15 gene during both compatible and incompatible interactions (Wei et al. 2009). Suppression of this gene is likely to represent another counter defense strategy employed by the planthoppers. Wang et al. (2005) noted up-regulation of a member of 21-gene family of beta glucosidases during resistance reaction in rice lines B5 (with Bph14 and Bph15genes) and they speculated that this gene may be involved in functions other than volatile emission. MAP 3kinase gene family, of which STE_MEKK_ste11_ MAP3K.7 -STE kinase is a member, is reported to be involved in rice resistance to BPH with down-regulation of one of the members being reported by Wang et al. 
(2005). Other genes in this DNDN category i.e. multicopper oxidase domain is reported to be involved in conferring blast resistance in silicon-amended rice (Brunings et al. 2009) while SCP like extra cellular protein is not reported to be involved in rice defense against biotic stresses. Abscisic acid stress and ripening (ASR) gene is a member of the transcription factor family which altered expression of OsASR2, another member of the family, and influenced rice resistance to bacterial blight and sheath blight . Significantly, genes belonging to the groups UPDN and DNUP revealed differential mechanisms. Two isoflavone reductase-like (IRL) genes on chromosome 1 and 10 were dramatically induced in resistant RILs only against WBPH and not against BPH. IRLs of rice are not true isoflavone reductases like those of legumes and do not produce natural isoflavonoid products by acting on the substrate 2-hydroxy isoflavonoids (Kim et al. 2003). Nonetheless, IRLs are developmentally regulated or induced by biotic or abiotic stresses such as rice blast. Auxin-responsive SAURs, a 58-member RNA gene family in rice (Jain et al. 2006), plays a role in auxin synthesis and transport (Xu et al. 2017). The observation that high level of induction of OsSAUR19 in plants against WBPH, but not against BPH, may suggest suppression of auxin pathways in the present study against BPH infestation. C3HC4-type zinc finger proteins represent one of the largest groups of transcription factors in plants involved in stress responses. A member of Dof zinc finger family of transcription factors was differentially downregulated following BPH infestation (Yuan et al. 2005;Wang et al. 2005) while another member -zinc finger protein gene was upregulated (Li et al. 2016). In the present study, LOC_Os06g23274 coding for Zinc finger, C3HC4, domain protein was differentially expressed displaying its selective role against WBPH. The other two genes in this group: proton-dependent oligopeptide transporter and CRAL/TRIO domain protein, besides an undefined expressed protein, have not been earlier reported to be involved in plant defense against insects or pathogens. Interestingly, two of the MYB transcription factor representing genes showed reciprocal response to planthopper infestation. While Os12g13570.1 was upregulated against BPH and down-regulated against WBPH, Os11g03440.1 with MYB-like DNA-binding domain recorded down-regulation against BPH and upregulation against WBPH. The role of various transcription factors in expression of BPH resistance in the rice variety Rathu Heenati has been studied (Wang et al. 2012). In this study, most members of MYB family were observed to be down regulated after BPH infestation, suggesting these to be related to reduced photosynthesis rate, stomatal conductance and transpiration rate. Another two genes coding for a growth regulatory factor and a centromere protein were also observed to be upregulated against BPH and these were down-regulated against WBPH. Dai et al. (2019) showed the link between miR396 and a growth regulating factor gene OsGRF8 and a gene, OsF3H, involved in the flavonoid biosynthetic pathway, to conclude that miR396 has negative control of BPH resistance in the susceptible genotype. Likewise, the growth regulatory factor gene, LOC_Os03g47140, in the present study is being targeted by miR396f (Wen et al. 2016) or miR396c-3p (Tan et al. 2020). However, transcripts of all the members of this family of miR396 were more abundant in BPHchallenged plants than in WBPH-challenged plants. 
The centromere protein gene has not been reported associated with biotic stresses in plants as has been observed in this study. Pathway analysis with the 27 key genes ( Supplementary Fig. S2) further supported the facts that the response of RP2068 against WBPH was more compensatory or tolerance in nature with increased photosynthesis and protein synthesis and transport in the affected tissue while its response against BPH involved the induction of several active defense pathways targeting antibiosis. Conclusion RNA-seq data generated from infested and control tissues of TR3RRa RIL derived from a cross between susceptible TN1 rice and resistant RP2068 challenged with BPH or WBPHhelped to identify the pathway genes involved in resistance. Our results revealed that a larger number of DEGs were down-regulated, in comparison to up-regulated DEGs, in plants following the planthopper infestation. Identification of unique clusters of GO groups responding exclusively to one of the hoppers suggested diversity in defense strategies adopted by the plant against two different planthoppers from the same feeding guild. Further, functional validation of the selected 27 genes showed unique role of genes such as IRL, a growth regulating factor (GRF) and two members of MYB transcription factor family against one of the planthoppers. This, to the best of our knowledge, is the first study to demonstrate the selective expression of rice host genes upon attack from two major insect pests of rice using genetically similar host material. Such studies are not only important to dissect the plant responses to different insect pests but information derived from such studies are urgently required to consciously combine relevant resistance strategies against both the planthoppers for their effective management. Insects Adults of both BPH and WBPH were collected from farmers' field in Nalgonda district of Telangana state, India, during 2014-2015 and separate colonies were established on TN1 rice in greenhouses at ABF, Hyderabad. Care was taken to prevent population admixture. Nymphs or adults, arising from these populations reared in the greenhouse, of specified age/stage were used for the experiments. Screening and Selection of Recombinant Inbred Lines (RILs) A previously developed mapping population (F 14 ; 180 lines) (Sama et al. 2014) derived from a cross between rice varieties TN1 and RP2068 (TR) was used in the current study. All the 180 RILs were subjected to standard seedbox screening test (SSST), nymphal survival (NS), nymphal preference (NP) and days to wilt (DW) separately against BPH and WBPH. Standard Seedbox Screening Test (SSST) Degree of resistance, in terms of damage score, of parents and F 14 RILs was measured in standard seedbox screening test (SSST). In this method the parents and test lines were infested with 2nd instar BPH/WBPH nymphs, on an average of 8-10 nymphs per seedling 10-12 days after sowing (Naik et al. 2018). The test lines were arranged in a randomized complete block design (RCBD) and replicated three times. Susceptible TN1 was sown in two rows at the edge of box on both the sides while the resistant check (PTB33 for BPH and MO1 for WBPH) was sown in one row in the center. These plants were observed for damage and scored as per the standard evaluation system (SES) for rice (IRRI 2013) on a scale 0 to 9 when 90% of TN1 on both the rows were dead in about 8 to 12 days after insect release. 
Each test entry was scored by recording damage to each seedling; subsequently, the score for each replication was averaged and then the grand mean of the three replications was derived for the entry. Entries with damage score 1 to 3 against BPH and with score 1 to 4 against WBPH for rice were considered as resistant (as per SES) while those with score ≥ 8.0 were treated as susceptible for this study (Fig. 1). Nymphal Survival (NS) Nymphal survival was recorded on 30-day-old potted test plants of the RILs along with the resistant RP2068 and the susceptible TN1 parents, and the resistant checks PTB33 (BPH) or MO1 (WBPH). The plants were raised in 500 ml plastic pots with puddled soil from rice field which were randomized (RCBD) and then infested with ten 1st or 2nd instar nymphs per plant covered with a mylar film tube cage. Survival of the insects on the plants was observed daily and number of surviving nymphs was recorded until all surviving nymphs metamorphosed into adults. Three replications were maintained. Nymphal survival was expressed as percentage and means. Entries with ≤60% survival against BPH and WBPH were considered as resistant while those entries with > 60% survival were described as susceptible. For statistical comparison of means, values were transformed into arc-sine values (Gomez and Gomez 1984). Days to Wilt (DW) The tolerance component of resistance was studied using days to wilt test (Geethanjali et al. 2009). Briefly, the test plants along with the parents and the resistant checks were grown singly in 500 ml plastic pots. When plants were 30 days old, pots were randomized and covered with mylar tube cages and infested with 50 1st or 2nd instar nymphs of BPH/WBPH. Plants were observed daily. The day on which plant wilted completely was recorded. One pot represented a replication and the test was replicated three times. Entries with plants surviving more than 10 days were considered as resistant and other entries were treated as susceptible. Nymphal Preferences Antixenosis or non-preference (of nymphs for settling on seedlings) type of resistance mechanism was assessed while conducting the standard seed box screening test (Heinrichs et al. 1985). Planting of seedlings was conducted in a similar way describe in SSST above. After 10-12 days of sowing, second instar nymphs of BPH/ WBPH were released on the seedlings on an average of 8-10 nymphs per seedling. Each seed box was covered with a mylar film cage and was treated as a replication. Number of nymphs on each seedling was counted 24 and 48 h after infestation. Percentage of insects settled on each of the test entry was computed based on total number of insects noted for each replication (box). Entries with ≤10% of nymphs settling on plants were considered as resistant and those with > 10% of nymphs as susceptible. For statistical comparison of means, values were transformed into arc-sine values (Gomez and Gomez 1984). One-way ANOVA was performed on data of each phenotypic test and means were separated by HSD following Tukey and Kramer method (Tukey 1953) on MS Excel (https://www.youtube.com/watch?v= N7mkI8_xxc4&feature=emb_logo). Based on these tests, RILs showing RR (resistant to BPH and WBPH), RS (resistant to BPH only), SR (resistant to WBPH only) and SS (susceptible to both BPH and WBPH) were identified (Table 1). From these, five RILs with genetic similarity of ≥40%, based on screening of these RILs using 137 polymorphic markers (Sama et al. 2014;Naik et al. 2018;Sahu et al. 
unpublished), were chosen for the study (Supplementary Table S1). Performance of the selected RILs in each of the tests was compared through paired t-test with equal variance and one tailed distribution options (MS Excel, Office 365). Sample Collection for RNA-Seq In order to identify the pathway genes responsible for conferring resistance to BPH and WBPH in RP2068, an NGS protocol was used. RNA sequencing (RNA-seq), was conducted with RIL TR3RR (resistant to both BPH & WBPH). TR3RR seedlings were raised in nine 3 L plastic bucket pots (6 seedlings/pot; 3 pots as uninfested control, 3 pots infested with BPH and 3 pots infested with WBPH). Fifteen days after sowing, the designated pots were infested with 1st-2nd instar nymphs (5 nymphs for BPH and 10 nymphs for WBPH/plant). Lower leaf sheath samples (from three individual plants/ pot) were collected for 3 biological replications at two different time points: 6 and 12 h after infestation (hai). Total RNA was isolated from these nine samples, including uninfested control plants, using RNeasy plant mini kit (Qiagen, Germany) as per the manufacturer's guidelines. The RNA samples collected for each of the two time points (6 hai and 12 hai) for each replication were pooled prior to sequencing. RNA sequencing was carried out by M/s Genotypic Technology Pvt. Ltd., Bengaluru, India. RNA Quality Control The concentration and purity of the RNA was evaluated using the Nanodrop Spectrophotometer (Thermo Scientific 2000). The integrity of the extracted RNA was analyzed on the Agilent Bioanalyzer 2200 (Agilent, CA, USA) using the manufacturer's protocols. qRT-PCR For validation studies using qRT-PCR, similar experiment setup was repeated and in addition to TR3RR, four additional RILs (TR94RR, TR145RS, TR152SR and TR24SS) were included. Seedlings were raised in 3 L plastic bucket pots and 15 days after sowing they were infested with 1st-2nd instar nymphs of BPH or WBPH. Leaf sheath samples were collected from 3 replications of control and infested plants at two different time points: 6 and 12 hai. RNA isolation and cDNA synthesis were carried out following standard protocols (Biorad, USA). Twenty-seven genes were selected and validated; first 20 genes using semi-quantitative RT-PCR and 14 genes (including seven of the earlier set) were selected for real time quantitative RT-PCR (Table 2). Of the 27 genes selected for validation, eight genes each were from UPUP (noted to be upregulated during both BPH and WBPH infestation), DNDN (down regulated against both the planthoppers) and DNUP groups while three genes were from UPDN group, as revealed by the RNAseq data ( Table 2). Library Preparation and Sequencing RNA sequencing libraries were prepared with Illuminacompatible NEBNext Ultra Directional RNA Library Prep Kit for Illumina (New England BioLabs, MA, USA) at M/s Genotypic Technology Pvt. Ltd., Bengaluru, India. One μg of total RNA was taken for mRNA isolation, fragmentation and priming. Fragmented and primed mRNA was further subjected to first strand cDNA synthesis in the presence of Actinomycin D (Gibco, life technologies, CA, USA) followed by second strand synthesis. The double stranded cDNA was purified using HighPrep magnetic beads (Magbio Genomics Inc., USA). Purified double-stranded cDNA was end-repaired, adenylated and ligated to Illumina multiplex barcode adapters as per manufacturer's protocol. Illumina Universal Adapters were used in the study. 
Adapter-ligated cDNA was purified using HighPrep magnetic beads and was subjected to 14 cycles of indexing PCR (37°C for 15 min, followed by denaturation at 98°C for 30 s; cycling at 98°C for 10 s and 65°C for 75 s; and a final step at 65°C for 5 min) to enrich the adapter-ligated fragments. The final PCR product (sequencing library) was purified with HighPrep magnetic beads, followed by a library quality control check. Illumina-compatible sequencing libraries were quantified by Qubit fluorometer (Thermo Fisher Scientific, MA, USA) and their fragment size distributions were analyzed on an Agilent 2200 TapeStation. The libraries were sequenced using an Illumina HiSeq 4000 sequencer (Illumina, San Diego, USA) with 2 × 150 paired-end chemistry following the manufacturer's procedure. Tool Description The analysis pipeline and the different software used for analyzing the raw sequencing data obtained from the HiSeq sequencer were as follows. Quality Check The raw data generated were quality-checked using FastQC. Reads were preprocessed to remove adapter sequences and low-quality bases (<Q30). Pre-processing of the data was done with Cutadapt (Martin 2011). HISAT-2, which is a splice-aware aligner, was used to align the high-quality data to the reference genome with the default parameters (Kim et al. 2015). Transcript Abundance Estimate Cufflinks was used to estimate and calculate transcript abundance (Trapnell et al. 2010). The output from the analysis resulted in normalized read counts in the form of FPKM values. FPKM (Fragments Per Kilobase of transcript per Million mapped reads) is a unit of measuring gene/transcript expression. Genome Mapping All the processed reads were aligned to the Oryza sativa indica genome downloaded from the EnsemblPlants database. An average of 44.93% of the reads were aligned to the reference genome. The alignment (BAM) files were viewed and inspected in the standard genome viewer IGV browser (Thorvaldsdóttir et al. 2013). Transcript Identification and Quantification Transcripts were identified and quantified based on the aligned reads. Transcript expression values were generated through Cufflinks. On average, 23,466 transcripts were expressed across all samples. The compiled expression profile at the transcript level is represented in the form of an FPKM matrix [GT_SO_8261_Read_Count_Matrix.xlsx or Table S2]. Transcript Assembly Cufflinks-2.2.1 was used to assemble transcripts, estimate their abundances, and test for differential expression and regulation in RNA-seq samples, and to estimate the relative abundance of these transcripts based on read distribution support while accounting for biases in library preparation protocols (Trapnell et al. 2010). After mapping the sequences to the reference genome, the mapped files provided by the Cufflinks-2.2.1 software were used to generate a transcriptome assembly. These assemblies were merged using the Cuffmerge option, which is included within the Cufflinks package (Trapnell et al. 2010). The resulting alignment (in BAM file format) was used to generate transcript annotations (GTF format) using default Cufflinks parameters. This merged assembly provided a uniform basis for calculating gene and transcript expression under each treatment. The merged assembly was next analysed by Cuffdiff to calculate expression levels and assign statistical significance to observed changes in expression levels (Trapnell et al. 2012).
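As a concrete illustration of the FPKM normalization described above, the following is a minimal Python sketch rather than the authors' Cufflinks pipeline; the transcript names, fragment counts, and lengths are hypothetical.

```python
# Minimal sketch (not the Cufflinks implementation): FPKM normalization from
# hypothetical per-transcript fragment counts and transcript lengths.
def fpkm(fragment_counts, transcript_lengths_bp):
    """Fragments Per Kilobase of transcript per Million mapped fragments."""
    total_fragments = sum(fragment_counts.values())
    values = {}
    for tx, count in fragment_counts.items():
        kb = transcript_lengths_bp[tx] / 1_000.0        # transcript length in kilobases
        millions = total_fragments / 1_000_000.0        # library size in millions of fragments
        values[tx] = count / (kb * millions)
    return values

# Hypothetical example: two transcripts in a small library
counts = {"tx_A": 500, "tx_B": 1500}
lengths = {"tx_A": 2000, "tx_B": 1000}
print(fpkm(counts, lengths))
```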
Differential Expression Analysis Cuffdiff was used to identify the differentially expressed transcripts and categorize them into up-, down- and neutrally regulated transcripts based on the log2 fold change value at P ≤ 0.05 (Trapnell et al. 2012). Group-wise comparisons were performed to identify differentially regulated transcripts between two treatments. Transcripts that showed a log2 fold change value of less than −1 were categorized as downregulated, those with a value greater than 1 were categorized as upregulated, and those with log2 fold change values between −1 and 1 were categorized as neutrally regulated. Gene Ontology (GO) and Pathway Analysis For each transcript, gene ontologies were downloaded from the Ensembl BioMart database (https://www.ensembl.org/biomart/martview). These GO terms were mapped to the differentially expressed transcripts/genes (DEGs). Next, pathways for each gene were obtained from multiple databases such as KEGG and BioMart, and the compiled pathways for each gene were mapped to the DEGs. MapMan Functional annotation and metabolic pathway analysis were performed using MapMan (Thimm et al. 2004; Usadel et al. 2005). MapMan was also used to identify the functional categories associated with a set of sequences (e.g., differentially expressed) and to find the metabolic pathways or other cellular functions up- or down-regulated as revealed by the RNA-seq data. Functional classification was based on the MapMan mapping file (X4.2_Oryza_indica), which structures rice genes from the TIGR database into distinct metabolic and cellular processes. Differentially expressed rice genes were functionally annotated by performing Basic Local Alignment Search Tool (BLAST) alignment against the TIGR database. MapMan software was employed to show the differences in gene expression in different cellular and metabolic processes. Ratios were expressed on a log2 scale for import into the software, and changes in expression were displayed via a false color code. In Silico Analysis for miRNAs The raw sequence data obtained for both control and infested samples (BPH and WBPH) were used as input for the psRNATarget software (http://plantgrn.noble.org/psRNATarget/) to predict plant small RNA targets using default parameters. The miRNA precursor/miRNA sequences of rice in the miRBase 21.0 database (http://www.mirbase.org/search.shtml) were used to identify the mature sequences and the families of miRNAs to which they belong. The miRNA sequences were used as input in the RNAfold database (http://rna.tbi.univie.ac.at/) to predict their secondary structure. Mature rice miRNA sequences for these families were downloaded from miRBase (http://www.mirbase.org/cgibin/mirna_entry.pl) and sequence and BLAST analyses were performed against the indica rice genome (http://plants.ensembl.org/Oryza_indica/Tools/Blast/). Genes listed as overlapping/homologous with the reference sequence were considered as the target genes. Next, expression profiles of these genes in our mRNA library were obtained, and MSU japonica locus IDs and the putative nature and function of the genes were identified (http://rice.hzau.edu.cn/cgi-bin/rice2/id_mapping_rs2). In Silico Pathway Analysis We performed network analysis using RiceNet v2 (Lee et al. 2018). All the 27 genes were queried separately in four groups (Table 2) under gene prioritization based on context-associated hubs to discern whether any of the major networks were invoked, as defined by the representative genes in these four groups. Pathways involving two or more of the queried genes were identified.
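To make the classification rule described above concrete, here is a minimal Python sketch; it is not the Cuffdiff implementation itself, and the FPKM values, pseudocount, and p-value in the example are hypothetical.

```python
import math

# Minimal sketch (not Cuffdiff): classify a transcript by log2 fold change
# and p-value using the thresholds described in the text (|log2FC| > 1, P <= 0.05).
def classify(fpkm_control, fpkm_infested, p_value, pseudocount=1e-6, alpha=0.05):
    log2fc = math.log2((fpkm_infested + pseudocount) / (fpkm_control + pseudocount))
    if p_value > alpha:
        return log2fc, "not significant"
    if log2fc > 1:
        return log2fc, "upregulated"
    if log2fc < -1:
        return log2fc, "downregulated"
    return log2fc, "neutrally regulated"

# Hypothetical transcript: FPKM 12.0 in control vs 55.0 after infestation
print(classify(12.0, 55.0, p_value=0.01))   # -> (~2.2, 'upregulated')
```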
Semi-Quantitative PCR The transcriptome analysis results were further validated via semi-quantitative reverse transcription polymerase chain reaction (RT-PCR) with two types of samples: (i) the same RNA samples that were used for RNA-seq analysis and (ii) newly isolated RNA samples from an independent infestation. About 3 μg of RNA was used for first-strand cDNA synthesis using the iScript cDNA synthesis kit (Bio-Rad, USA) following the manufacturer's guidelines. Gene-specific primers for the RT-PCR were designed using Primer3 software (https://bioinfo.ut.ee/primer3-0.4.0) (Supplementary Table S3). The PCR mix contained 30 ng of cDNA, 0.5 μM of each primer (forward and reverse), 200 μM of each dNTP, 1 unit of Taq polymerase and Taq buffer (Bangalore Genei Pvt. Ltd., India). The optimum PCR conditions, including cycle number and cDNA amounts, were standardized for each gene separately. The PCR products were run on a 1.5% agarose gel at 90 V for 1 h, and the agarose gels were documented using the Alpha Imager EP system (Cell Biosciences, USA). The captured images were analyzed using ImageJ software (Schneider et al. 2012). Twenty genes were analyzed (Fig. 7) with rice ubiquitin (GenBank accession number AK059694) as a reference gene for normalization, and fold change values were calculated between the relative expression values (REVs) of infested and control plants. One-way ANOVA was performed on the fold change value data for each gene against each specific planthopper and time point, and means were separated by HSD following the Tukey and Kramer method (Tukey 1953) in MS Excel (https://www.youtube.com/watch?v=N7mkI8_xxc4&feature=emb_logo). Negative values were expressed as decimal fractions for analysis. Results are expressed as a graph constructed based on the log2 values of the fold change using MS Excel (Supplementary Table S4, Fig. 7). Quantitative RT-PCR Real-time RT-PCR was performed using the CFX96 Real-Time PCR System with SYBR Green chemistry (Bio-Rad, USA) according to the manufacturer's instructions. The rice ubiquitin gene, OsUbq (GenBank accession no. AK059694), was used as the endogenous control. The real-time PCR reaction volume of 10 μl contained 5 μl SYBR Green PCR Master Mix (Bio-Rad, USA), 500 nM each of forward and reverse primers and 30 ng of the cDNA samples. To calculate mean relative expression levels, cDNAs from three independent biological samples in two technical replications each were used. PCR was initiated with denaturation at 95°C for 5 min followed by 40 cycles of denaturation at 95°C for 10 s and annealing and extension at 60°C for 30 s. After 40 cycles, a melt curve analysis was carried out to determine the specificity of the reaction. After normalization, the quantity of each mRNA was calculated from the threshold points (CT) located in the log-linear range. The data from different PCR runs or cDNA samples were compared using the mean of the CT values of the three biological replicates, normalized to the mean of the CT values of the endogenous gene. The relative standard curve method was used for the quantification of mRNA levels, displayed as Relative Expression Values (REV). Expression ratios were calculated using the 2^-ΔΔCt method (Livak and Schmittgen 2001). The data were analyzed using the Bio-Rad CFX Manager 3.1 Software (Bio-Rad, USA) with default baseline and threshold. Relative transcription levels are presented graphically.
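The 2^-ΔΔCt calculation referenced above can be sketched as follows; this is a minimal Python illustration with hypothetical CT values, not the Bio-Rad CFX Manager output.

```python
from statistics import mean

# Minimal sketch of the 2^-ΔΔCt method (Livak and Schmittgen 2001): means over
# biological replicates are taken first, then each ΔCt is normalized to the
# reference gene (OsUbq here), and ΔΔCt compares infested vs. control samples.
def ddct_ratio(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    dct_treated = mean(ct_target_treated) - mean(ct_ref_treated)   # ΔCt, infested sample
    dct_control = mean(ct_target_control) - mean(ct_ref_control)   # ΔCt, uninfested control
    ddct = dct_treated - dct_control                                # ΔΔCt
    return 2 ** (-ddct)                                             # relative expression ratio

# Hypothetical CT values (three biological replicates each)
print(ddct_ratio([24.1, 24.3, 24.0], [18.0, 18.2, 18.1],
                 [26.5, 26.4, 26.6], [18.1, 18.0, 18.2]))   # ~5-fold induction
```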
In all, the expression of 14 identified genes was validated in leaf sheath tissues of the plants (exposed to BPH or WBPH), separately for each time point. Results are presented as mean ± SD of relative expression in comparison with the corresponding uninfested control sample. One-way ANOVA was performed on the fold change value data for each gene against each specific planthopper and time point, and means were separated by HSD following the Tukey and Kramer method (Tukey 1953) in MS Excel (https://www.youtube.com/watch?v=N7mkI8_xxc4&feature=emb_logo). Negative values were expressed as decimal fractions for analysis (Supplementary Table S5, Fig. 8). Data Availability The sequence data (raw data) generated in this study have been deposited at the NCBI Sequence Read Archive
State-Dependent Entrainment of Prefrontal Cortex Local Field Potential Activity Following Patterned Stimulation of the Cerebellar Vermis The cerebellum is involved in sensorimotor, cognitive, and emotional functions through cerebello-cerebral connectivity. Cerebellar neurostimulation thus likely affects cortical circuits, as has been shown in studies using cerebellar stimulation to treat neurological disorders through modulation of frontal EEG oscillations. Here we studied the effects of different frequencies of cerebellar stimulation on oscillations and coherence in the cerebellum and prefrontal cortex in the urethane-anesthetized rat. Local field potentials were recorded in the right lateral cerebellum (Crus I/II) and bilaterally in the prefrontal cortex (frontal association area, FrA) in adult male Sprague-Dawley rats. Stimulation was delivered to the cerebellar vermis (lobule VII) using single pulses (0.2 Hz for 60 s), or repeated pulses at 1 Hz (30 s), 5 Hz (10 s), 25 Hz (2 s), and 50 Hz (1 s). Effects of stimulation were influenced by the initial state of EEG activity which varies over time during urethane-anesthesia; 1 Hz stimulation was more effective when delivered during the slow-wave state (Stage 1), while stimulation with single-pulse, 25, and 50 Hz showed stronger effects during the activated state (Stage 2). Single-pulses resulted in increases in oscillatory power in the delta and theta bands for the cerebellum, and in frequencies up to 80 Hz in cortical sites. 1 Hz stimulation induced a decrease in 0–30 Hz activity and increased activity in the 30–200 Hz range, in the right FrA. 5 Hz stimulation reduced power in high frequencies in Stage 1 and induced mixed effects during Stage 2.25 Hz stimulation increased cortical power at low frequencies during Stage 2, and increased power in higher frequency bands during Stage 1. Stimulation at 50 Hz increased delta-band power in all recording sites, with the strongest and most rapid effects in the cerebellum. 25 and 50 Hz stimulation also induced state-dependent effects on cerebello-cortical and cortico-cortical coherence at high frequencies. Cerebellar stimulation can therefore entrain field potential activity in the FrA and drive synchronization of cerebello-cortical and cortico-cortical networks in a frequency-dependent manner. These effects highlight the role of the cerebellar vermis in modulating large-scale synchronization of neural networks in non-motor frontal cortex. INTRODUCTION There has been growing evidence supporting cerebellar involvement in cognitive and affective functions (Hoppenbrouwers et al., 2008;Strick et al., 2009;Bostan et al., 2013). The cerebellum may promote synchronization of largescale networks and influence extra-cerebellar networks through multiple cortical and subcortical projections (Courtemanche et al., 2013;Farzan et al., 2016). The cerebellum shows regional variations in its relatively uniform circuitry (Cerminara et al., 2015) and, through its wide connectivity, can modulate specific circuits involved in motor control, cognition, and affect (Schmahmann, 2004(Schmahmann, , 2019. Because of its contribution to several neurological disorders and its extensive connectivity with extra-cerebellar structures, the cerebellum has been used as a therapeutic target for non-invasive stimulation techniques such as transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS) (Hoppenbrouwers et al., 2008;van Dun et al., 2017van Dun et al., , 2018. 
Stimulation of the vermis, the most medial region of the cerebellum, results in positive effects on cognition and mood associated with modulation of frontal oscillations (Schutter et al., 2003;van Honk, 2006, 2009;Demirtas-Tatlidede et al., 2010). Stimulation of the fastigial nucleus (FN) following middle cerebral artery occlusion and chronic mild stress in rats also improves neuroprotection by suppressing death of cerebellar Purkinje cells and alleviates depressive-like behaviors (Zhang et al., 2017). Stimulation of the FN also affects local field potential (LFP) oscillations in the frontal cortex of anesthetized cats, where high frequency stimulation attenuates slow rhythms, and enhances 20-40 Hz oscillations (Steriade, 1995). Low frequency FN stimulation (1 Hz) was also shown to inhibit epileptogenic activity in the rat (Wang et al., 2008), while stimulating lateral cerebellar projections at 2 Hz has been shown to rescue medial frontal cortex delta activity in a rat model of schizophrenia (Parker et al., 2017). Brain imaging studies in humans have demonstrated functional connectivity between the cerebellum and prefrontal cortex (PFC) (O'Reilly et al., 2009;Buckner et al., 2011;Sang et al., 2012;Farzan et al., 2016). The underlying connections have also been well characterized in non-human primates (Kelly and Strick, 2003;Strick et al., 2009) and rodents (Watson et al., 2009(Watson et al., , 2014Suzuki et al., 2012) using neuroanatomical and electrophysiological approaches. In the urethane-anesthetized rat, stimulation of the medial PFC (prelimbic cortex; PrL) evoked responses in the contralateral vermis (lobule VII), while stimulation of the fastigial nucleus resulted in evoked potentials in the PrL, indicating reciprocal long-range interactions between the medial PFC and medial cerebellum (Watson et al., 2009(Watson et al., , 2014. Watson et al. (2014) also observed synchronous LFP activity in the theta range (5-10 Hz) between the fastigial nucleus and PrL during active locomotion and at rest. Coherent synchronization of rhythmic neuronal population activity between distant cortical regions is thought to reflect mechanisms that enhance communication between structures, and that coordinate contributions of brain regions to sensorimotor integration and cognitive function (Engel et al., 2001;Fries, 2015). There is growing interest in understanding how cerebello-cortical network interactions synchronize to modulate higher-order functions. The dorsolateral PFC and the vermis have been implicated in the pathogenesis of several neurological disorders (Baxter et al., 1989;Andreasen et al., 1996;Maeda et al., 2000a;Andreasen and Pierson, 2008;Koenigs and Grafman, 2009;Fatemi et al., 2012). Anatomical evidence in monkeys (Kelly and Strick, 2003) and functional connectivity studies in humans (Buckner et al., 2011;Farzan et al., 2016) have demonstrated pathways mediating communication between the vermal lobule VII and dorsolateral PFC. However, little is known concerning the frequencies of activity that promote coherent LFP oscillations in these structures most effectively, and to what extent the induction of coherent LFP activity may depend on the initial oscillatory state. In the current study, we investigated the effects of various frequencies of cerebellar vermis stimulation on the power and coherence of LFP oscillations in Crus I/II of the right lateral cerebellum (RCb) and bilateral dorsolateral PFC (frontal association area; FrA) in the urethane-anesthetized rat. 
Urethane is permissive to oscillations, and results in cyclic alternations between states similar to slow-wave nREM sleep and active REM sleep (Clement et al., 2008;Ros et al., 2009). Based on previous anatomical and functional connectivity studies (Akgören et al., 1996;Kelly and Strick, 2003;Buckner et al., 2011;Farzan et al., 2016), stimulation of the vermis was expected to strongly modulate LFP activity in the FrA and in the RCb. Stimulation was delivered to the most superficial layer of the cerebellar cortex to activate Purkinje cells (PCs) that project to the fastigial nucleus, which can modulate cortical areas via the thalamus and the cerebellar hemispheres via parallel fibers (Akgören et al., 1996;Lisberger and Thach, 2013). Inhibitory interneurons would also likely be activated by stimulation, which could result in complex frequency-specific interactions (Dugué et al., 2009). The goals of this study were to (1) characterize spontaneous LFP activity in the lateral Cb and bilateral dorsolateral PFC [FrA in the rat; Uylings et al., 2003] as well as coherence within this network during urethane anesthesia, (2) assess the effectiveness of different frequencies of vermal stimulation in inducing changes in power and coherence in LFP activity in the lateral Cb and FrA, and (3) determine how slow-wave and activated stages of urethane anesthesia may modulate the responsivity of the network to stimulation. Surgery Six adult male Sprague-Dawley rats were used in this study. The anesthesia procedures were the same as used by Frederick et al. (2014). Briefly, rats were anesthetized with a 5% isoflurane and 95% oxygen mixture, and a catheter was placed in the jugular vein. Urethane (0.8 g/ml) was then administered intravenously to maintain anesthesia, and the level of anesthesia was verified by ensuring that the foot-withdrawal reflex was absent throughout the experiment. Rats were placed in a stereotaxic apparatus and a regulated heating pad and insulating blanket were used to maintain body temperature near 37 °C. All procedures were in accordance with the guidelines of the Canadian Council on Animal Care and approved by the Concordia University Animal Research Ethics Committee. The skin was cut to expose the skull and a 2-2.5 mm craniotomy was performed in the occipital bone over the right cerebellar Crus I/II lobule. Holes were also drilled bilaterally over the FrA, and over the cerebellar vermis lobule VII. Bipolar electrodes, constructed from Teflon-coated stainless-steel twisted wire (125 µm tip diameter, with tips 1 mm apart in depth), were anchored to the stereotaxic apparatus. The stimulation electrode was inserted into the vermis lobule VII (AP -13.0; ML 0; V 3.3), and recording electrodes were inserted into the FrA (AP 4.7; ML ± 1.8; V 2.2). The recording electrode in the right Crus I/II was inserted at a 45° angle, 3.2 mm lateral from the midline, to a depth of 1 mm from the surface of the cerebellum (Figure 1A). The stereotaxic apparatus was grounded, and a bare stainless-steel reference electrode (5 mm long) was placed between the skull and the surface of the temporal lobe. Two monopolar recordings and one bipolar differential recording were obtained from each site.
Both types of recordings can be used to record LFP activity, yet electrode placement must be aligned to a dipole to obtain an optimal bipolar signal (Buzsaki et al., 2012). Bipolar recordings were thus used when the signal amplitude was optimal; otherwise monopolar recordings were used. Recording Procedures Recordings in each animal were initiated with a 2 min recording of spontaneous baseline LFP activity in the RCb and FrA. LFP signals were band-pass filtered between 0.01 and 500 Hz, amplified (×1000; A-M Systems Model 1700), and digitized onto the computer's hard drive at a sampling rate of 1024 Hz using SciWorks software (Datawave Technologies, Loveland, CO, United States). LFPs were recorded bilaterally in the FrA in three animals, in the left FrA (LFrA) in one animal, and in the right FrA (RFrA) in two animals. Each recording trial, in which a different stimulation frequency was tested, lasted 2 min. Following a 30 s baseline period, stimulation was delivered for 1-60 s, depending on the frequency of stimulation, and this was followed by a post-stimulus recording (Figure 1B). Biphasic square-wave pulses (0.1 ms duration) were delivered to the vermis using a stimulus generator (A-M Systems, Model 2100; Sequim, WA, United States). In addition to a single-pulse condition, in which pulses were delivered every 5 s for 60 s, stimulation frequencies were selected within all major frequency bands. Repeated pulses were delivered at 1 Hz (30 s duration, delta), 5 Hz (10 s, theta), 25 Hz (2 s, beta), and 50 Hz (1 s, gamma). Stimulation was delivered in ascending order (from lowest to highest frequency) in three animals and was delivered in randomized order in the other three animals. There were no significant effects of testing order on measures of power or coherence. Each frequency of stimulation was delivered at intensities of 500, 750, and 1000 µA, resulting in three trials at each stimulation frequency (two trials at each intensity were obtained in one animal; Table 1). Following recordings, animals were euthanized with an intravenous overdose of urethane. Signal Processing and Analysis Recordings were imported into MATLAB (Mathworks, Natick, MA, United States) for analysis. Signals were filtered using the function filtfilt, with a FIR equiripple low-pass at 250 Hz. Power spectral density analyses (short-time Fourier transform) were conducted using the spectrogram function with windows of 512 samples (0.5 s) and a 50% overlap. This resulted in good temporal resolution (0.25 s), allowing slow components of the signal to be quantified. Spectrograms were constructed to represent the frequency content of the signal as a function of time. For coherence analysis, the filtered signals were divided into epochs of 2 s. The magnitude-squared coherence, which indicates how closely related two signals (x and y) are in power across frequencies and the consistency of the phase relationship between the two signals at each frequency, was computed for each electrode pair using the mscohere function: Cxy(f) = |Pxy(f)|^2 / [Pxx(f) Pyy(f)], where Pxx(f) and Pyy(f) are the power spectral densities of x and y, and Pxy(f) is the cross power spectral density. Each bipolar recording electrode provided two monopolar channels and one differential recording channel for each recording site (RCb, RFrA, and LFrA). One channel was chosen for analysis for each recording site. The differential bipolar recordings were chosen when possible, but the largest amplitude monopolar recording channel was used when similarity of the monopolar channels resulted in very low power in bipolar recordings. Coherence was calculated between RCb-LFrA (contra Cb-FrA), RCb-RFrA (ipsi Cb-FrA), and LFrA-RFrA (FrA-FrA), using the selected channels.
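The analyses above were done in MATLAB (spectrogram, mscohere). As a rough illustration of the same two steps, here is a minimal Python/SciPy sketch using synthetic signals in place of the recorded LFPs; the signal names and toy frequencies are hypothetical, while the sampling rate, window length, overlap, and epoch length follow the values stated above.

```python
import numpy as np
from scipy import signal

# Minimal sketch (Python stand-in for the MATLAB analysis): short-time power
# spectra with 512-sample windows and 50% overlap, and magnitude-squared
# coherence Cxy(f) = |Pxy|^2 / (Pxx * Pyy) estimated over 2 s segments.
fs = 1024                                     # sampling rate (Hz)
t = np.arange(0, 120 * fs) / fs               # one 2 min trial
lfp_cb = np.sin(2 * np.pi * 2 * t) + 0.5 * np.random.randn(t.size)        # toy cerebellar LFP
lfp_fra = np.sin(2 * np.pi * 2 * t + 0.3) + 0.5 * np.random.randn(t.size) # toy cortical LFP

# Spectrogram: 512-sample (0.5 s) windows, 50% overlap -> 0.25 s time resolution
f_spec, t_spec, Sxx = signal.spectrogram(lfp_fra, fs=fs, nperseg=512, noverlap=256)

# Magnitude-squared coherence averaged over 2 s (2048-sample) segments
f_coh, Cxy = signal.coherence(lfp_cb, lfp_fra, fs=fs, nperseg=2 * fs)
print(Sxx.shape, Cxy[(f_coh >= 0.01) & (f_coh <= 3)].mean())   # e.g., mean delta-band coherence
```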
FIGURE 1 | Schematic diagram of the location of stimulation and recording sites, and the experimental timeline. (A) Recording sites in the right cerebellum and the left and right frontal association area are represented by black dots, and the stimulation site in the cerebellar vermis is indicated by a red dot. Arrows represent coherence between sites. (B) After anesthetizing the animal, electrodes were inserted stereotaxically during surgery, and a baseline period of 2 min was recorded. There was also a baseline period of 30 s (blue) at the start of each 2 min trial. The duration of the stimulation depended on stimulation frequency and varied from 1 to 60 s (dark blue = 60 s, pink = 30 s, green = 10 s, orange = 2 s, dashed line = 1 s). Three trials were conducted for each of the five stimulation frequencies, and the animal was euthanized following recordings.

Power and coherence values were integrated within each frequency band: delta (δ, 0.01-3 Hz), theta (θ, 3-8 Hz), alpha (α, 8-15 Hz), beta (β, 15-30 Hz), low gamma (low γ, 30-55 Hz), high gamma (high γ, 65-80 Hz), and fast (>80 Hz), and normalized by dividing by the number of frequency bins within each band. Frequencies between 55 and 65 Hz were left out to eliminate 60 Hz noise. Spectrograms and coherograms were inspected to assess the time-course of changes following stimulation, and power and coherence values were averaged across periods of 6 s. This resulted in five pre-stimulation periods (30 s period), 4 post-stimulation periods for single-pulse stimulation, and 8 post-stimulation periods for all other conditions. The relative changes in post-stimulation values of power and coherence were calculated from the mean pre-stimulation values for each period. Neocortical activity at ∼1 Hz (delta) is associated with the slow-wave state under urethane anesthesia, and low-amplitude faster cortical oscillations are present during the activated state (Clement et al., 2008). To address the impact of the state-changes during anesthesia, we divided the trials into either the slow-wave state (Stage 1) or the activated state (Stage 2) based on the amount of power in the delta band in cortical channels during the pre-stimulation period. The z-scores of cortical power in the delta band for all trials were plotted on a histogram (Figure 2A), and trials above the 60th percentile were classified as slow-wave, while trials below the 50th percentile were classified as activated state. Trials between the 50th and the 60th percentiles were considered as being in transition and were excluded from the stage analysis (see example in Figure 2B). The two subgroups of trials formed for each animal were used to determine if the stage of anesthesia has an impact on frequency-dependent effects of stimulation. Effects of stimulation did not typically outlast the 2 min trial duration, and this state-dependent analysis ensured that only trials representative of the slow-wave or activated state were included in the respective analyses.
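A rough Python sketch of the band-integration and trial-classification steps just described follows. The spectrogram input and the baseline delta-power values are hypothetical, and the upper edge of the "fast" band is not specified above, so that band is omitted here.

```python
import numpy as np

# Minimal sketch (assumed inputs): integrate spectrogram power within each band,
# normalizing by the number of frequency bins, and split trials into slow-wave /
# activated states from z-scored baseline delta power (50th/60th percentile rule).
BANDS = {"delta": (0.01, 3), "theta": (3, 8), "alpha": (8, 15), "beta": (15, 30),
         "low_gamma": (30, 55), "high_gamma": (65, 80)}   # 55-65 Hz gap excludes line noise

def band_power(f, Sxx, lo, hi):
    sel = (f >= lo) & (f < hi)
    return Sxx[sel].mean(axis=0)          # mean over bins = sum normalized by bin count

def classify_trials(baseline_delta_power):
    """Label trials as slow-wave, activated, or transition (excluded)."""
    z = (baseline_delta_power - baseline_delta_power.mean()) / baseline_delta_power.std()
    p50, p60 = np.percentile(z, [50, 60])
    return np.where(z > p60, "slow-wave",
           np.where(z < p50, "activated", "transition"))

# Hypothetical spectrogram output and baseline delta power for 15 trials
f = np.linspace(0, 200, 401)
Sxx = np.random.rand(401, 8)
print(band_power(f, Sxx, *BANDS["theta"]).shape)             # theta power per time window
print(classify_trials(np.random.lognormal(0.0, 1.0, size=15)))
```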
Statistical Analysis Repeated-measures ANOVAs were conducted using Tibco Statistica (Dell Software, Round Rock, TX, United States). Stimulation intensity had no significant effect on power [one rat in which 2 trials per intensity per stimulation frequency were obtained: F(2, 288) = 1.92, p = 0.15]. Therefore, trials of different intensities were grouped together for each stimulation frequency. The order in which the stimulation patterns were delivered also had no significant effect on measures of power. For initial analyses, the dependent variables were LFP power and LFP-LFP coherence, while the independent variable was the type of stimulation. Repeated measures ANOVAs, with 7 levels of frequency bands and levels of time (1 Baseline and 4-8 Post windows, depending on the condition) as repeated measures, and stimulation type (Stim-SP, Stim-1 Hz, Stim-5 Hz, Stim-25 Hz, and Stim-50 Hz) and site (RCb, LFrA, RFrA) as categorical factors, were performed on the relative changes from baseline for LFP power. A similar analysis was done for coherence but included the pair of recording sites (contra Cb-FrA, ipsi Cb-FrA, and FrA-FrA) as a categorical factor. Results were considered statistically significant if p < 0.05. To assess state-dependent differences in response to stimulation, separate repeated-measures ANOVAs for each stimulation frequency assessed changes in power or coherence in a given frequency band relative to baseline as a function of the time window and stage. These analyses were performed within each recording site for LFP power, and between each pair of recording sites for coherence (site or comparison type as categorical factor). Fisher's post hoc analyses (p < 0.05) were used to identify which components differed in statistically significant interactions. Spontaneous Power and Coherence Prior to assessing the effects of cerebellar rhythmic stimulation, spontaneous LFP power within each recording site, and coherence between each pair of recording sites, were evaluated. Power was typically highest in the delta band in all animals, consistent with the slow-wave activity reported in urethane-anesthetized rats (Clement et al., 2008;Frederick et al., 2014). Baseline power spectra in all sites showed peaks in the delta band at about 2 Hz. This activity co-occurred with activity in the beta, low gamma and high gamma bands, which was superimposed on the slow waves. This higher-frequency activity was not always clear in power measures but was readily evident in coherence measures.

FIGURE 2 | Discrimination of trials recorded during slow-wave and activated states. (A) Histogram of z-scores of power in the delta band in cortical recordings during the 30 s baseline period, for all trials. The peak on the right represents greater delta power during the slow-wave state, and the peak on the left represents lower delta power during the activated state. The area between the red lines was identified as a transition period and corresponds to the median and to the 60th percentile of the distribution. The trials between the 50th and 60th percentiles were excluded from the stage-based analyses. (B) Example of trials attributed to the slow-wave or activated states in one animal (rat 5). Power in the delta band is plotted for a monopolar recording in the left frontal association area, and stimulation frequencies tested during each consecutive trial are indicated on the x-axis. Trials with values within the shaded blue area were excluded from analysis, and trials above and below the shaded areas were classified as representing the slow-wave, or activated states, respectively. SP, single-pulse.

The example in Figures 3A-C shows
coherent activity in both delta and low gamma bands between the left and right FrA. LFP recordings also showed periods in which delta band activity was weaker, and there was a more broadband distribution that sometimes included periods of increased power and coherence around 8-10 Hz. The example in Figures 3D-F shows marked coherence in the alpha band (10 Hz) between the cerebellum and contralateral cortex, consistent with previous electrophysiological evidence in the cerebellum and neocortex (O'Connor et al., 2002). These periods, when slow-wave activity subsides to give way to faster activity in neocortical sites, are consistent with the activated state described by Clement et al. (2008). Effects of Stimulation An initial analysis was used to evaluate how LFP power and coherence were modulated by different frequencies of stimulation across all recorded trials. The time by stimulation frequency interaction was significant [F(16, 980) = 2.28, p = 0.003], with 50 Hz stimulation inducing more rapid effects on power, and single-pulse stimulation inducing more delayed effects. There was also a trend for a stimulation type by site interaction [F(8, 245) = 1.84, p = 0.071], with the cerebellar site differing from cortical sites in the responses to stimulation frequencies, and a significant time by site interaction [F(8, 980) = 2.44, p = 0.013] due to earlier responses of the cerebellar recording to stimulation compared to the two cortical sites. No main effects or interactions were seen for coherence in this overall analysis. Stage-Dependent Effects of Stimulation The initial analysis indicated that vermal stimulation had frequency-dependent effects on LFP activity in both cerebellar and cortical sites, but baseline LFP activity differed markedly between the slow-wave and activated states. We therefore separated trials between the slow-wave (Stage 1) and activated (Stage 2) states and conducted ANOVAs to evaluate specific effects of stimulation frequency during each stage, on LFP and coherence measures in specific sites for each frequency band. Overall, the stimulation patterns affected power to a greater extent than coherence. Stimulation at 1 Hz had larger effects when delivered during the slow-wave state (Stage 1), while single-pulse, 25 and 50 Hz stimulation had stronger effects in the activated state (Stage 2). Figure 4 indicates maximal post-stimulus changes in power in each frequency band induced by each stimulation frequency for all sites; the left panels show power changes during slow-wave activity, and those on the right show changes during the activated state. Relative % changes for the different stimulation patterns, in numerical values, are given in the Supplementary Material. Single-Pulse Stimulation Single-pulse stimulation (slow-wave: n = 10 trials from 4 rats; activated state: n = 8 trials from 3 rats) had several effects on power in cerebellar and cortical sites, but no significant effect on cerebello-cortical or cortico-cortical coherence. In general, single-pulse stimulation had a more robust effect on LFP power when delivered in the activated state (Stage 2). Single-pulse stimulation in Stage 2 resulted in increases in power in the δ and θ bands in the cerebellum, and in a broader range of frequency bands in cortical sites (up to low γ in the LFrA and up to high γ in the RFrA). On the other hand, single-pulse stimulation during Stage 1 activity resulted in mixed effects on power in all sites.
Maximal changes in power ranged between 5 and 15% from baseline (see Supplementary Material). During Stage 1, effects were mixed in the cerebellum and in the RFrA, but during Stage 2, power was increased in all sites. The main effect of stage in θ and β was also due to increases in power during Stage 2, especially for the RFrA. Single-pulse stimulation therefore induced the greatest increases in cortical power when delivered in the activated state, in a wide range of frequencies (Figures 4D,F). There was also a main effect of site in δ [F(2, 39) = 3.87, p = 0.029], θ [F(2, 39) = 4.80, p = 0.014], and β [F(2, 39) = 3.75, p = 0.032], with the RFrA showing the greatest changes. Overall, there were more increases in power in the RFrA following single-pulse stimulation, and those increases were mainly in the activated state. 1 Hz Stimulation Stimulation at 1 Hz had a much stronger effect on power during the slow-wave state than during the activated state. There were several changes in power in all sites in Stage 1 but very few in Stage 2 (Figure 4, Stim-1 Hz). We did not find any main effects or interactions in the ANOVA for coherence. 1 Hz stimulation during Stage 1 increased power in the RCb in the α and β bands, but decreased RCb θ power in Stage 2. In Stage 1, there were decreases in θ, α, β and high γ in the LFrA. In the RFrA, power in slow frequencies (δ, θ, α, and β) decreased, while power in faster frequencies (low γ, high γ, and Fast) increased (Figure 4C). These changes ranged between 33% decreases (in δ) and 15% increases (low γ). Figure 5 shows example LFP traces and power spectra of 1 Hz stimulation trials in the slow-wave state (Figures 5A,C), examples in the activated state (Figures 5B,D), and the mean percent changes in power relative to baseline in the delta (Figure 5E) and low gamma bands (Figure 5F) for the group of rats (slow-wave: n = 7 trials from 4 rats; activated state: n = 14 trials from 6 rats). These state-dependent effects on power were supported by statistically significant Stage by site interactions in the δ band, with the RFrA showing a reduction during Stage 1 [F(2, 45) = 5.31, p = 0.009], and in the θ band, with reductions in the RFrA and LFrA during Stage 1 [F(2, 45) = 3.52, p = 0.038]. Overall, the RFrA showed the greatest change between stages. This implies that 1 Hz stimulation during the slow-wave state can shift LFP activity to higher frequencies, decreasing 0-30 Hz activity while increasing activity in the 30-200 Hz range. 5 Hz Stimulation 5 Hz stimulation (slow-wave: n = 8 trials from 3 rats; activated state: n = 8 trials from 4 rats) led to marked changes in power during both Stage 1 and Stage 2, but there were no main effects. 25 Hz Stimulation Stimulation at 25 Hz induced greater effects on power and coherence during the activated state. During Stage 2, there were large increases in the δ band in all sites (66% in the RFrA, 13% in the LFrA, and 41% in the RCb). Power also increased in θ in the LFrA and in θ, α, β, and low γ in the RFrA. In Stage 1, stimulation at 25 Hz decreased cortical power in high frequency bands in the LFrA. 25 Hz stimulation therefore had strong, state-dependent effects on LFP activity in cortical sites, with an increase of lower frequency activity in the activated state and decreased activity in faster bands in the slow-wave state (Figures 4C-F, Stim-25 Hz).
These state-dependent effects of 25 Hz stimulation are illustrated in Figure 6, where example LFP traces and power spectra in both stages (Figures 6A-D), and the mean percent changes in power relative to baseline in the delta (Figure 6E), theta (Figure 6F), and low gamma bands (Figure 6G), for the group of rats (slow-wave: n = 7 trials from 4 rats; activated state: n = 7 trials from 5 rats), are shown.

FIGURE 6 | ... (E-G) The mean percent changes in power in cortical sites relative to baseline are shown for the group of animals (slow-wave: n = 7 trials from 4 rats; activated state: n = 7 trials from 5 rats) for the delta band (E), theta band (F), and low gamma band (G). Results are shown for the eight 6 s time windows following stimulation (Post1-Post8). Increases in power in the delta and theta bands occurred in the activated state (red line), while reductions in power in the low gamma band were more reliable in the slow-wave state (blue line).

FIGURE 5 | Stimulation at 1 Hz for 30 s during the slow-wave state decreases delta activity and increases low gamma activity in the right frontal association area (RFrA). (A,C) Examples are shown in which 1 Hz stimulation (during the period between the vertical red dashed lines) was followed by either greatly reduced slow-wave activity (A, rat 2), or a more moderate reduction in slow-wave activity (C, rat 6). Power spectra show corresponding reductions in power in the delta band from pre-stimulation (Pre-stim, black line) to post-stimulation (Post-stim, green line). (B,D) During the activated state, 1 Hz stimulation did not significantly affect power in the delta band in the RFrA. Examples of LFP traces in the RFrA and corresponding power spectra are shown for two animals (B, rat 2; D, rat 4) in which there were minimal changes post-stimulation. (E,F) The mean percent changes in power relative to baseline are shown for the group of animals (slow-wave: n = 7 trials from 4 rats; activated state: n = 14 trials from 6 rats) for the delta band (E) and for the low gamma band (F) for all three recording sites. Results are shown for the eight 6 s time windows following stimulation (Post1-Post8). The reduction in power in the delta band, and the increase in power in the low gamma band occurred only during the slow-wave state (blue lines) in the RFrA.

For coherence, the contra Cb-FrA comparison showed the greatest difference between Stages. 25 Hz stimulation therefore induced larger increases in coherence in the activated state, especially in the contra Cb-FrA comparison, where coherence was increased during Stage 2 and decreased during Stage 1. This shows that 25 Hz stimulation can entrain and synchronize activity in the β band in cerebello-cortical networks when delivered in the activated state. 50 Hz Stimulation Overall, stimulation at 50 Hz (slow-wave: n = 9 trials from 4 rats; activated state: n = 9 trials from 5 rats) had a greater effect on power during the activated state. Once again, more effects were noted for power than for coherence. There were increases in power in all sites in Stage 2 and a large increase in the RCb (80%) in Stage 1.
A significant Stage by time interaction in [F (8, 288) = 2.11, p = 0.035] showed that power increased at Post1 in both stages, but then decreased slightly below baseline in Stage 1 while remaining elevated in Stage 2. There was also a time by site interaction in [F (16, 288) = 2.06, p = 0.010], indicating the cerebellar site was affected earlier and more strongly by the stimulation. We also saw a Stage by site interaction for low γ [F (2, 36) = 4.18, p = 0.023], in which the power in the cerebellum was also affected more strongly than cortical sites. This implies that the effects 50 Hz stimulation differed the most as a function of stage in the cerebellar site. Analysis of coherence showed that there was a main effect of time in high γ [F (8, 248) = 2.49, p = 0.013], with an early increase, followed by a slight decrease in coherence. When exploring specific Stage by site by frequency interactions for high γ, it was found that ipsi Cb-FrA and FrA-FrA coherence increased and contra Cb-FrA coherence decreased following 50 Hz stimulation. DISCUSSION The cerebellum is thought to play an important role in cognitive function through its interactions with the prefrontal cortex (Hoppenbrouwers et al., 2008;Strick et al., 2009;Bostan et al., 2013). Both slow and fast oscillatory rhythms are thought to coordinate interactions between the cerebellum and cortical sites (O'Connor et al., 2002;Courtemanche and Lamarre, 2005;Ros et al., 2009;Courtemanche et al., 2013;Popa et al., 2013;Chen et al., 2016), and rhythmic cerebellar stimulation has been used as a therapeutic intervention in some disorders (Schutter et al., 2003;van Honk, 2006, 2009;Demirtas-Tatlidede et al., 2010). The present study has examined the effects of cerebellar vermal stimulation at various rhythms on the entrainment of cerebellar and cortical LFPs under urethane anesthesia. Our results show that there are frequency-specific effects of cerebellar stimulation on both cerebellar and cerebral cortical LFP spectral properties, and that cerebellar stimulation at high frequencies (25 and 50 Hz) can also promote coherence in this cerebello-cortical network. Our findings also indicate that the effects of vermal stimulation are highly dependent upon the initial state of the networks, and that markedly different patterns of results were obtained, particularly for cortical sites, when stimulation was applied during the slow-wave versus the activated state. Cerebellar vermal stimulation during either the slow-wave state or activated state produced different effects on cerebellar hemispheric LFPs. Single-pulse and 50 Hz stimulation led to opposite changes in LFP power when delivered during the slowwave state as opposed to the activated state (Figures 4A,B). For this site, stimulation at a low rate would produce variable effects on the slower frequency bands; at higher rates, the effect was mostly to decrease the power at low gamma frequency and higher. This effect was clear for the 5, 25, and 50 Hz stimulations, and was most potent during slow-wave activity. There was also an effect of the 25 and 50 Hz stimulation in increasing power in the delta band. Stimulation of the vermis induced markedly different effects on the prefrontal cortex LFPs depending on the initial state. In the slow-wave state, 5 and 25 Hz stimulation induced a strong decrease in power in the beta to Fast frequency bands in both the right and left FrA. 
Stimulation at 1 Hz during the slow-wave state also had a strong effect: delta-to-beta activity decreased, while the low gamma-to-fast activity increased in the right FrA. In the activated state, however, stimulation using single pulses, and at 5, 25, and 50 Hz resulted in an overall increase in power across the delta-to-beta bands. State-Dependent Effects of Stimulation One of the main findings in our study is that the effects of stimulation were influenced by the stage of urethane anesthesia (Clement et al., 2008). This highlights the importance of the initial oscillatory state in determining the susceptibility of target structures for changes in LFP oscillations and entrainment within different frequency bands. Previous research investigating the effects of vermal stimulation on frontal oscillations in humans, cats, and rodents showed that low frequency stimulation mainly affects slow activity, while stimulating at higher frequencies increased activity in faster bands (Steriade, 1995;Schutter et al., 2003;Schutter and van Honk, 2006;Parker et al., 2017). Experiments reported here used various stimulation frequencies, and demonstrated a range of effects that were dependent on baseline oscillatory state. Pathways Mediating the Effects of Stimulation Reciprocal anatomical connections have been well established between the cerebellum and prefrontal cortex, via cerebellothalamo-cortical and cortico-ponto-cerebellar pathways (Kelly and Strick, 2003;Strick et al., 2009;Watson et al., 2009Watson et al., , 2014Buckner et al., 2011;Farzan et al., 2016), but how rhythmic cerebellar output modulates cortical activity is still an open question. The stimulation in the cerebellar vermis, in reaching the prefrontal cortex, likely coursed through the fastigial nucleus and then to the thalamus (Bostan et al., 2013;Lisberger and Thach, 2013). Stimulation of Purkinje cells, in the outermost layer of the cerebellum, leads to changes in the cerebellar output, which in turn modulates the output of deep cerebellar nuclei (DCN) Manto, 2013, 2016;Das et al., 2017). Because inputs from Purkinje cells to the DCN are inhibitory, increased activation of Purkinje cells with high frequency stimulation (Maeda et al., 2000b;Hallett, 2007) inhibits the tonic activity of the DCN. This would in turn decrease excitation in the thalamus. However, it is also quite possible that DCN neurons could also show rebound excitation (Buzsaki, 2006). Subsequent activation of extra-cerebellar areas via the thalamus may thus occur through rebound excitation within the DCN (Buzsaki, 2006;Hoebeek et al., 2010). This phenomenon has been reported mainly in thalamic, cortical, and DCN neurons (Grenier et al., 1998;Buzsaki, 2006;Hoebeek et al., 2010;Boehme et al., 2011). After the initial inhibition induced by stimulation, the T-channel is activated causing Ca 2+ influx, which leads to a slow rebound spike. Thus, the initial hyperpolarization of the fastigial nuclei, induced by electrical stimulation of the vermis, would lead to a burst of rebound spikes in the DCN up to 100 ms after the hyperpolarization ceases. If these spikes occur in synchrony and interact with the necessary opposing currents (mixed cation current, I h ), oscillations could be generated and would then propagate to thalamocortical pathways (McCormick and Pape, 1990;Buzsaki, 2006). 
The emergence of different oscillatory patterns thus depends not only on the strength and frequency of the applied stimuli, but also on factors regulating the intrinsic excitability and rhythmicity of neurons. Indeed, in this mode, when the effects of stimulation on oscillations rely on rebound excitation mechanisms (high frequency stimulation), the initial state of the neuron strongly impacts the effects of inputs (Buzsaki, 2006). This is in line with the state-dependent effects of stimulation that we have found here, with stimulation at high frequencies (25 and 50 Hz) leading to greater changes when initially in the activated state, and stimulation at 1 Hz inducing more effects in the slow-wave state. Low frequency stimulation on the other hand may cause a decrease in activity of Purkinje cells (Chen et al., 1997), which would reduce inhibitory input to the fastigial nucleus, and result in greater excitatory drive to the thalamus. In a study investigating responses to different types of stimulation, singlepulse stimulation of the cerebellar cortex (paravermal lobules VI/VII) increased the chance of spiking for a short period poststimulation (after a latency of ∼8 ms), but did not alter the firing frequency of DCN neurons (Hoebeek et al., 2010). Although the mechanisms are not fully understood, this effect on spike timing occurred in the absence of rebound excitation. In this study, vermal stimulation could entrain cerebellocortical networks. However, given the duration of the changes observed, stimulation was unlikely to have induced long-term potentiation (LTP). For instance, high frequency stimulation (100 Hz in bursts of 15 pulses, for a total of 1500 pulses) applied to the parallel fibers, in the most superficial layer of the cerebellar cortex, has been shown to induce LTP at synapses between parallel fibers and Purkinje cells (Jörntell and Ekerot, 2002). Therefore, although we did not assess induction of LTP in this study, the number of pulses delivered in our study was likely too low to lead to lasting plastic changes. Cortical Effects Our results show that the effects of stimulation are state-dependent. Indeed, LFP activity fluctuates in urethaneanesthetized rats in cyclic alternations that are similar to sleep stages (Clement et al., 2008). When applied in the slow-wave state, the faster (5, 25, and 50 Hz) stimulations produced a noticeable decrease in cortical power for the faster frequency bands. In the activated state, the same stimulations produced an overall increase in power across the slower bands. The bands most affected were thus markedly different between the two states. The influence of the initial brain state on the effects of stimulation has been investigated in humans, in studies using TMS or direct cortical stimulation, as well as in rats (Jackson et al., 2008;Alagapan et al., 2016;Connolly et al., 2016;Silvanto et al., 2017). The effects of the stimulation at 1 and 25 Hz provide good examples of this modulation by state. During the slow-wave state, 1 Hz stimulation would increase power in the 30-200 Hz range, while decreasing power in the 0-30 Hz band, however, 1 Hz stimulation did not show any effects in cortical sites during the activated state. The effects produced by 1 Hz stimulation can be interpreted partially by the mechanism of generation of slow-wave activity in thalamocortical networks, modulated by thalamic neuronal activity. 
Delta activity in the brain, sleeping and anesthetized, stems from an interaction between thalamic and cortical oscillators (Steriade, 2003). Optogenetic stimulation of thalamocortical neurons at 1 Hz triggers their firing of bursts of action potentials, and is also an optimal frequency for inducing cortical slow waves; stimulations at 1.5 Hz or higher on the other hand failed to entrain EEG activity (David et al., 2013). It is possible that 1 Hz stimulation of the cerebellar cortex in our recordings disrupted thalamic mechanisms that mediate delta activity, in a manner specific to the slow-wave state. This could decrease activity in the delta range while increasing activity in faster frequency bands. Optogenetic stimulation of cerebellar projections at 2 Hz was also shown to re-establish normal levels of delta activity in an awake rat model of schizophrenia, which shows lower delta activity in the medial frontal cortex similar to observations in schizophrenic patients (Parker et al., 2017). The threshold stimulation frequency for the cerebellar cortex to entrain delta activity would likely then be affected by the initial state in the cerebral cortex. As our results show, the initial state strongly affects the optimal stimulation pattern for entrainment at various frequencies. Conversely, in the slow-wave state, 25 Hz stimulation produced a decrease of power in a wide range of higher frequencies , while inducing a strong increase in the 0-55 Hz band in cortical sites during the activated state. This can be compared to the work of Steriade (1995), who used 300 Hz stimulation of the fastigial nucleus in ketamine/xylazine anesthetized cats, and showed an attenuation of slow rhythms, and an enhancement of 20-40 Hz oscillatory activity in the frontal cortex. We did not find this strong effect of high-frequency stimulation in decreasing slow-wave activity in our recordings, but we did find an increase in beta/gamma power following 25 Hz stimulation in the activated state. In addition, using stimulation at 100-200 Hz of the brachium conjunctivum (i.e., afferents to the thalamus from the cerebellar nuclei), the same team (Timofeev and Steriade, 1997) found an activation of the cat EEG at 30-100 Hz during ketamine/xylazine anesthesia. Again, we found a similar increase in power in these bands only during 25 Hz stimulation in the activated state. Differences in these results could be due in part to the type of anesthetic used, or to the different axon conduction speed and synaptic delays characteristic of different species (Buzsáki et al., 2013). Differences could also be due to the much higher stimulation frequencies used in those studies (Steriade, 1995;Timofeev and Steriade, 1997). In the rat, in order to increase motor cortical excitability, stimulation of the lateral cerebellar nucleus at different frequencies showed a greater facilitation at 30 Hz, similar to our effects at 25 Hz in the activated state (Baker et al., 2010). Similarly, stimulation of the same nucleus at 30-50 Hz increased contralateral cortical excitability, measured as motor evoked potentials, in a rat model of stroke (Park et al., 2015). It would be interesting to monitor the oscillatory state differences prior to stimulation in these awake animals, as it could have played a role in the optimal responsivity to stimulation. Overall, it does appear that prefrontal cortex networks can generate and resonate with beta and gamma rhythms and that activity in these bands can be modulated by cerebellar output (Sherfey et al., 2018). 
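As a point of reference for how the band-power changes reported above (and the coherence measures taken up in the next section) can be quantified, the sketch below uses standard Welch and magnitude-squared-coherence estimators; the sampling rate, band edges, window length, and variable names are illustrative assumptions, not the parameters of this study:

```python
import numpy as np
from scipy.signal import welch, coherence

fs = 1000.0  # assumed sampling rate (Hz)
bands = {"delta": (0.5, 4), "theta": (4, 8), "beta": (13, 30),
         "low_gamma": (30, 55), "fast": (55, 200)}  # illustrative band edges

def band_power(lfp, fs, bands, nperseg=2048):
    """Integrate the Welch power spectral density over each frequency band."""
    f, pxx = welch(lfp, fs=fs, nperseg=nperseg)
    out = {}
    for name, (lo, hi) in bands.items():
        m = (f >= lo) & (f < hi)
        out[name] = np.trapz(pxx[m], f[m])
    return out

def band_coherence(x, y, fs, bands, nperseg=2048):
    """Mean magnitude-squared coherence between two LFPs within each band."""
    f, cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    out = {}
    for name, (lo, hi) in bands.items():
        m = (f >= lo) & (f < hi)
        out[name] = cxy[m].mean()
    return out

# Surrogate data standing in for prefrontal and cerebellar LFP recordings.
rng = np.random.default_rng(0)
pfc = rng.standard_normal(60 * int(fs))
cb = 0.5 * pfc + rng.standard_normal(60 * int(fs))
print(band_power(pfc, fs, bands))
print(band_coherence(pfc, cb, fs, bands))
```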
Cerebellar Effects and Minimal Effects on Cerebello-Cortical Coherence We assessed here whether different frequencies of patterned stimulation can entrain cerebellar frequency-specific patterns that have been previously described (Courtemanche et al., 2013). In the slow-wave state, stimulation at 1 and 50 Hz produced more effects on cerebellar oscillatory activity. These stimulation frequencies caused an augmentation of 8-30 Hz power, which corresponds to the range of oscillatory activity in the granule cell layer (GCL) of the cerebellum, and this effect could have been mediated directly, or through pathways projecting to the GCL such as the parallel fibers. In addition, stimulation at 50 Hz also caused a strong increase in delta power, which could be due in part by the brief nature of this stimulation train, which may have activated multiple cerebellar units in-phase, likely through parallel fibers, especially if the stimulation was timed with the ascending phase or peak of a slow rhythm. In the activated state, effects within the 8-30 Hz range were mostly absent, but stimulations at 25 and 50 Hz had strong effects on the 0-8 Hz activity in the cerebellum. This could be again due to a phasic increase in excitation timed with a slow cerebellar rhythm. The 5 Hz stimulation also increased power in the theta band, potentially affecting a theta-related oscillatory pattern already present in the cerebellum (Courtemanche et al., 2013). Our results showed significant effects of stimulation mainly for measures of power across frequency bands, and less so for coherence measures. Synchronized activity between the cerebellum and cortex has been observed in a variety of contexts. Coherent activity in the alpha and beta frequency ranges occurs in the cerebellum and sensorimotor cortex during actions requiring somatosensory monitoring (O'Connor et al., 2002;Courtemanche and Lamarre, 2005). Functionally as well, synchronization of LFPs between the medial prefrontal cortex and the cerebellum at 5-12 Hz has been linked with adaptive performance in eyeblink conditioning during the early stages of learning (Chen et al., 2016). Multiple brain regions, including the amygdala, hippocampus, medial prefrontal cortex, and cerebellum must coordinate to acquire a variety of learned responses, such as the conditioned eyelid response in the eyeblink conditioning paradigm (Lee and Kim, 2004). Coherent activity between the cerebellum and prefrontal cortex across a variety of bands may thus contribute to acquiring appropriate behaviors through associative learning and during performance. There are other clear indications that the cerebellum is important in cortical synchronization (Courtemanche et al., 2013). The functional role of the cerebellum in gamma-band coherence between areas of the cerebral cortex has been demonstrated in rats; inactivating the cerebellum with a muscimol injection disrupted cortical coherence in gamma between the sensory and motor cortices, potentially interrupting transmission of sensorimotor information between these areas (Popa et al., 2013). Finally, a recent study also showed Purkinje cell simple spike timing is related to coherent cerebral cortical oscillatory activity (McAfee et al., 2019). Why did we not find clear effects on coherence? An obvious first consideration is the anesthetic state. 
The anesthetic state carries with it clearly different patterns of large-scale oscillations and coherence than the awake state (Steriade, 2003), but urethane anesthesia has been shown to be permissive to network oscillations (Maggi and Meli, 1986;Clement et al., 2008;Frederick et al., 2014;Robinson et al., 2017). Coherent activity in our preparation was abundant (see Figure 3), and the coherent slow-wave state represented a majority of the total duration of our recordings (e.g., see Figure 2). Ros et al. (2009) have shown that the cerebellum can generate slow oscillations that are synchronized with those of the neocortex, and that neocortical oscillations drive cerebellar rhythms. The strong slow-wave state throughout the recordings may have hindered our capacity to detect coherence effects. The state of the network in this study was likely similar to a restingstate condition, first described in idling states and during early stages of sleep in fMRI studies, but also in the anesthetized state in humans, monkeys, and rodents (Lu et al., 2007;Raichle, 2015). It is thus quite possible that during anesthesia, as in sleep, large-scale coherent slow-wave mechanisms protect the cortical circuits from outside disturbance via the thalamocortical circuit isolation properties (Steriade, 2003), and that this may reduce effects of stimulation on coherence. It is possible that over our particular anesthesia modes, coherent activity between sites acts as a filtering mechanism to suppress external inputs, as happens during functional inhibition to optimize performance (Courtemanche et al., 2003;Jensen and Mazaheri, 2010). Another consideration is that the placement of our frontal lobe recording electrodes and/or our cerebellar stimulation electrodes was perhaps not optimal to evaluate between-site synchrony. The methodological approach of evaluating the location of cerebral cortical best response to stimulation through evoked potentials could help determine an optimal electrode alignment (Watson et al., 2009(Watson et al., , 2014 and might further help in finding coherent sites. Effects of Cerebellar Stimulation on Frontal Cortical Networks Cerebro-cerebellar loops involving prefrontal cortical areas have received increased attention over the last few decades. The initial explorations concerned cerebro-cerebellar relationships in sensorimotor circuits (Allen and Tsukahara, 1974;Sasaki, 1979;Bloedel and Courville, 1981;Morissette and Bower, 1996;Courtemanche and Lamarre, 2005). In parallel, the identification of cognitive roles for the cerebellum was being progressively characterized through neuropsychological testing and studies in patients (Leiner et al., 1991;Akshoomoff and Courchesne, 1992;Courchesne et al., 1994;Akshoomoff et al., 1997;Mangels et al., 1998). Advances in brain imaging also indicated a cerebellar role in cognition (Roland, 1993;Allen et al., 1997Allen et al., , 2005Schmahmann, 1997;Allen and Courchesne, 2003;Buckner, 2013;Schmahmann, 2019). Anatomical reports showed precise functional connections between the cerebellum and prefrontal cortex in the primate (Schmahmann and Pandya, 1997a,b;Kelly and Strick, 2003) which could mediate cerebro-cerebellar loops involved in cognitive operations. The cerebello-cerebral connectivity displays multiple parallel loops mediating processes related to sensation, movement, and thought (Middleton and Strick, 2000;Strick et al., 2009;Bostan et al., 2013;Bostan and Strick, 2018). 
Even though the nature of the prefrontal cortex in rodents is the subject of some debate, multiple cognitive and executive functions are performed by the prefrontal cortex or similar regions in rats (Kolb, 1984; Uylings et al., 2003; Dalley et al., 2004; Kesner and Churchwell, 2011; Leonard, 2016). Cerebello-prefrontal cortex connectivity has also been confirmed using physiological and anatomical measures (Suzuki et al., 2012), and has been explored electrophysiologically, by evaluating cerebellar evoked potentials and cellular responses to medial prefrontal cortical stimulation, as well as through measures of prefrontal cortex neurophysiological activity following fastigial nucleus stimulation (Watson et al., 2009, 2014). In addition, and related to the results presented here, Watson et al. (2014) found a cerebello-cerebral directed coherence pattern in the theta range that was prominent during active locomotion, showing that these connections could support cerebello-cerebral communication. Our current study has contributed to understanding how cerebellar stimulation at different frequencies modulates oscillations and coherent activity within cerebello-cortical networks (Courtemanche et al., 2013). The effects of cerebellar stimulation on the activity of cerebello-prefrontal loops remain largely unexplored in the rat, a species in which the underlying neuronal pathways and mechanisms can be assessed. Our findings show that even under anesthesia, cerebello-cortical network interactions can be modulated through cerebellar stimulation. A multi-site, multi-electrode approach could provide fine-grained mapping through evoked responses and/or unit activity to study the spatiotemporal properties of cerebello-cortical connectivity. Such an approach would also allow changes in evoked synaptic responses to be monitored in association with ongoing EEG rhythms and state (Ozen et al., 2010; Marquez-Ruiz et al., 2014), providing some insight into the strength of synaptic pathways during oscillations (Timofeev et al., 1996; Rosanova and Timofeev, 2005). Future studies could also investigate the effects of cerebellar stimulation in awake animals, during behavior or rest. These would require the study of oscillatory and synchronous networks at smaller timescales, with analytical methods able to follow fast changes in network configuration, such as phase synchrony analysis (Lachaux et al., 1999). The initial oscillatory or activation state can be expected to strongly impact the effects of stimulation. In the activated state, cerebellar stimulation with single pulses, as well as with repeated pulses at 25 Hz, was optimal in generating increased delta-gamma band activity, which could correspond to an analog of cortical cross-frequency entrainment (Helfrich et al., 2014, 2016). It would also be interesting to test the physiological effects of cross-frequency coupled nested rhythms in the awake, behaving animal. As the cerebellum contributes to higher-order functions, understanding how cerebello-cerebral loops operate and respond to stimulation is essential in uncovering the underlying physiology, but also in developing new methods to address numerous disorders.

DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request from the corresponding author.

ETHICS STATEMENT
The animal study was reviewed and approved by the Concordia University Animal Research Ethics Committee.
AUTHOR CONTRIBUTIONS
ST, CC, and RC designed and prepared the experiments, acquired the data, and wrote the manuscript. ST and RC analyzed the data.

FUNDING
Funding for this study was provided by a grant from the Concordia VPRGS to RC and CC, and by grants from the FRQS (Québec) to the Center for Studies in Behavioral Neurobiology. CC was funded by NSERC. ST received the William R. Sellers graduate award for this thesis work.

ACKNOWLEDGMENTS
We thank Dr. Thanh Dang-Vu for comments on the design and data analysis of the study, Dr. Jennifer Robinson for support in some experiments, Ariana Frederick for graphical examples, and Marie-Ève Dumas for coding support in the analysis.
Mean Field Derivation of DNLS from the Bose–Hubbard Model
We prove that the flow of the discrete nonlinear Schrödinger equation (DNLS) is the mean field limit of the quantum dynamics of the Bose–Hubbard model for $N$ interacting particles. In particular, we show that the Wick symbol of the annihilation operators evolved in the Heisenberg picture converges, as $N$ becomes large, to the solution of the DNLS. A quantitative $L^p$-estimate, for any $p \ge 1$, is obtained with a linear dependence on time, owing to a Gaussian measure on the initial coherent-state data.
The aim of the present paper is to show that the quantum expectation of $\hat b_k/\sqrt{N}$ is close to $u_k(t,\omega)$ in a suitable $L^p$-measure sense when $N$ is large, which is precisely what we mean by the mean field limit of (2). This is achieved thanks to an explicit estimate in terms of the parameters of the model, global in time. The literature on the mean field derivation of the nonlinear Schrödinger equation (NLS), the Hartree equation, and more generally on the study of many-body quantum-mechanical systems, is quite rich (see Sect. 2). However, a direct mean field derivation of the DNLS (1) together with quantitative, explicit estimates seems to be missing. In some works (see for example [1,33,37] and references therein) the DNLS is obtained directly from the NLS equation, in the framework of the tight-binding approximation. However, combining these two kinds of results yields a mean field estimate for the DNLS that grows exponentially in time (essentially because of the Grönwall lemma). In the present paper, a mean field estimate growing linearly in time is provided. Our first step in dealing with the mean field asymptotics is to consider the rescaled operators $\hat a_k := \hat b_k/\sqrt{N}$ of Eq. (3) and the Heisenberg equation (4) they satisfy, where $\hat a_k(0) := \hat a_k$ and $1 \le k \le L$. Notice that Eq. (4) is clearly the operator counterpart of (1). We now rewrite Eq. (4) in terms of the Wick symbols (see Sect. 3.1) of the operators $\hat a_k(t)$ and $\hat H$. Let $\varphi_\omega(z) := e^{\omega\cdot z - \frac12|\omega|^2}$ be the normalized coherent states in $F_B(\mathbb{C}^L)$ and recall that $\hat a_k\,\varphi_{\sqrt{N}\omega} = \omega_k\,\varphi_{\sqrt{N}\omega}$. Define the symbols
$$\rho_k(t,\bar\omega,\omega) := \langle \varphi_{\sqrt{N}\omega},\,\hat a_k(t)\,\varphi_{\sqrt{N}\omega}\rangle; \qquad (5)$$
$$H_N(\bar\omega,\omega) := \langle \varphi_{\sqrt{N}\omega},\,\hat H\,\varphi_{\sqrt{N}\omega}\rangle = N\,H(\bar\omega,\omega) \qquad (6)$$
$$= N \sum_{1\le j\le L}\Big[\,E_j|\omega_j|^2 + J\big(\bar\omega_{j+1}\omega_j + \bar\omega_j\omega_{j+1}\big) + \tfrac{U}{2}|\omega_j|^4\,\Big]. \qquad (7)$$
Then, by the Wick bracket (see [9,15]) we get the evolution equation (8) with initial data $\rho_k(0,\bar\omega,\omega) = \omega_k$. We recall that, although in general the Wick bracket is defined as an asymptotic series (9), here it is a finite sum $\{\,\cdot\,, H\}_{\mathrm{Wick}} = L_1 + L_2$, with $L_1$ and $L_2$ given in (10). Notice that $L_1$ is the Poisson bracket in the variables $(\bar\omega,\omega)$. Thus, it is easily seen that the DNLS (1) reads exactly as Eq. (11). By denoting $\Delta := \{(\bar\omega,\omega) \mid \omega \in \mathbb{C}^L\} \subset \mathbb{C}^{2L}$, and $(\bar\Phi_t, \Phi_t): \Delta \subset \mathbb{C}^{2L} \to \mathbb{C}^{2L}$ the flow of $\dot\gamma = i\big(\partial_\omega H(\gamma), -\partial_{\bar\omega} H(\gamma)\big)$, the relation (12) follows. The equality (12), together with Eq. (8), tells us that $\rho_k - u_k$ is a kind of semiclassical perturbation term, and thus we expect $\rho_k - u_k \to 0$ as $N \to +\infty$. Indeed, we will prove such a result with respect to an $L^p(\mu_N)$-norm, where $p \ge 1$ and $\mu_N$ is a suitable Gaussian measure, invariant under the DNLS flow.
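For orientation, since the displayed equations (1) and (2) are referenced throughout this excerpt but were lost in extraction, a reconstruction consistent with the Wick symbol in (7) can be written down; the $1/N$ scaling of the interaction, the sign conventions, and the periodic boundary condition below are inferences rather than quotations of the original equations:

$$\hat H=\sum_{j=1}^{L}\Big[E_j\,\hat b_j^{\dagger}\hat b_j+J\big(\hat b_{j+1}^{\dagger}\hat b_j+\hat b_j^{\dagger}\hat b_{j+1}\big)+\frac{U}{2N}\,\hat b_j^{\dagger}\hat b_j^{\dagger}\hat b_j\hat b_j\Big],$$

so that $\langle\varphi_{\sqrt N\omega},\hat H\,\varphi_{\sqrt N\omega}\rangle=N\,H(\bar\omega,\omega)$ as in (6)–(7), and correspondingly

$$i\,\dot u_k=E_k u_k+J\,(u_{k+1}+u_{k-1})+U\,|u_k|^{2}u_k,\qquad 1\le k\le L,$$

with the site indices understood modulo $L$.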
With respect to this target, recall that the total number operator defined as is conserved by the quantum flow, i.e., Moreover, the well-known 2 -conservation law for the DNLS can be rewritten as Both these two important properties will be used in the proof of Theorem 1, and for this reason we define the invariant Gaussian probability measure where ω = x + iy, dω ∧ dω := π −L dxdy and c N,L := N L is the normalization constant. This measure is linked (see Proposition 1) to a weighted trace formula involving Wick operators that will be an important tool to our approach. We are now ready to state the main result of the paper. Theorem 1. Let u(t, ω) be the flow of the DNLS Eq. (1) and let ρ k (t, ω) be the solution of (8) Here, Γ and S(α, β) denote the Gamma function and the Stirling numbers of second kind, respectively. We notice that (17) can be written with the condition L/N → 0 as N → +∞, this means that the number of particles can be supposed to be large with respect to the number of L "sites" of the Bose-Hubbard model, which is the regime considered in some experiments, see for example [40]. We also stress that the L p -norm used above allows to discuss, in the measure sense, the pointwise estimate for |ρ k −u k |(t,ω, ω). Indeed, we have the following Corollary 1. Fix a parameter 0 < < 1 2 and define the set Then, for any Notice that p → A p is an increasing function, whence inequality (21) provides, when N is fixed and p is large, a measure of the region where |ρ k −u k | is large. On the other hand, in the case of a fixed p ≥ 1 and large values of N we have a vanishing measure of the region where |ρ k − u k | is super-linear in time. We underline an important consequence of this observation. Indeed this means that, from the viewpoint of the Gaussian measure, if this superlinear (in time) mean field estimate is sharp then it is associated with a set of coherent states which is negligible as N → +∞ with the rate shown in (21). Of course, any exponential (in time) upper bound gives rise to the same conclusion. An interesting open problem is to show that the same feature holds for more general quantum dynamics than the one associated with our manybody operator (2). Our paper deals with the Bose-Hubbard model, a simpler setting with respect to that of quantum field theory. However, the explicit estimate in terms of the parameters of the model and its linear dependence on time in (17) seem to be a novel and promising result with respect to other kind of mean field estimates on the NLS equation. Furthermore, we stress that Theorem 1 can be seen as an Egorov-type result, written to the first order and with respect to the L p -norm, for Wick symbols. With respect to this observation, we recall Proposition 5.1 in [16] where it is proved the convergence, as → 0, of the Wick symbol of an evolved quantum observable toward the Weyl symbol composed with the Hamiltonian flow. In this result, the well-known bound of the Ehrenfest time |t| < T is shown. We also recall Proposition 5.10 and Theorem 5.6 in [5] where, in the framework of evolved Wick operators on the Fock space and with a quantum dynamics much more general than our, it is proved the convergence toward the solution of the Hartree equation as 1 N → 0, but the estimate on the remainder in Theorem 5.6 is again local in time. In our main result, we avoid locality in time by making use of the L p (μ N )-norm (the meaning and the properties of the measure μ N are clarified in Proposition 1 and 2). In Sect. 
7.2 of [26], the authors discuss, thanks to the Wick quantization for a class of symbols, how the many-body quantum mechanics of bosons can be viewed as a deformation quantization of the Hartree theory. We stress that our paper also makes use of Wick operators, but deals with a different class of quantum dynamics and another way to get the derivation of the mean field dynamics, which is here a discrete NLS. To conclude, we stress the absence in (17) of the parameters E and J involved in the quadratic part of the operator H in (2), in agreement with a well-known elementary result: any quantum expectation of the Heisenberg equations of a linear system (quadratic Hamiltonian) yields the classical equations of motion. Thus, the distance between the Wick symbol ρ k solving Eq. (8) and the k-th component u k of the flow for Eq. (11) is ruled only by nonlinearity, namely by the parameter U . As a consequence, Theorem 1 holds for Hamilton operators H with a completely general quadratic part. This is not the primary target of the present work, but we observe here that a more general setting of H ensures a larger set of invariant measures for the DNLS flow and whence an interesting open problem is to study the link between this kind of mean field estimates and the possible various invariant measures. We also remark that the equation in (1) with general quadratic part, that usually describes particles in one-dimensional periodic lattice, can be used also to modelize two-and three-dimensional lattices with different topologies (see for example [22] and references therein). The paper is organized as follows. In Sect. 2, we shortly comment on the most influential results in the literature on the subject, to finally stress the main innovations characterizing our work. Sect. 3 is devoted to the proofs of Theorem 1 and Corollary 1 stated in the Introduction; the proof is divided into various technical steps. Sect. 3.1 is "Appendix" consisting of three subsections. Synopsis of the Literature and Motivations of the Work We here provide a commented list of some papers involving NLS, Hartree equation, and more in general the study of many-body quantum-mechanical systems in mean field and semiclassical limit, relevant and somehow connected to our work. A first reference work in the field, is the review [39], where the author discusses a variety of classical as well as quantum models for which kinetic equations can be derived rigorously, and where the probabilistic nature of the problem is emphasized. A second reference paper, of particular interest to our work for its use of coherent states, is [29], where Hepp shows that in the many-body framework the classical limit of the expectation values of products of Weyl operators, translated in time by the quantum dynamics and taken on coherent states centered in x-space and p-space are shown to become the exponentials of coordinate functions of the classical orbit in phase space. The Hepp results have been extended in [27]. For a recent review and discussion of Hepp's method, we also address the reader to Sect. 10.4.2 of [21]. The convergence of the N -particle Schrödinger dynamics of bosons toward the Hartree dynamics is proved in the mean field limit in [24]. The authors work in the Heisenberg picture (as in our paper) with a class of bounded operators, whereas we consider annihilation operators, that are unbounded, and another type of convergence. A rigorous derivation of the cubic NLS in dimension one is shown in [4]. 
An approach for deriving higher-order corrections to the mean field limit for the quantum systems is provided in [11], a simple and effective method is given in [35]. A reference role in the literature is played by those recent works dealing with the rigorous version of the Bogoliubov theory of superfluids; see, e.g., the review [38]. The Gross-Pitaevskii equation is rigorously deduced, for example, in [14], whereas the fluctuations around it are studied in [13,17]. The convergence to a limiting Hartree dynamics is instead studied in [6,36], whereas in [7,10] and the Hartree-Fock-Bogoliubov and the Bogoliubov-de Gennes equations are derived by the method of the quasi-free reduction. Further results are, for example, a derivation of the 1D focusing cubic NLS obtained in [20], where the difficulties due to the attractive interaction are discussed and new energy estimates are shown, and a mean field derivation of the defocusing 2D cubic NLS is provided in [28]; the mean field dynamics of a mixture of bosons is treated in [32]. The literature on the subject is actually huge, and the above description represents just a short summary of it. However, with the respect to the existing framework of methods and results, our contribution here is characterized by a certain number of aspects deserving a short discussion. 1. We consider the Bose-Hubbard lattice model (2). This is certainly a context much simpler than the quantum field theory of a boson gas generally considered in the literature quoted above. However, its interest is motivated by the modern experiments on many-body effects in optical lattices, where the lattice models are the basic tool for the theoretical interpretation of the results [12,19]. 2. The privileged physical quantity considered here is the coherent expectation of the local annihilation operator, which is shown to satisfy the DNLS equation within a certain approximation limit. As specified below, we would be able to do the same with any observable, say any polynomial of the Dirac operators. The choice of the annihilation operator, besides its simplicity, is quite natural if, along an Ehrenfest-like line of thought, one wants to compare the quantum expectation of the Heisenberg equation of a certain operator with the "classical" scalar equation obtained by replacing the operator with its quantum expectation in the Heisenberg equation itself, as heuristically done in physics. 3. A clear innovation of our approach consists in distributing the initial data (i.e., coherent states) according to an invariant probability measure, which then calls naturally for the use of the L 2 norm (which is then generalized to L p ), as typical and meaningful in statistical mechanics, the right framework for this kind of problems; for the experimental relevance of coherent states see [12,19]. However, in certain applications, e.g., quantum computing, special initial conditions, and thus point-wise estimates, may play the relevant role. Indeed, we refer for example to the quantum walks in the Bose-Hubbard model, that are unitary processes describing the evolution of initially localized wave functions on a lattice potential, see [30]. Concerning the choice of the measure, we take the Gaussian one, inherited by the quantum trace measure with density e −λ N , λ being a suitable parameter. Of course, in a statistical mechanical framework, one would like to work with the Gibbs-Von Neumann density, namely e −β H , β being the inverse temperature. The latter point makes part of a work in progress. 
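Since the displayed formulas of Proposition 1 (stated in the next passage) did not survive extraction, it may help to record here, as an inference consistent with the normalization $c_{N,L}=N^{L}$ and with the relation $e^{\lambda}=N+1$ derived in its proof, the schematic form of the weighted trace formula linking the density $e^{-\lambda\hat{\mathcal N}}$ just mentioned to the Gaussian measure $\mu_N$ of (16):

$$d\mu_N(\omega)=N^{L}\,e^{-N|\omega|^{2}}\,d\omega\wedge d\bar\omega,\qquad \frac{1}{\gamma_\lambda}\,\mathrm{Tr}\big(e^{-\lambda\hat{\mathcal N}}\,\mathrm{Op}^{W}(g)\big)=\int_{\mathbb C^{L}} g(\bar\omega,\omega)\,d\mu_N(\omega),\qquad \gamma_\lambda:=\mathrm{Tr}\,e^{-\lambda\hat{\mathcal N}}.$$

A per-mode check: for $g=\bar\omega_k\omega_\mu$ (so that $\mathrm{Op}^{W}(g)=\hat b_k^{\dagger}\hat b_\mu$, cf. Remark 1 below) both sides equal $\delta_{k\mu}/N$, using $e^{\lambda}=N+1$.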
Here, we only stress that the Hamiltonian (2) reads H = E N + · · · , the dots denoting the other two terms, N commuting with both of them. As will be further discussed below, and expected from the tight-binding assumptions made to deduce the Bose-Hubbard model, the first term E N is the leading one with respect to the other two. In a sense, we are thus considering an invariant measure that is approximately connected to the Gibbs one. 4. The joint use of an invariant measure on the initial coherent states and of the Wick formalism allows us to bound distance between the symbol of an observable at time t and the classical evolution of its initial symbol by a constant growing linearly with t. The linear dependence on time is not a particular feature of the Gaussian measure, the latter being instead quite convenient in order to get an explicit estimate of the overall constant multiplying time. Within the framework of results in measure on coherent states, the linear growth in time of our bound represents an interesting news, since most of time dependencies obtained in the literature up to now are typically exponential, which is an unavoidable consequence of the Grönwall lemma. Results In this section, we provide the proof of the main theorem we have stated in the Introduction. To such a purpose, we will need some preliminary lemmas and propositions. Among them, Lemmas 1 and 2 are just quoted and used, their statements and proofs being reported at the end of the section. The following result provides a weighted trace formula for Wick operators, involving the positive definite operator e −λ N with λ > 0. In order to make a link with the Gaussian measure μ N given in (16), we have to write a bijective relation between the parameters λ and N . This result will be useful for the subsequent result on expectation values of Wick operators under quantum dynamics. Proposition 1. Let μ N be as in (16). Let Op W (g) be a Wick operator on where γ λ := Tr(e −λ N ) and e λ = N + 1. Proof. We begin by the equality and notice that the Wick symbol of e −λ N reads Equality (25) allows to write the constant Thanks to the Wick-product, (24) can be rewritten as We also remind formula (2.38) in [15] that provides a link between Wick and anti-Wick symbols where ∂ω k ∂ω k , and recall that Wick and anti-Wick symbols of e −λ N are unique. Now write explicitly the Wick-product and integrate by parts the integral in (27). This gives (formally) Recall that where c N,L := N L . Our target is thus to prove the well-defined equation namely which is solved by μ = N/(N + 1), and since μ = 1 − e −λ we recover Remark 1. We now recall thatb † kb μ = Op W (g) when g =ω k ω μ , (see Sect. 3.1). For these Wick operators, Proposition 1 reads Since {z α √ α! , α ∈ Z n + } is an orthonormal set in the Fock-Bargmann space (see [15]), an easy computation shows that Thus, equality (37) can be considered as the version, in the Fock-Bargmann space, of the Quantum Wick Theorem showed in [25] that works in the Fock space and with the related bosonic creation and annihilation operators of quantum field theory. In the next, we provide a kind of quantum mean value formula for the time evolved G(s) := U † (s) GU (s) where U (s) = e −i Hs with H as in (2) and G = Op W (g) are Wick operators (see Sect. 3.1). This result will be applied within the proof of Theorem 1 for operators of type G = (â † kâ k + 1 N ) p with creation and annihilation operators as in (3). 
This tool allows to avoid, in our setting and for our estimates, the well-known problem of Ehrenfest time, as well as to avoid the application of Grönwall Lemma (and thus exponential in time upper bounds) used in many papers on mean field estimates for NLS equations. Proof. We apply Proposition 1 and recall that the trace is invariant by unitary conjugations of operators, so that Now we recall that [ N, H] = 0 and whence [ N, U (s)] = 0, which gives and applying again Proposition 1 for G(s) := U † (s)Op W (g)U (s) and, in view of Remark 2, we conclude This observation will be useful in the application of this equality with a = √ N . We provide two technical lemma used in the next. Lemma 1. Let P(ω, ω) be as in (68), then for any 1 ≤ p < ∞ there exists a positive constant C 1,p such that Moreover, for a positive constant C 2,p . Proof. . We first notice that for any fixed ω ∈ C L , P(ω, ω) is a sum of real nonnegative numbers , so by using Hölder inequality we get Since for any integrating with respect to Gaussian measure and performing the change of variables ω j = √ Nω j we have For each j = 1, . . . , L, we factorize the integrals not containing ω j , so introducing the variable v ∈ C and its corresponding measure dv ∧ dv we have where the Euler Gamma function has been introduced. Notice that in the first line we exploited the definition of c N,L = N L , while in the second line we used the fact that we have exactly L equal integrals. Taking the fourth of root in the last expression, we get inequality (44) with To get (45), we need to compute the mean value ϕ √ Nω ,n α k ϕ √ Nω =: n α k for any positive integer α, wheren k =â † k (0)â k (0). We find that from the definition of Wick- * product there exists a recurrence relation between these quantities where 1 is the constant function 1(ω, ω) ≡ 1, so that in general where S(α, β) is the Stirling number of the second kind with integer parameters α and β (see computations below). Sincen k and N −1 commute as operators we can expand (n k + N −1 ) 2p using the binomial theorem Taking again the fourth root, we get (45) with constant We now complete the proof of this lemma, showing that the coefficients of the polynomial in (52) are the Stirling number of the second kind (see [3], Par. 24.1.4 for their definition and properties). By the recurrence relation (51), we have where we used the fact that S(α, α) = S(α, 1) = 1, as is easy verified using (51). Comparing the last expression with the general expansion of n α+1 k as in (52) with exponent α + 1, we see that which is precisely the recurrence relation defining Stirling numbers. Proof. We begin by In particular, the second term can be rewritten and this last form equals φ √ Nv , Op W (g) In what follows, we get an estimate for |ρ k (t,ω, ω) − u k (t, ω)| for any fixed ω ∈ C L . This will be used, in the proof of Theorem 1, to have the L p (μ N ) estimate. Proposition 3. Let with H as in (6). Let u(t, ω) := (u 1 , . . . u L )(t, ω) be the solution of (1), and Then, ds. (69) Proof. The semigroup identity applied to our case gives where the operator L 2 reads and thus We now recall the definition where φ √ Nv denotes the complex conjugated coherent state. 
Thanks to Lemma 2, Applying twice this formula, we get Applying the same computations for the derivatives onv j , we get The sum in (73) can now be rewritten as which simplifies to The sum exhibits the following upper bound namely We need to get an estimate for â † and thus We now look at By using againâ and hence Inserting (90)-(93) into (88), we get Thus, Ann. Henri Poincaré We observe that and As a consequence, and recall equalities (71)-(73) which imply the statement (69). In view of previous propositions, we can now provide the proof of the main result of the paper. Proof of Theorem 1. Recalling (69), we define the positive function and for the sake of simplicity we avoid to write the dependence on (ω, ω). More in details, The Hölder inequality fg L 1 ≤ f L p g L q with 1/q + 1/p = 1, allows and hence t 0 ψ(s) ds This gives We now focus our attention to The invariance of μ N under the flow Φ t−s implies where we have just defined the positive definite operator Now assume that p = 2 m with m ∈ N so that We get, thanks to the normalization of μ N , and recalling (109), The Cauchy-Schwarz inequality gives Sinceâ † k (s)â k (s) + 1 N is positive definite, we have the upper bound Observe that, since U (s)U (s) = Id, Now apply Proposition 2 and Remark 3 in order to rewrite (114) as Integrating these terms, see Lemma 1, we have two constants C 1,p and C 2,p > 0 such that and Thus, We are now in the position to conclude so that by defining we have, in the case p = 2 m with m ∈ N, Now observe that, thanks to normalization of μ N and a simple application of Hölder inequality, we have ρ for any α ≥ p. Thus, fix α := 2 p so that ensures now for all p ≥ 1 the inequality where the last inequality is guaranteed since μ N is a Gaussian type measure and |ω k | p is a polynomial term. Inequality (125) gives ρ k (t) − u k (t) L p (μN ) < +∞ and thus ρ k L p (μN ) < +∞. An immediate consequence of Theorem 1 is the next corollary. Proof of Corollary 1. Let 0 < < 1 2 and define the set Then, Recalling inequality (125), we get hence under for example the assumption that |U rs kj | ≤Ū . Indeed, in this more general case the dependence from the parameter L into the L p -estimate of Theorem 1 will be L 4 /N in place of L/N , namely where the constant A p will be changed by a more general formula. In the current paper, we have considered the most simple setting involving the quantum dynamics driven by H as in (2) in order to make more clear and direct the exposition of our approach of mean field estimates for Wick operators on Bargmann space. Moreover, concerning our focussing on the dynamics of a k as the main physical quantity, we stress that along the same lines, we can treat the dynamics of polynomials in a k and a † k . For example, the quantity could be considered in the same way, and the asymptotics as N → +∞ recovers, in L p (μ N )-norm, the quantity related to the components j and k of the DNLS flow at time t. Finally, for what concerns the problem of reduced density matrices (see for example [35,36]), we recall the setting of the operator Γ Ψ : L 2 → L 2 (see Sects. 10 where Ψ = Ψ (N ) is fixed in the N -particle sector F (N ) L 2 s (R N ; C) of the Fock space, and where b * (u j ) and b(u k ) are the bosonic creation and annihilation operators on the Fock space, and associated with a fixed orthonormal set {u j } j∈N of L 2 (R; C). A preliminary treatment of the link between Γ Ψ with the contents of our paper is reported in the PhD Thesis of one of the authors [34]. 
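Two displayed expressions in the concluding remarks above (the two-site observable whose large-$N$ limit is described, and the definition of $\Gamma_\Psi$) were lost in extraction. Hedged reconstructions, offered for orientation only since the authors' exact normalizations cannot be recovered from this excerpt, are

$$\big\langle\varphi_{\sqrt N\omega},\,\hat a_j^{\dagger}(t)\,\hat a_k(t)\,\varphi_{\sqrt N\omega}\big\rangle\;\longrightarrow\;\bar u_j(t,\omega)\,u_k(t,\omega)\qquad\text{in }L^{p}(\mu_N)\ \text{as }N\to+\infty,$$

and, for the one-particle reduced density matrix, the standard definition

$$\langle u_j,\Gamma_\Psi\,u_k\rangle_{L^{2}}=\frac{1}{N}\,\big\langle\Psi,\;b^{*}(u_k)\,b(u_j)\,\Psi\big\rangle,$$

up to the choice of normalization and of index ordering.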
Fock-Bargmann Space and Wick Quantization In this subsection, we provide an overview of Fock-Bargmann space and the Wick quantization, recalling the basic notions we need in the framework of this paper. Here, we mainly follow the notations of Sect. 5.2 in [15], but we also address the reader to Sects. 1.6-2.7 of [23] and Sect. 2.6 in [21]. LetĀ(C n ) be the set of the anti-analytic functions f : C n → C. The Fock-Bargmann space is defined as with a scalar product here z := x + iy and dz ∧ dz := π −n dxdy, thus the integral can be written over R 2n . In this paper, we consider n = L where L is a parameter of the Bose-Hubbard model. The creation and annihilation operators are defined as: Notice thatb † k ,b k are well defined on F B (C n ) and [b k ,b † μ ] = δ kμ Id. Coherent states are represented, with its normalization e − 1 2 |ω| 2 , as For a given operator A : whereas outside the diagonal (ω, ω) the Wick symbol reads The Wick quantization of an entire function σ : C n × C n → C is given as In view of these settings, we have A = Op W (σ( A)), and we call this a Wick operator. To be more precise about the set of these operators, suppose that A (possibly unbounded) is defined on F B (C n ) together with its adjoint A † , and assume that for all ω ∈ C L , φ ω belongs to the domains of A and A † . Then, ω → σ( A)(ω, ω) is a smooth function on C n and moreover σ( A)(ω, ω) is the restriction on the diagonal of σ( A)(z, ω) as in (137), which is furthermore an entire function (see [23]-pg. 139). As shown in Proposition 1.69 of [23], any entire function K(z, ω) is uniquely determined by its restriction to {z =ω}. Thanks to these observations, A = Op W (σ) is uniquely related to the symbol on the diagonal, and for this reason frequently in the literature one refers to A as the Wick quantization of (136). We also stress that if the starting point is the definition (138) one must prove, for a given entire function σ, that Op W (σ) is well defined on F B (C n ) is the sense we have just described. A simple computation shows that These equalities directly allow to write the Wick symbol of the Bose-Hubbard operator H in (2) φ ω , and the rescaled Wick symbol We also recall that any bounded operator on F B (C n ) is a well-defined Wick operator. Furthermore, a large class of Weyl operators (see Sect. 2.1 in [23]) can be rewritten as a Wick operator by the following link of symbols σ Wick = e Δ/2 σ Weyl , namely (for = 1) see also Proposition 2.97 in [23] for the link with standard quantization, and more detailed setting for the allowed symbols. The set of Wick operators is closed under composition, and the Wickproduct is defined as the symbol of the composition of two operators, It can be shown (see [15]) the following asymptotics (in multi-index notation) where ∂ω r := ∂ω i1 ∂ω i2 . . . ∂ω ir . About the convergence of the right-hand side, we address the reader to [9]. We stress that in the asymptotics (144) the semiclassical parameter is = 1 and the absence of the factor 2 r used in [9] is a consequence of the setting of the scalar product in (133). The Wick bracket is defined as the symbol of the commutator For trace class Wick operators, a very useful formula gives Tr (Op W (σ)) = σ(ω, ω) dω ∧ dω. Remark 4. Let U (t) := e −i Ht andb k (t) := U † (t)b k U (t). Recalling Remark 2, we have thatb k (t) is a Wick operator for any t ≥ 0. We denote now its symbol by σ k (t,ω, ω). Moreover, it is easy to see that ρ k (t,ω, ω) hence the operator Op W (ρ k ) is well defined in the Wick quantization. 
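Several displayed definitions in this subsection (the scalar product on $F_B(\mathbb C^{n})$, the action of $\hat b_k^{\dagger}$ and $\hat b_k$, the coherent states, and the Wick symbol) were lost in extraction. The standard conventions of the cited references, stated here in the more common analytic form for orientation (the paper itself works with the anti-analytic space $\bar A(\mathbb C^{n})$, where the roles of $z$ and $\bar z$ are exchanged; this is an inference, not a quotation), read

$$\langle f,g\rangle:=\int_{\mathbb C^{n}}\overline{f(z)}\,g(z)\,e^{-|z|^{2}}\,dz\wedge d\bar z,\qquad dz\wedge d\bar z:=\pi^{-n}\,dx\,dy,$$
$$(\hat b_k^{\dagger}f)(z)=z_k\,f(z),\qquad(\hat b_k f)(z)=\partial_{z_k}f(z),\qquad[\hat b_k,\hat b_\mu^{\dagger}]=\delta_{k\mu}\,\mathrm{Id},$$
$$\varphi_\omega(z)=e^{\omega\cdot z-\frac12|\omega|^{2}},\qquad\sigma(\hat A)(\bar\omega,\omega)=\langle\varphi_\omega,\hat A\,\varphi_\omega\rangle.$$

With these conventions $\hat b_k\varphi_\omega=\omega_k\varphi_\omega$ and, for example, $\sigma(\hat b_k^{\dagger}\hat b_\mu)(\bar\omega,\omega)=\bar\omega_k\omega_\mu$, consistent with Remark 1 above.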
Remarks on Phase Space Analysis Let Ψ (t) be the solution of the quantum dynamics for some fixed initial data Ψ (0) ∈ F B (C L ), for example a normalized function in the N -sector of F B (C L ) whence representing N -particle states (see [23], pg. 48) and Π Ψ (t) the related projection operator. Let us consider the related Wick symbol σ 0 (ω, ω) := σ W ( Π Ψ (0) )(ω, ω) = | φ ω , Ψ(0) | 2 . A way to study the semiclassical localization of the operator as the parameter h = 1/N → 0 is to describe the essential support (see Theorem 8.16 in [41]). In order to do this, one has to study the semiclassical Wave Front set, also called Frequency Set (see for example [31], or [18] for the periodic setting which is the one of the current paper) where ψ α belongs to an orthonormal set, given for example by eigenfunctions labeled by the multi-index α ∈ N L of the number operator N := k b † b k . The phase space localization of this set can be described by the use of the coherent states φ √ Nω and study of the FBI transform φ √ Nω , B(t)ψ α (see Sect. 3.6point 3 of [31]) which reads (thanks to coherent states decomposition) Notice that these are Gaussian-type integrals. Our estimate in Proposition 3 can be obtained in the same way for | φ √ Nω , B(t)φ √ Nv | (i.e., also outside the diagonal ω = v), where instead of the evolved annihilation operator a k (t) we have Π Ψ (t) . Thus, our L 2 -estimates in Theorem 1 with respect to Gaussian measures can be easily adapted to get a semiclassical estimate for the integral (152) with a more general invariant measure. To conclude, we stress that higher-order corrections of (150) implies better localization estimates for the function ω → φ √ Nω , B(t)ψ α in (152). This can be done in the spirit of Egorov Theorem (see [8,16]) by iterating the semigroup identity (70) working on the time evolved Wick symbols governed by Eq. (8).
Environmental protection as a global bioethical principle : Protestant faith tradition in conversation with United Nations Educational , Scientific and Cultural Organization The discipline of bioethics as differentiated from medical ethics originated in the 70s of the previous century. It differs from medical ethics in respect of the fact that it focuses not only on the needs and issues of the individual but also on social and environmental aspects connected with health. It has become clear that biomedical ethical proposals and solutions alone are not sufficient for promoting health and that the input of social, economic and environmental determinants of health is necessary (Ten Have 2019:17). Van Renselaer Potter (1911–2001), an American oncologist, also the first person to use the term bioethics, used cancer as an example to explain bioethics. In the ethical narrative of cancer, the emphasis cannot be on the medical and biological aspects of the individual person, for example the right to health, genetic predisposition and treatment, but part of the ethical context also comprises the social discourse, for example cigarette smoking and air pollution (Ten Have 2019:2–4). Introduction The discipline of bioethics as differentiated from medical ethics originated in the 70s of the previous century. It differs from medical ethics in respect of the fact that it focuses not only on the needs and issues of the individual but also on social and environmental aspects connected with health. It has become clear that biomedical ethical proposals and solutions alone are not sufficient for promoting health and that the input of social, economic and environmental determinants of health is necessary (Ten Have 2019:17). Van Renselaer Potter (1911-2001, an American oncologist, also the first person to use the term bioethics, used cancer as an example to explain bioethics. In the ethical narrative of cancer, the emphasis cannot be on the medical and biological aspects of the individual person, for example the right to health, genetic predisposition and treatment, but part of the ethical context also comprises the social discourse, for example cigarette smoking and air pollution (Ten Have 2019:2-4). A special milestone in the development of global bioethics is the acceptance of the Universal Declaration on Bioethics and Human Rights (hereafter UDBHR) of the United Nations Educational, Scientific and Cultural Organization (hereafter UNESCO) by the member states in 2005. The objective of the UDBHR is to unite the global community in terms of bioethical ideals (soft law), which combine individual values (articles 3-7) with social (articles 8-16) and environmental aspects (article 17) of health. Connecting the environment and health is not a very new idea, but the focus on the global range is new. The UDBHR is of utmost importance, because it is the first and, up to now, the only global bioethical instrument that was directly accepted by all states of the The Universal Declaration on Bioethics and Human Rights (UDBHR) is an important, modern human rights instrument regulating global bioethical challenges. The Protestant faith tradition was excluded from any discourse regarding the UDBHR; consequently, the universality and credibility, especially in Protestant circles, have been questioned. For the Protestant faith tradition, the voice of the Bible is decisive. 
An ethical foundation for article 17 of the UDBHR (enviromental protection and health) is, therefore, important, as it can contribute to the internalisation of the principle. In the analysis of article 17, it has been shown that the international community is convinced that an irrefutable relationship exists between nature and the health of the human interconnectedness. A damaged creation harms the health of the human and, therefore, the protection of nature is an indisputable obligation. From a Protestant ethical perspective, this global principle could be associated with or founded on the themes of creation, sin, covenant, Christology and eschatology. Grounded in this preliminary evaluation, article 17 can be supported by the global Protestant community. A few facts from South Africa indicate the necessity of promoting the global bioethical principle in this country. world, including South Africa (UNESCO 2005). All the other international influential instruments are either regionally bound (European Convention on Human Rights and Biomedicine 1997) or they focus on a specific profession such as the World Medical Association (WMA) Declaration of Helsinki, 2013. By accepting the UDBHR, all states also agreed to promote the principles in their distinctive countries in solidarity with each other (articles 23, 24;IBC 2008:45;Ten Have & Jean 2009:17;UNESCO 2006). Langlois (2013), who researched the reception of the instrument in South Africa and Kenia, indicated that its influence is limited in these countries, saying: [T]he Universal Declaration helps put bioethics on the agenda of States … It appears to have had little or no impact in South Africa, however, on what is a growing and developing bioethics community. (p. 154) The aim of this article is to formulate a Protestant ethical foundation for article 17 of the UDBHR. Vorster (2017:33, 243) understands the quest for a theological foundation as the study of Scripture to determine whether a specific human ethical idea could be associated with the Biblical message -in other words, what would Scripture say about the global principle declared in article 17? Article 17 ('Protection of the environment, the biosphere and biodiversity') reads as follows (UNESCO 2006): Due regard is to be given to the interconnection between human beings and other forms of life, to the importance of appropriate access and utilisation of biological and genetic resources, to respect for traditional knowledge and to the role of human beings in the protection of the environment, the biosphere and biodiversity. (n.p.) According to Ten Have (2019:1-18), who made a probing analysis of article 17 of the UDBHR in a global context, the following three matters are addressed by this article (see also Mathooko 2016:529): • The interconnectedness between the human and the environment; • Access to water and food and protection and • Traditional knowledge. In this study, only the first aspect of article 17 will receive attention, namely, the interconnectedness between the human and the environment -in other words, between the environment and human health. Why would it be necessary to present a Protestant ethical foundation for this article? There are two reasons. The first focuses on a UNESCO rationale and the second on a Protestant rationale. This article forms part of a larger academic project in which I am investigating the UDBHR theologically. 
I have already discussed the above two reasons in greater depth elsewhere (Rheeder 2017, 2018, 2019a, 2019b) and will, therefore, only present a brief overview of them here. It is important from a UNESCO perspective that the UDBHR should have the widest possible support, which would undeniably contribute to its credibility. According to article 14 of the UDBHR, the inclusion of all faith traditions is a basic right; therefore, the exclusion of groups on any ground is rejected (UNESCO 2006). UNESCO (2003-2005) emphasises, in particular, that the consultations took place as widely as possible during the development of the declaration. Intensive discussions were held with Islamic, Confucian, Buddhist, Hindu and Roman Catholic faith traditions, according to several sources (Gallagher 2014:135; IBC 2004:2-4; Ten Have & Jean 2009:31). The Protestant faith tradition, however, was excluded from these consultations (Andanda et al. 2013). After the acceptance of the declaration in 2005, Prof. Henk ten Have, who had managed the process for UNESCO, remarked the following (Ten Have & Jean 2009): 'One lesson from the presentations and discussions was that although there are differing moral views, common values can be identified … In the end official representatives of states, but also of cultures, traditions, and religions, could agree on 15 ethical principles of global bioethics' (p. 14). The statement is not truly convincing, as the Protestant faith tradition never gave its response to, or expressed consensus on, the UDBHR. One could, therefore, agree with Andanda et al. (2013) that the Protestant faith tradition was excluded from these consultations. It has to be kept in mind that there are between 800 million and 1 billion Protestants amongst the roughly 3 billion Christians in the world (Pew Research Center 2011). The exclusion casts suspicion on the self-definition of the UDBHR as 'universal principles based on shared ethical values' (par. 10; UNESCO 2006). A Protestant foundation can begin to fill the gap left by the exclusion of the Protestant perspective by engaging in an informal discussion with UNESCO, with a view to making a humble preliminary contribution that promotes the credibility of the UDBHR. From a Protestant perspective, it is also important to present a theological foundation. I have remarked above that the UDBHR defines itself as universal principles grounded in shared ethical values. The declaration bases the principles on the fact that the values are accepted by all or by the majority. The argument that the Protestant faith tradition merely has to accept these values because of a global consensus is not convincing. Epistemologically, the Protestant faith tradition does not base ethical values on consensus but on Christian sources. Matz (2017; see also Pauls & Hutchinson 2008:431) summarises this point of departure as follows: 'For Protestants, Scripture is the ultimate authority for faith, life, and doctrine, and this is no less true in the field of social ethics … Scripture is foundational for Protestant social ethics …' (loc. 183). For this reason, Vorster (2015:109), a human rights expert from the Protestant tradition, connects shared values to the second commandment (Ex 20:4-6), saying, 'Uiteindelik bied die geskrewe Woord die beginsels vir die etiek en is dit ook die toetssteen van alle etiese kodes en handelinge' ['(E)ventually, the written Word provides the principles for ethics and that is the acid test for all ethical codes and acts'].
The view of Van Leeuwen (2014:419) confirms this epistemological point of departure in his overview of the UDBHR from a Protestant perspective. An ethical code such as the UDBHR has to be tested based on Scripture. It does not mean, however, that natural law (human reason, emotion and democratic approval) is epistemologically rejected, but it has to be always tested according to Scriptural principles to constitute a complete Protestant social ethics (Douma 1997:70). The implication is the formulation of Scriptural arguments that accept or reject an ethical code, in this case, article 17 of the UDBHR. Up to now, no Protestant investigation has been made into article 17 and, therefore, this article could be regarded as an introductory attempt to fill the gap. A possible reason why the declaration has not made an impact in South Africa yet is the fact that the country is predominantly Protestant and that a foundation appropriate to the Protestant faith tradition is lacking. Habermas (2012:324), Hauerwas (2012) and Rawls (1993:134) are of the opinion that followers of the Protestant faith tradition will find it difficult to internalise and apply the principles to the UDBHR in practice without a Protestant theological foundation. In the Protestant community of South Africa, such a foundation could promote the acceptance of the global principles of the instrument, including the principle contained in article 17. The methodology that will be followed is that article 17 will firstly be briefly analysed. Secondly, the analysis will be evaluated and founded from a broad Protestant social-ethical perspective. Article 17 of the UDBHR will now be analysed and discussed. Points of departure Firstly, in analysing article 17 of the UDBHR, UNESCO sources and commentaries will be used as far as possible, with a view to describe the 'UNESCO perspective' as accurately as possible. Secondly, the juridical-hermeneutic stance is taken that the analysis will take place according to article 31 of the Vienna Convention on the Law of Treaties, which states (Kirby 2009): [S]uch instruments are to be … interpreted in good faith in accordance with the ordinary meaning to be given to the terms of the Treaty in their context and in the light of its object and purpose. (p. 73) The relationship Article 17 of the UDBHR presents a new global bioethical principle of which the basic viewpoint is that the environment and human health (biomedical ethics) should not be separated from each other (UNESCO 2008:66). '[E]nvironmental security is no longer peripheral to the issues of human health', Tandon (2009:253) wrote in his commentary on article 17. Several UNESCO commentaries express the opinion that the general point of departure of article 17 is utilitarian in nature, namely, that the environment could have far-reaching positive or negative consequences for human health (Hattingh 2014:234;Tandon 2009:247-250;Ten Have 2019:36). The environment is essential to health, which means the environment is valuable (Ten Have 2019: 2-3, 16, 20, 33, 45-48;UNESCO 2011:78). Already in articles 1 and 14 of the UDBHR, the relationship between the environment and health is declared as a fundamental point of departure. Article 1 states that the scope of the declaration is addressing bioethical challenges that amongst others include the relationship between health and the environment. 
It reads as follows: '[T]his Declaration addresses ethical issues related to medicine, life sciences and associated technologies as applied to human beings, taking into account their social, legal and environmental dimensions' (Tandon 2009:249; UNESCO 2006). In article 14, the fundamental relationship between human health and the environment is articulated even more clearly (Ten Have 2019:26, 49, 55, 108-109). The article states the principle of social responsibility: the government and all sectors of society are to improve the health of all people, because 'the enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being' (article 14:1-2). Article 14 (sections a-e) formulates five actions (health determinants) that can improve or harm the health of people. The third determinant is articulated as the 'improvement of … the environment' (article 14.2c; Tandon 2009:249). Because of the relationship between the environment and human health as declared in articles 1 and 14 and, especially, 17, it is important that the environment should be protected (see article 17): '[D]ue regard is to be given to … the role of human beings in the protection of the environment, the biosphere and biodiversity' (UNESCO 2006). A significant remark of Ten Have and Jean (2009) is as follows: '[T]he UDBHR recognizes that humans have a special responsibility to protect biodiversity and the biosphere within which human beings exist'. The reason for this responsibility is that protection of the environment can be regarded as a form of healthcare (see also Likinda 2016:273, 277; Ten Have 2019:39).

Rationale
In the above discussion, I have stated that the point of departure in article 17 is the supposition that there is a relationship between the environment and human health and that this relationship can have negative (and positive) effects on human health. The fact that the environment can have negative effects on human well-being leads to the responsibility to protect the environment. The question now arises: on what does article 17 base this point of departure? It is based on two arguments, namely, a philosophical and a practical foundational supposition. The philosophical foundation is found in the use of the concept of 'interconnectedness' and the practical foundation is found in the concepts of 'environment, the biosphere and biodiversity'.

Interconnectedness
What is the philosophical foundation of the viewpoint that a relationship exists between the environment and human health? UNESCO states the following in article 17: '[D]ue regard is to be given to the interconnectedness between human beings and other forms of life' (UNESCO 2006). This statement is already found in the Preface of the declaration, where the international community is encouraged to realise 'that human beings are an integral part of the biosphere' (par. 11; Tandon 2009:247-248; UNESCO 2006). The concepts of 'interconnectedness' and 'integral part of' presume a certain view of the human and nature. UNESCO rejects a dualistic view that separates the human and the environment from each other. The human and nature are not opposing and separate realities in which the human is a neutral and innocent subject and nature is a danger and enemy; evil does not transfer from nature to humans and threaten them. The neutral human is not surrounded by a hostile nature, which has to be dominated by the human.
Nature might be the origin of diseases, but diseases also result from human intervention in nature (Ten Have 2019:92). It can rather be said that there is a deep relationship, an interconnectedness, an integral coexistence between the environment and the human. The human and the environment form an interwoven coexistence. It can be described as an interdependence. The human forms part of nature or the environment (UNESCO 2008:68). What the human does has value for nature and what happens in the environment has an influence on the human. The human being is also an ecological being. Nature alone cannot be blamed for the health condition of the human. The UNESCO does not deny the debate that focuses on the value of the human and the intrinsic and instrumental value of nature, but it rather departs from a philosophy of an interconnectedness existing between the environment and the human: '[T]he fate of nature and human beings cannot be delinked. Rather that attributing value to either human or nature, it is a relationship that should be valued' (Ten Have 2019:71). Within this interconnectedness and the integrated totality, it happens that the human harms nature, which in turn leads to nature having a harmful or detrimental effect on the health of the human. Because the human and nature reciprocally influence each other and the environment can harm the human, the human stands in an ethical relationship with the environment, biosphere and biodiversity (Ten Have 2019:39, 50, 71-72). Environment, biosphere and biodiversity What is the practical reason for the supposition that an integral relationship exists between the environment and health? Brief attention will now be given to practical examples to demonstrate the relationship between the environment, the biosphere and biodiversity. Article 17 uses these three concepts, namely, 'environment', 'biosphere' and 'biodiversity', to prove the interconnectedness between the environment and human health: '[D]ue regard is to be given to the role of human beings in the protection of the environment, the biosphere and biodiversity' (UNESCO 2006). Both Hattingh (2014:229) and Ten Have (2019:20) are of the opinion that currently these concepts are not clear and that no acceptable universal, objective and scientific definitions exist (see also Likinda 2016:276). Despite the similarities and differences, the terms are generally used as synonyms and most commentators prefer to use an overarching term such as 'earth system' (Hattingh) or 'all living things' (Ten Have). Article 17 of the UDBHR uses the term 'forms of life': ' [D]ue regard is to be given to the interconnectedness between human beings and other forms of life' (UNESCO 2006). Broadly speaking, these concepts could be described as follows: environment indicates the large totality as well as the requirements for life in general (water cycle, photosynthesis and absorption of heat); biosphere refers to smaller, relative independent sections of the environment and it is a geographical term; biodiversity indicates the differences in species, between species and biospheres (Ten Have 2019:19-43; WHO 2015:1-43). In the discussion of the practical reason the emphasis will now be on a few examples that indicate what the differences between the three concepts (environment, biosphere and biodiversity) are, whilst these examples will also demonstrate the negative relationship between nature and human health. 
It is impossible, however, to give attention to all the practical examples, which are many, in the limited space. The reader who is interested in more practical examples is advised to consult the following two comprehensive works: Connecting Global Priorities: Biodiversity and Human Health A State of Knowledge Review by the World Health Organization (2015) and a book by Ten Have (2019), namely, Wounded Planet: How Declining Biodiversity Endangers Health and How Bioethics Can Help. It has to be noted that the UDBHR indicates in its Preface that the information documents of the World Health Organization (hereafter WHO) can be used as background in explaining the instrument (par. 6; UNESCO 2006). Human activities have a big impact on the environment. An example is the effect of air pollution, which presents a serious challenge to humans (WHO 2015:63-74). Air pollution is associated with approximately 6.5 million deaths worldwide per year, with 50% of the deaths occurring in China and India. Diseases generally connected with air pollution are cancer, cardiovascular and chronic lung diseases. The biggest cause of air pollution is the production of energy. It is said that in 2013, approximately 23 000 premature deaths, 12 000 new cases of chronic bronchitis and 21 000 hospitalisations occurred because of power generation in the European Union. Air pollution is a global problem, because the poisonous substances released by coal in generating power affect people in countries that do not burn coal themselves (Ten Have 2019:23-25, 74, 102-103). It is known that biodiversity found in the shape of trees (and in the shape of plants) plays an important role in the quality of air in the biosphere. Trees can remove a large quantity of pollution from the air. In some cases, however, biodiversity is destroyed by pollution, especially when the leaves are seriously damaged (Likinda 2016:274). In addition, biodiversity is also destroyed by habitat destruction and misuse. In this way, the destruction of biodiversity indirectly contributes to air pollution and accompanying health problems (Hattingh 2014:232;Ten Have 2019:4, 65). Ironically, the health industry also carries some of the blame for air pollution. Weaver (2016Weaver ( :2772 comments on the situation, saying, ' [B]ioethicists are increasingly documenting the impact the health-care industry has on the environment'. The health industry is a large energy consumer that produces millions of tonnes of waste and carbon dioxide per year (Fiore 2016). There is also a serious indictment of an alarming connection between the generation of energy, air pollution and health problems in South Africa (Olutola & Wichmann 2020). A biosphere is a geographic area where diversity exists in symbiosis. Biodiversity is fundamental to the optimal functioning of a biosphere (Likinda 2016:274, 276). Ten Have (2019:41) states that 'human health … and healthy ecosystem are linked'. The human is part of the biotic community or biosphere. There is an interconnectedness of existence. Bilharzia (schistosomiasis) is a big problem in sub-Saharan Africa (WHO 2015:56-59). Approximately 76% of the population of sub-Saharan Africa lives close to rivers, lakes and other water resources. The disease infects almost 300 million humans worldwide, and nearly 93% of the infections occurs in sub-Saharan Africa. Bilharzia seriously affects the internal organs (liver, intestines and bladder), whilst it also inhibits the growth of children. 
Schistosomiasis is caused by a parasitic worm that lives in a water snail as its host for a large part of its life cycle. People are infected when the worm leaves its host and penetrates the skin. In Malawi, overfishing has been identified as a reason for the increase in infections. Fish are the natural enemies of the water snail host. Overfishing has, therefore, resulted in an increase in water snail hosts and, therefore, of the harmful worms. In this way, the number of infections is on the increase. In Cameroon, deforestation is associated with bilharzia. Deforestation increases sunlight penetration, causes changes in the flow rate of water, as well as varying water levels, and an increase in plant growth. These changes in the biosphere contribute to the increase in the number of water snail hosts, which should not have happened (Likinda 2016:274). The flow of infected water is not limited to a single region or country (Hattingh 2014:232, 234). The disease is also a big problem in South Africa. The parasite is brought to South Africa by infected immigrants from sub-Saharan Africa (Chimbari Moses et al. 2017 (biomass). With the destruction of the environment, mainly by foreigners (construction companies from Brazil), the health system, rituals and well-being of the community are negatively influenced. Deforestation destroys the following: (1) biospheres and diversity; (2) resources such as drinking water, fresh air, materials for rituals, artefacts and buildings, natural food (plants and animals); (3) spaces for houses, temples and villages, small-scale farming, medicinal plants, natural smells, sounds and the colour of landscapes (very specific environment for ceremonies); (4) the well-being of the communities and the happiness of the forefathers; and (5) the indigenous culture, which includes traditional medicine and health science. All these losses in biodiversity have led to big physical and psychological health problems in this indigenous community (Likinda 2016:279). In South Africa, depression is also connected with the lack of access to green landscapes (Tomita et al. 2017). The losses in biodiversity are well known: dinosaurs, dodos, mammoths and passenger pigeons. It is generally accepted that almost 150 species are lost every day (Likinda 2016:274). Some other facts are (Ten Have 2019:25): • One out of every four mammals and one out of every eight bird species are endangered. • Forty one per cent (41%) of amphibian species, 33% reef building corals and 30% of pine tree species are endangered. • The existence of 9000 animal species is endangered. • Eight thousand and five hundred (8500) plant species are endangered. • Every year, about 7.3 million hectares of forests are lost. • The majority of tropical forests have already been destroyed. • During the past 35 years, the original biodiversity has been reduced by more than a quarter. These losses are caused by pollution, overutilisation of natural resources, ozone depletion, habitat destruction and fragmentation, soil erosion and climate change. The loss of biodiversity is tragic because of two reasons: (1) the loss is irrevocable and (2) humanity does not realise the medicinal potential the world is losing. About 1.8 million species are known, but it is surmised that there are between 5 million and 30 million species. Biodiversity has to be protected as a potential medicinal source for the present and future generations (Hattingh 2014:233, 240;Ten Have 2019:23-26, 70;WHO 2015:2). 
In sum, three conclusions can be drawn from the analysis of the global principle. Firstly, the international community is convinced that an irrefutable relationship (interconnectedness) exists between the environment and human health. Secondly, this assumption is based on two reasons: the first reason recognises a philosophy of interconnectedness and the second is found in practice within the environment, biosphere and biodiversity. Thirdly, because of the relationship between the environment and human health, the protection of the environment, biosphere and biodiversity is an urgent obligation. Subsequently, a Protestant ethical perspective on this global principle will now be discussed; in other words, can the Protestant faith tradition give its consent to this principle together with the global community?

A Protestant perspective
In the light of the analysis of article 17 of the UDBHR, it will now be determined whether the following concepts can be Biblically approved: (1) the interconnectedness between the human and the environment and, because of this relationship, (2) the obligation to protect nature. Before giving attention to these bioethical concepts, it is important to make a few remarks on hermeneutics.

Hermeneutics
Already in 1954, the Protestant theologian Joseph Sittler had the vision that a 'theology of the earth' had to be developed (Horrell 2011:255). This vision is noteworthy because it was articulated long before the publication of the influential Silent Spring (1962) by Rachel Carson, the founding of Greenpeace (1971) and Friends of the Earth (1972), as well as the sensational article by White (1967), in which he criticises the Christian life and world view as the most important reason for ecological problems. The statement was also made long before the well-known Protestant theologian Jürgen Moltmann (1985) pleaded for Christian ecological awareness. The question now arises how Scripture should be approached in developing a 'theology of the earth', which can be used to evaluate article 17 of the UDBHR. Up to and including the influential work of Thomas Kuhn, under the influence of positivism, it was accepted that practising science (and Scriptural hermeneutics) is a neutral and objective act. Science and hermeneutics were separated from all paradigmatic assumptions. With the rise of the postfoundational philosophy of science, the subjectivity of science and hermeneutics was recognised and it was accepted that there would always be a 'leitmotiv as a presupposition' when practising science. A paradigmatic point of departure simply has to be acknowledged (Vorster 2017:362). An example is that Habel rejects all efforts to mitigate the clear anthropocentric meaning of Genesis 1:26-28; therefore, he has no option but to reject the verses as an unfortunate insertion in the light of the eco-principles (Habel 2000:46-47). From a Protestant perspective, this point of departure is problematic, as the authority is shifted outside the Bible or the Christian tradition (Horrell 2011:258). This last criticism leads to the second approach, which is defended by several Protestant theologians. A special example of this approach is found in the hermeneutics suggested by the well-known theologian Ernst Conradie, who focuses on ecology and the Bible (Horrell 2011:259). The viewpoint of Conradie (2006) is that the use of 'Biblical keys' (also called heuristic keys, themes and motives) could be an important and responsible methodology in Biblical hermeneutics.
He explains these keys, saying: [T]he keys are not directly derived from either the Biblical texts or the contemporary world but are precisely the product of previous attempts to construct a relationship between text, tradition and context … Doctrinal keys are comprehensive theological constructs which may be used to establish a relationship between the Biblical texts and a contemporary context. (p. 306) One could reason that the text, texts and tradition construct the theme, after which the theme can be used to evaluate the text, texts and contexts. Conradie refers to stewardship as an example of such a key. The use of themes as hermeneutic points of departure is accepted and applied by several Bible commentators in the Protestant tradition. In this connection, Moltmann (1977) and Vorster (2017) should be mentioned. Theologians working specifically in the field of bioethics are Childress (2002), Douma (1997), Macaleer (2014) and Rusthoven (2014). It has to be recognised that a variety of moral positions coexists in Protestantism; thus, it will be difficult to find a universally accepted Protestant position with regard to any bioethical challenge. The approach in this study is, therefore, not connected with any specific tradition within Protestantism, precisely to promote the universality and credibility of article 17 as far as possible in Protestantism. Childress (2002), therefore, recommends that one or more Protestant themes be selected on the basis of which a bioethical problem can be investigated. With a thematic approach, this study also follows the broad thematic approach of the UNESCO research series with the title 'Advancing Global Bioethics' in their discussion of the relationship between the UDBHR and broad Protestant theology (see 'Religious perspectives on social responsibility in health', in Tham, Durante & Gómez 2018). An attempt will now be made to develop a Protestant foundation for article 17 grounded in the relevant themes of creation, sin, covenant, Christology and eschatology. Theological ethics as a science is closely intertwined with the total field of theology, which, on the one hand, benefits from the insights of other theological disciplines but which, on the other hand, also wants to make its own contribution (Van Wyk 1986:15). Several theologians believe that theological ethics should not only focus on scientific deepening but also has the duty to either confirm or criticise claims to truth in society and culture with the existing theological insights (testing, according to Vorster) (Plantinga, Thompson & Lundberg 2010:19). Nullens and Volgers (2010:127) warn against exclusive and extreme specialisation in theology that could lead to the loss of public relevance. The unique contribution of this study is that it tests the most recent global bioethical principles of the UDBHR against the latest Protestant theological insights and aims to contribute to public awareness and acceptance of global bioethical principles within the Protestant faith tradition.

Creation
God, as the creator of heaven and earth, saw that what he had made was good (Horrell 2011:190). What does it mean that creation was good in the eyes of God? Being created good indicates the high inherent dignity of creation (Horrell 2011:190; McKim 2017:217). Therefore, precisely because of the inherent dignity of creation, it is found that already before the fall into sin, God stated the principle that the human has to respect creation. He said to the human that he was not allowed to eat 'from the tree of the knowledge of good and evil'.
The human had to leave the diversity alone and protect it. The reason for the command was that the destruction of biodiversity had also serious health implications (death) for the human being (Gn 2:17 -New International Version). In his commentary on the narrative of creation before the fall into sin, Frame (2008:269) interprets the above truth as follows: 'And we must protect plant and animal life, and their habitats, if we and our descendants are to survive'. What does it mean that the human was created good? In general, the expression is interpreted as an indication that the human, who was created in the image of God, has dignity. In addition, commentators on Scripture indicate that this expression also means the human was created as an interconnected (bound to earth) being. The human is not only like God but also like creation. The human-created like creation shares the same substance as creation (Gn 2:7; 3:19). As a creature, the human was made by God from the same ground and clay as the plants and trees covering the earth (Vorster 2017:359). ' [O]ur very creatureliness is something we have in common with nature, rather than with God', is the argument of Frame (2008:269). According to Moltmann (1985:18, 51), the human was named 'earth' ('Adam'), which indicates that humanity is also the image of the earth. To be imago mundi means the human 'remains bound up with the earth' and implies that the human and creation cannot be separated from each other. Because of this created relationship, the narrative of creation wants to bring it urgently to the attention of the human that what happens to the one will also happen to the other. When nature is harmed, the human is harmed, and impairment of the human means per definition impairment of nature. What does it mean that the Sabbath is a good institution? Moltmann (1985) judges as follows about the meaning of the creation of the human and the institution of the Sabbath: It is true that, as the image of God, the human being has his special position in creation. But he stands together with all other earthly and heavenly beings in the same hymn of praise of God's glory, and in the enjoyment of God's sabbath pleasure over creation, as he saw that it was good. (p. 31) The human and creation stand together. There is interconnectedness: together the human and creation (environment, biosphere and biodiversity) should enjoy the Sabbath (Ex 20:8-11). They have equal dignity: one cannot rest without the other. The rest of the human also implies the rest of the nature; if nature is unable to rest, the human also cannot rest (McKim 2017:217). Already in the creation narrative, it is to my mind clear that the ideas found in article 17 of the UDBHR, namely, that interconnectedness exists between the human and the environment and that creation has to be protected, find strong support in Scriptural thinking. Sin The creation of the human as an interconnected being also has a great potential danger for the human and creation. The command Adam and Eve received to protect biodiversity was given to them as a covenantal command they had to obey. It implied that disobedience would have serious consequences (Gn 2:16-17; 3:17). As already indicated above, the human received the command to protect a part of the biosphere. The narrative emphasises the fact that the human ignored the covenantal command of God, in this way trying to demonstrate their carte blanche to God and creation. A sinful will and deeds had become part of humanity. 
Serious consequences for the human were the reality of death as well as a changed heart that would be continually disobedient to God (Kreider 2019b:219). The disobedience of the human also had serious consequences for nature. The earth is no longer only a human-friendly and idyllic environment; it is a transformed environment that can have serious detrimental effects on human health. It now produces thorns and thistles (Gn 3:18). The environment can now tear humans apart, kill them (Jdg 8:7) and harm and inflict pain on the human body (Ezk 28:24). Creation displays an aggression that hits the human being with its 'fists' and can cause him serious injuries (2 Cor 12:7). In the narrative of Jesus, the thorns in the shape of a crown on his head are connected with his physical and psychological suffering and eventually his death (Mt 27:29, Jnh 19:2-5). Thorns symbolise the interconnectedness between creation and the human, as the environment can lead to the suffering and death of the human. Grudem (2010) wrote the following: Here the expression 'thorns and thistles' functions as a kind of poetic image, a specific, concrete example that represents a multitude of things - such as hurricanes, floods, droughts, earthquakes, poisonous plants, poisonous snakes and insects, and hostile wild animals - that make the earth a place in which its natural beauty and usefulness are constantly mixed with other elements that bring destruction, sickness, and even death. Nature is not now what it was created to be, but is 'fallen'. (pp. 321-322) In this theme of the fall into sin, it is important to note that mention is not only made of a one-way movement in which nature harms the human but also of the opposite movement, in which 'because of you' (NIV) the ground is cursed, as God states clearly (Gn 3:17-18) (Kreider 2019b:219). It is the human's doing that there are thorns and thistles now. Through the decisions and actions of the human, creation has been affected and transformed in such a way that a destructive interconnectedness now exists between creation and the human being. The human's actions affect creation, and in turn creation can harm the human (O'Brien 2010:139). The theme of the fall into sin supports the point of departure of the UDBHR that nature can be affected by the irresponsible deeds of the human, which in turn means that the health of the human can be harmed. This discussion has confirmed that a relationship exists between creation and health.

Covenant
The narrative of Noah and the Ark tells the story of God's covenant of grace, or his relationship with all humanity (Gn 6-9, Moltmann 1999:110). Three matters that are important for the theme of this study are found in this story. Firstly, a prominent idea is found again: when creation is severely harmed because of the selfish deeds of the human, it is followed by reverse events - this time a great water flood causes the death of humans. The fact that the human had become corrupt and that his thoughts were evil (Gn 6:5-7, 12 - NIV) grieved God, which led him to destroy the human and creation by a flood (Gn 7:21). It is clear that the human and creation exist in an interconnected relationship in which the actions of the human harm creation, and creation severely harms the health of the human. In this regard, O'Brien (2010) writes that: [I]t is unequivocally clear that the root cause of the flood was human wickedness.
This is a story about the reality that all creatures suffer the consequences of bad decisions on the part of one species. (p. 139) Secondly, within the reality of destruction, it is found that God is still protecting and preserving the human and nature (Kreider 2019a:224). God expected Noah to build an ark and commanded him to take his family and a big diversity of natural life into the safe environment of the ark with the purpose 'to keep their various kinds alive throughout the earth' (Gn 7:3). The fact that God protected the human and creation together is also 'an expression of the same interconnected thinking reflected in the covenant: we are graced by God alongside other creatures, and we will survive only alongside other creatures', is the opinion of O'Brien (2010:138). It is clear that the protection and interconnectedness of the human and creation are inclusive concepts that cannot be separated from each other. Thirdly, after destroying and saving the human and nature, God made a covenant with Noah. He promised he would never again destroy the human and creation by a water flood and gave the rainbow as a symbol of the promise (Gn 8:22; 9:9-10). The fact that the covenant was made with the human being and all living creatures is an indication of the interconnectedness of existence (Moltmann 1999:113). God's promise that he would not destroy the human and creation again indicates his will that nature should be protected. According to Moltmann (1999:110), this covenant points to the right of creation to be protected. From the covenant, it is clear that interconnectedness and protection are two concepts inherent to the message of the Bible. The notion of the relationship between the human, environment and health is illustrated even more clearly in the covenantal story of Exodus. The first 15 chapters of Exodus deal with the liberation of the covenantal people from the power of an oppressive regime. According to Olson (2011:294-295), this story connects the ethical principle of justice with ecology. The narrative of the 10 plagues is of special importance, as the story of the fall into sin is repeated here. God sent a series of plagues or ecological disasters (polluted land and water, harm to the environment, threats to food security, climate change) to the Egyptians with the purpose of convincing the pharaoh to set the covenant people free from slavery and oppression. All the ecological disasters had the potential to make the human seriously ill, even causing them to die. During the sixth plague, God instructed Moses and Aaron to take handfuls of soot from the furnace and toss it into the air before the pharaoh (Ex 9:8-11). Soot was probably a by-product of wood burning in the Egyptian furnaces and it was expected from the Israelites to operate the furnaces under difficult circumstances (5)(6)(7)(8)(9)(10)(11)(12)(13)(14)(15)(16)(17)(18)(19). According to the Biblical text, humans and animals became seriously ill (festering boils) after the soot mixed with the air or atmosphere and a fine dust formed (Ex 9:9). According to Mazokopakis and Karagiannis (2019:311-312), it is possible that microbes developed in the carcasses of dead animals (fifth plague) and were transferred to humans by mosquitos (third plague) and flies (fourth plague), which also caused the terrifying disease. 
Mazokopakis and Karagiannis (2019) summarise the application of the sixth plague as follows: [… T]he sixth Egyptian plague, as described in the Book of Exodus, constitutes the earliest medical report on the detrimental impact of soot/dust upon the human health and the environment, which is now well-known, having been documented over time since then. (p. 312) As in the case of the fall into sin, it is not a movement in one direction where the environment causes the sickness of the human; that is, it is not only a movement from creation to the human, but there is also a movement from the human to creation. On the one hand, it is clear from Exodus that the direct action and insensitive decision of the pharaoh (not stopping the oppression) led to environmental disasters (Ex 7:22; 8:32; 9:34); on the other hand, it is clear that humans were the cause of the dangerous dust (Olson 2011:295). Through the decisions and actions of the human, creation is affected and transformed. The decisions and actions of the human affect creation, which can result in creation harming the human (O'Brien 2010:139). The danger presumed in article 17 of the UDBHR, namely, that the human's damage to the environment, the biosphere and biodiversity has big disadvantages for human health, is confirmed by the covenantal message and forcefully brought to the attention of the Christian community. The story of the covenant again confirms the relationship between creation and human health and indicates why nature has to be protected.

Christology
I have already referred to the argument of White (1967) that the Christian faith has contributed to the development of science and technology in modern times with the sad outcome of the uncontrollable destruction of creation. It is generally accepted that a specific interpretation of the human image is responsible for this view and the related destruction. According to Genesis 1:26-28 (also 2:15), the human created in the image of God received the assignment to rule (râdâh) over and subject (kâbash) the earth. The Hebrew word for 'to rule' has the negative meaning of 'trample' or 'trampling' according to some commentators (Vorster 2017:357). God did indeed put everything under the feet of the human (Ps 8:7). In addition, 'subjection' has the connotation of the brutal subjection of enemies as slaves in imprisonment (Stott et al. 2006:154). Such an interpretation of the human as the image of God gives some Protestants the rationale to reject the idea of protecting creation on the grounds that it would be unimportant; therefore, they can refuse to respond to the global call of article 17 of the UDBHR. The above explanation of God's command to the human to rule over the earth is not convincing for two reasons. Firstly, the context shows that the human, who is created in the image of God, has to be the steward of God on earth by ruling in his place over the earth (Horrell 2011:260). The earth belongs to God (Ps 24:1), which implies that the human does not have autonomy with regard to creation. God provides guidelines that determine the relationship of the human with creation. According to Frame (2008:743-744) and Vorster (2017:356-360), God has given guidelines for the human as a steward of creation, amongst them that creation has to be cared for. Secondly, an in-depth Christological contra-argument is verbalised by Lundberg (2011): In view of the NT portrayal of Jesus as the true 'image of the invisible God' (Col 1:15), the work of the stewardly image-bearing takes on even crisper contours.
(p. 191) Several Bible commentators identify with Christ's explanation of what it means to be in the image of God and what its implication would be for the human in his relationship with creation (Lundberg 2011:191). Christ instructed his followers to love like He did (Jnh 13:34;15:12, Vorster 2017:165). A few examples are briefly discussed below. God loved the world and so Christ came to the earth to carry the punishment for sin, so that the human would not be destroyed forever (Jnh 3:16). As Christ is co-creator of the cosmos (Col 1:15, Heb 1:2, Op 3:14), his incarnation also has redemptive meaning for the cosmos. It was part of Christ's work on earth to counteract destruction. It should not be overlooked that the redemptive love of God is not merely anthropocentric, but that it also includes the world or cosmos. It is the task of the human now to love creation redemptively by protecting it against destruction (Rm 8:21, Lundberg 2011:190). In addition, it has to be mentioned that Christ gave special priority to vulnerability. Lundberg (2011:191) connects vulnerability with creation when he refers to the emphasis Christ placed on human responsibility towards the interests of 'the least' . Although the human can also be vulnerable to the overwhelming force of creation, it does not mean that the environment, biosphere and biodiversity cannot be exceedingly vulnerable to the technological abilities of the human, as Ten Have (2019) Creation can be 'sonder klere, siek of in die tronk wees' ['without clothes, sick or in prison'] and, therefore, human has to be the voice of the groaning creation (Vorster 2017:357). Creation has to be protected against the abuse of power precisely because destruction can cause diseases in the human (Mt 25:43-44). Finally, Vorster (2017:357) also connects Christ's role as servant with the relationship of human with creation (Mk 10:45, Phlm 2:5-8). In one of his earlier works, Vorster (2007:13-20) develops an ethics of virtues (ethics of attitude), which is based on Philippians 2:5-8 and in which the servant role of Christ is central: • Christ emptied himself, which means that He detached himself from his heavenly responsibilities with the purpose to be available on earth for the earth (Phlp 2:6). • He made himself available as a slave with the purpose to serve the human and creation (Phlp 2:7). • He humiliated himself (Phlp 2:8), which implies that He was the least, set his own interests aside and made certain sacrifices with the purpose of promoting the interests of the human and creation. • He was obedient to God (Phlp 2:8), which means He fulfilled the will of God for the human and creation. The human is, therefore, not in a relationship of anthropocentric domination of creation, but in a relationship in which he has the role of a servant that has to protect and promote the interests of creation in recognising the will of God (Lundberg 2011:191). Moltmann (1985:31) even mentions that the Sabbath, and not the human, is the high point of God's creation (Gn 2:1-3). From a Christological perspective, being in the image of God means that creation has to be respected and protected. In this sense, Protestant ethics supports article 17, which calls upon the international community to protect creation. The unique contribution of Christology, which by its very nature is absent from article 17 of the UDBHR, is that conservation is inherently a form of service to God. 
Eschatology Eschatology deals with the events in the last period of existence of both the human and creation. An important notion in eschatology is the kingdom of God (Marshall 1995:354). Vorster (2017:136) is of the opinion that the kingdom as a present ('an already') and a future ('not yet') reality has become a prominent idea in Protestant thinking. As events in the present, the kingdom in its broken appearance was introduced with the coming of Christ (Mt 4:17, Matz 2017:loc 186) and will be transformed into an eternal kingdom of perfect glory (Col 3:4, Pt 2 1:11). This eternal kingdom becomes the content of believers' hope. Believers set their hope on the promise that the human and creation will be freed in the eternal kingdom. For the human and creation, it will be a place of glory freed from tears and destruction by war and death (Rm 8, 2 Cor 4:17, Rv 21). Part of Christian hope is that God wants to free the human and creation from injustice, suffering and pain (Vorster 2017:112). This eschatological vision, the eternal kingdom of hope, has now to be striven after as far as possible (Horrell 2011:259). Christ encouraged his followers, saying, 'But seek first his kingdom and his righteousness, and all these things will be given to you as well' (Mt 6:33). Righteousness means realising the 'armour of light' or the content of the eternal light of the kingdom as far as possible (Rm 13:12-13, Gill 1995:457). Moltmann (2012:7) explains the idea of light as part of armour, saying, '[A]s Paul, in his ethic of hope, calls for the "weapons of light", so the awakening of hope carries the promised future of righteousness into one's own life'. The citizens of the kingdom are inspired and called upon to show now already through their ethical actions where they are going (Du Rand 2015:215). What is the future righteousness that has to be striven after now? Future righteousness would mean that decisions and actions should have the aim to improve people's lives (without tears), to create an environment in which life can flourish (absence of war) and where one can promote health (no more pain and death) (Marshall 1995:354). A special way of improving the lives of humans is to create a favourable environment and to protect health by protecting and caring for creation. Kreider (2019b) refers in this connection to the eschatological vision that is found in Revelation 11:18, where God judges people 'who destroy the earth' and comments as follows on this text (see also Is 11:5-9): Destruction of the planet is not merely accomplished by active and wilful rebellion. Passivity, too, is failure to care for the earth and is tantamount to destroying it … Several practical implications follow. (1) Creation care is a gospel concern, for it is a life issue. Healthy human and animal life depends on a good environment that includes clean air and water and one in which disease and decay is controlled. (p. 221) From the eschatological perspective, protection and respect for creation are emphasised. In this sense, there is eschatological support for the content of article 17. Conclusion The UDBHR is an important, modern human rights instrument regulating global bioethical challenges. The Protestant faith tradition was excluded from any discourse regarding the UDBHR; consequently, the universality and credibility, especially in Protestant circles, have been questioned. For the Protestant faith tradition, the voice of the Bible is decisive. 
An ethical foundation for article 17 is, therefore, important, as it can contribute to the internalisation of the principle. In the analysis of article 17, it has been shown that the international community is convinced that an irrefutable relationship exists between nature and the health of the human interconnectedness. A damaged creation harms the health of the human and, therefore, the protection of
Native Oils from Apple, Blackcurrant, Raspberry, and Strawberry Seeds as a Source of Polyenoic Fatty Acids, Tocochromanols, and Phytosterols: A Health Implication
1 Department of Animal Nutrition and Feed Science, National Research Institute of Animal Production, 32-083 Balice, Poland
2 Department of Animal Products Technology, University of Agriculture in Krakow, Balicka 122, 31-149 Kraków, Poland
3 Central Laboratory, National Research Institute of Animal Production, 32-083 Balice, Poland
4 Institute of Plant Products Technology, University of Life Sciences in Poznan, Wojska Polskiego 31, 60-624 Poznan, Poland
5 Department of Horse Breeding, Institute of Animal Science, Agricultural University, Mickiewicza Street 24/28, 30-059 Kraków, Poland

Introduction
Poland is the leading European manufacturer of fruit juice, in particular concentrated apple juice. Fruit juice and drink production was 1100 thousand tons in the 2005/2006 season and varied between 900 and 1300 thousand tons in the subsequent years, whereas the amount of berries produced for juice is almost 500 thousand tons [1, 2]. A valuable coproduct of juice production is the pressing residue known as pomace, which, in addition to being high in nutrients, is a rich source of biologically active substances called nutraceutics, that is, unsaturated fatty acids, natural antioxidants (phenolic acids, flavonoids, anthocyans, tocopherols, and tocotrienols), carotene pigments, phytosterols, minerals, aromatic substances, pigments, bacterial and viral inhibitory substances, ballast compounds, fibre, and pectins [2-7]. Dried pomace is put through a sifting process to produce seeds that form 5-70% of the total pomace weight, depending on the type of dried fruit. Increasing attention has been paid during the last decade to the fact that some seeds may contain fats of high nutritional, dietetic, and even therapeutic value. The major lipid components of oils are triacylglycerols (esters of glycerol and fatty acids). Less important components found in much smaller amounts are nontriacylglycerol compounds such as phospholipids, sterols, tocopherols, and carotenoids [8, 9]. These components not only determine the nutritional value of the oils but also have a significant effect on their stability, in particular the oxidative stability.

Seeds of berries, including strawberries, raspberries, blackcurrants, and apples, are a rich source of polyenoic fatty acids (EUFA). These acids are not synthesized in the human body and have to be supplied through the diet. Human nutritionists recommend that Poles consume diets lower in fat and change its structure by increasing the intake of fats that contain polyenoic fatty acids. In addition to linoleic acid (LA) and long-chain polyenoic fatty acids (LC PUFA), an important role among them is played by the 18-carbon polyenoic fatty acids having a triene structure, α-linolenic (ALA) and γ-linolenic (GLA), which belong to the two biochemically different families n-3 and n-6. It has recently been emphasized that n-3 fatty acids serve important physiological and health-promoting roles, especially in preventing cardiovascular diseases [10]. Considerable attention has also been given to the health-promoting role of γ-linolenic acid, especially with regard to inflammatory, allergic, and cardiovascular diseases [11].
Recent years have seen much more intensive research on compounds that protect the body from the harmful effects of free radicals and other active forms of oxygen.Lipophilic components of vegetable oils, which show antioxidant activity and an ability to scavenge free radicals, are worthy of special notice.To date, vegetable oils rich in 18-carbon polyenoic fatty acids having a triene structure were used as pharmaceutical preparations available in capsules.However, the current oil production technology that uses cold-pressing in nitrogen gas or supercritical carbon dioxide extraction enables the oils to be obtained in almost unchanged form.They are more abundant in side compounds of high biological and antioxidant activity.Occurrence of antioxidants that inhibit unfavourable changes and knowledge of their activity and stability is essential not only to technologists but also to nutritionists.The shelf life of oils can be extended by using a variety of procedures that protect freshly extracted oils, such as limiting or eliminating oxygen contact, light exposure, and contact with prooxidative metal (copper and iron) ions, as well as supplementing the oil with oxidation inhibiting substances.Various antioxidants are used for this purpose.Efforts are made to limit the use of synthetic antioxidants on the grounds of health risks.Considerable emphasis is placed on the use of natural or nature-identical antioxidants.Their broad antioxidative properties may help to limit autoxidation of vegetable oils rich in polyenoic fatty acids having a triene structure.When they are added to the diet, they may have beneficial effects on the human body because of their free radical scavenging capacity. In a search for new sources of these biologically valuable fats, we performed chemical analyses of oils obtained from the pressing of strawberry, raspberry, blackcurrant, and apple seeds in terms of the composition and content of fatty acids, tocopherols, tocotrienols, and phytosterols. Materials and Extraction of Fat from the Seeds.Oils from blackcurrant, raspberry, strawberry, and apple seeds originated from Mega-Sort company (Poland), which specializes in the drying and packaging of fruit pomace produced after extraction of fruit and vegetable juices.Pomace with about 55% moisture content, originating from Hortex company (Poland), was dried on drum driers to reduce the moisture below 10%.Dried fruit pomace was then cut and ground and the seeds were separated.The production line (Scorpion, Poland) included a chopper, a separator, and a pneumatic tunnel, in which the seeds were separated from the other parts.Oils were obtained from the seeds on a standard technological line used for cold-pressing of oilseeds (Farmet, Czech Republic) and equipped with a UNO screw press, a sedimentation tank, and board and candle filters.Seeds were subjected to a press head temperature of 55 ∘ C for 20 s.The pressed and filtered oils from raspberry and strawberry seeds were placed in dark glass containers with added N 2 , tightly closed, and refrigerated at 4 ∘ C until further analyses. Determination of Fatty Acid Composition in Oils. Gas chromatography was used to determine higher fatty acids in fruit seed oils in the form of methyl esters, following saponification of the fatty acids contained in the sample. 
Equipment. A Varian 3400 gas chromatograph (USA) equipped with an 8200 CX autosampler and an FID detector was used. Data were integrated using Varian Star 4.5 software. Conversion was carried out using Excel. Sample preparation also involved the use of 7 mL screw-cap tubes (Schott), chromatography vials, a water bath, and a nitrogen solvent evaporation system.

Procedure for PUFA Determination. Oil samples (about 100 mg) were weighed into 7 mL screw-cap tubes (Schott), saponified with 0.5 N NaOH in methanol (80 °C), and esterified with BF3 in methanol [15, 16]. Methyl esters of fatty acids were extracted with hexane. After salting out with saturated NaCl solution (0.58%), the hexane layer was collected into a chromatography vial and the esters were determined by gas chromatography.

2.3. Determination of α-, β-, γ-, and δ-Tocopherols and Tocotrienols in Oils. Tocopherols (α-, β-, γ-, and δ-tocopherols) and tocotrienols (α-, β-, γ-, and δ-tocotrienols) in fruit seed oils were determined according to the method described by Gąsior et al. [17] using normal-phase high-performance liquid chromatography (NP-HPLC) [18]. Tocopherols and tocotrienols were determined after saponification of the sample in the presence of potassium hydroxide and ethanol, followed by extraction with a mixture of ethyl acetate/hexane (1/9, v/v). Analyses were performed with standard solutions (0-24 μg/mL) containing a mixture of tocol standards, using α-tocopherol from Fluka (USA), β-, γ-, and δ-tocopherols from Calbiochem (USA), and tocotrienols from Davos Life Science Pte Ltd. (Singapore). Final results were corrected for the tocol content of a blank sample and for recovery determined by the standard addition method (89-102%).

Determination Procedure. Oil samples (about 20 mg) were weighed into 12 mL Schott tubes with an accuracy of 0.0001 g. 1 mL of pyrogallol in ethanol (60 g/L), 0.5 mL of an aqueous solution of potassium hydroxide (600 g/L), 0.5 mL of an aqueous solution of sodium chloride (10 g/L), and 0.5 mL of ethanol were added. The vials were screw-capped, agitated on a Vortex shaker for about 10 s, and then transferred to a water bath (70 °C), where the samples were saponified for 45 min. After cooling, 4 mL of an aqueous solution of sodium chloride (10 g/L) was added and the mixture was extracted twice (4 mL each) with a mixture of ethyl acetate and n-hexane (1:9, v/v), shaking (Vortex) the screw-capped tube for about 0.1 min. The extracts were pooled and evaporated to dryness under nitrogen (15 mL bottles) in a water bath (40 °C). After dissolving the residue in 0.5 mL of a 1% mixture of isopropanol and n-hexane (v/v), the solutions (50 μL) were injected onto the chromatographic column. The same procedure was carried out for blank samples, except that no sample was added.

Determination of Phytosterols in Oils.
The sterol content of lipids from the fruits was analysed according to the AOCS Official Method Ch 6-91 [20]. Phytosterol standards originated from Calbiochem (USA). 0.05 g of oil was weighed with an accuracy of 0.001 g into the reaction tubes, 100 μL of 5α-cholestane was added as an internal standard, and the samples were saponified in 2 mL of 1 M KOH in methanol for 18 h. After adding 2 mL of water, sterols were extracted three times with a mixture of hexane and MTBE (1:1) for 5, 3, and 2 min. Then the solvent was evaporated under a stream of nitrogen and the samples were dissolved in anhydrous pyridine and silylated with Sylon BTZ reagent (Sigma-Aldrich, USA). Sterols were separated using a DB-35 ms capillary column (25 m × 0.2 mm × 0.33 μm; J & W Scientific, Folsom, CA, USA) on an HP 5890 Series II gas chromatograph. Hydrogen was used as the carrier gas at a flow rate of 1.5 mL min−1. The following temperature programme was used for the column: from 100 °C to 250 °C (25 °C/min), then to 290 °C (3 °C/min); the initial and final temperatures were held for 5 and 15 min, respectively. The injector and detector temperatures were 300 °C. Sample injection was performed by injecting 1 μL of the sample onto the column, using the splitless mode of injection for 1 min. Sterols were identified by comparing their retention times with those of authentic standards.

Determination of Acid Value in Oils. About 5 g of oil was weighed into a 250 mL flask with an accuracy of 0.01 g. The sample was dissolved in 50 mL of hot ethanol. Titration was performed by mixing the flask content with KOH solution in ethanol to the end point in the presence of phenolphthalein. The end point was reached when one added drop of lye caused a weak but perceivable change in colour lasting for at least 15 s. A blank sample was titrated in the same way [21]. The acid value (in mg KOH per g of fat) was calculated using the following formula:

AV = (V − V0) × c × 56.1/m,

where m: weight of fat (g), V: volume of potassium hydroxide solution used for fat sample titration (cm3), V0: volume of potassium hydroxide solution used for blank sample titration (cm3), and c: concentration of the KOH solution (0.1 N); 56.1 is the molar mass of KOH (g/mol).

Determination of Peroxide Value in Oils. About 2 g of oil was weighed with an accuracy of 0.001 g and transferred into a conical flask. The flask was filled with 10 cm3 of chloroform, mixed until the fat was completely dissolved, filled with 15 cm3 of acetic acid and 1 cm3 of KI solution, and closed with a ground-in stopper. The flask content was agitated for 1 min and then left in the dark for 5 min. 75 cm3 of distilled water (rinsing the stopper thoroughly) and 5 drops of starch solution were added, and after mixing the content was titrated with a 0.002 N solution of sodium thiosulfate. A blank sample was prepared simultaneously [22]. The peroxide value (in meq O2 per kg of fat) was calculated using the following formula:

PV = (V − V0) × c × 1000/m,

where V: volume of sodium thiosulfate solution used for fat sample titration (cm3), V0: volume of sodium thiosulfate solution used for blank sample titration (cm3), c: concentration of the sodium thiosulfate solution (0.002 N), and m: weight of the sample (g). A short computational sketch of both index calculations follows the opening paragraph of the results below.

Statistical Analysis. Analysis of the chemical composition, the content of bioactive compounds, and the physicochemical properties of the oils was performed in 3 replications; the results are expressed as means and coefficients of variation (CV) (Table 1).

Results and Discussion
Biooils are biologically the most valuable plant fats due to their composition, which reflects the actual make-up of all the substances found in the seeds from which the oil was extracted.
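As announced above, the following short Python sketch illustrates how the acid value and peroxide value defined in the Methods section could be computed from raw weighing and titration data. The 56.1 factor (molar mass of KOH), the function names, and all numerical inputs are illustrative assumptions rather than data or code taken from the study.

```python
# Minimal sketch of the two titration indices defined in the Methods section,
# using the standard relations implied by the variable definitions there:
#   acid value     AV = (V - V0) * c * 56.1 / m   [mg KOH per g of fat]
#   peroxide value PV = (V - V0) * c * 1000 / m   [meq O2 per kg of fat]
# 56.1 g/mol is the molar mass of KOH; the masses and volumes used below are
# purely hypothetical example values, not results from the study.

def acid_value(m_fat_g: float, v_koh_ml: float, v_blank_ml: float, c_koh: float = 0.1) -> float:
    """Acid value in mg KOH per gram of fat (0.1 N KOH titrant)."""
    return (v_koh_ml - v_blank_ml) * c_koh * 56.1 / m_fat_g

def peroxide_value(m_fat_g: float, v_thio_ml: float, v_blank_ml: float, c_thio: float = 0.002) -> float:
    """Peroxide value in meq of active oxygen per kg of fat (0.002 N thiosulfate titrant)."""
    return (v_thio_ml - v_blank_ml) * c_thio * 1000.0 / m_fat_g

if __name__ == "__main__":
    # Hypothetical single determinations for one oil sample
    print(f"AV = {acid_value(5.01, 2.02, 0.05):.2f} mg KOH/g")
    print(f"PV = {peroxide_value(2.003, 9.2, 0.2):.2f} meq O2/kg")
```

With these hypothetical inputs the sketch returns an acid value of about 2.2 mg KOH/g and a peroxide value of about 9 meq O2/kg, that is, values of the same order as those reported for the cold-pressed oils below.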
The WHO/FAO Codex Alimentarius Commission provided a definition of biooil that corresponds to virgin oil and determines the conditions for its extraction. Only mechanical extraction methods that ensure a high quality of biooils are allowed. Pressing of oilseeds using a hydraulic or screw press at low temperature regimes is therefore the only permitted expression method. The procedures allowed for removing impurities from the oil include water washing and centrifugation as well as sedimentation and filtration. No phospholipids, tocopherols, sterols, or carotenoids are removed from these oils. The high quality of these oils is conditional on the use of selected, fully mature seeds. The moisture content of the seeds obtained was in the range of 7.5-9.5%. Most oil was found in apple seeds, followed by strawberry, blackcurrant, and raspberry seeds (20.2, 18.5, 16.2, and 13.5%, resp.). These results are confirmed by the findings of other authors [14, 23, 24], and the small differences are due to the quality of the extracted seeds and the oil extraction technique (including press type and filtration method). The analysed oils were characterized by increased peroxide values (within the normal range) of 8.78, 8.39, 9.45, and 10.59 meq O2/kg for strawberry, blackcurrant, raspberry, and apple seed oils, respectively. Cold-pressed virgin oils are characterized by increased peroxide and acid values as a result of protein, mineral, and other impurities, which favour oxidation processes, being left in the oil after extraction [5, 25].

Fatty Acids in Oils from Strawberry, Blackcurrant, Raspberry, and Apple Seeds. In addition to lifestyle and living conditions, diet is one of the most important determinants of our health and well-being. Advances in understanding the action of diet ingredients that may have a beneficial effect on the human body have made it possible to design and produce food with specific health-promoting effects, rich in various bioactive components. Among these, polyenoic fatty acids are important, as they play a significant role in preventing the metabolic diseases of modern civilization. Polish fat products (oils, margarines, spreads, 100% fats) contain small amounts of n-3 polyenoic fatty acids but practically no γ-linolenic acid. For this reason, efforts are made to increase their dietary content. Some vegetable oils are a rich source of these acids. Oils considered a rich source of PUFA having a triene structure (α-linolenic acid C18:3 Δ9,12,15 and γ-linolenic acid C18:3 Δ6,9,12), such as linseed, false flax, hemp, borage, and vipers bugloss, as well as strawberry, blackcurrant, raspberry, and apple seed oils, are characterized by a widely different composition and content of individual fatty acids. This particularly refers to the group of polyenoic acids, including those having a triene structure. The composition of fatty acids in the oils from strawberry, blackcurrant, raspberry, and apple seeds was characterized by a high content of unsaturated fatty acids (90.8%, 88.6%, 94.0%, and 86.9%, resp.) (Table 2). The largest differences in the fatty acid profile of the analysed oils were observed for the linolenic and linoleic acids. Oils from strawberry and raspberry seeds had high levels of linoleic acid C18:2 (45.4% and 49.0%) and alpha-linolenic acid C18:3 (29.0% and 33.0%, resp.). The richest source of gamma-linolenic (C18:3) and stearidonic (C18:4) acids was blackcurrant seed oil (18.5% and 3.6%, resp.). Apple seed oil had a high content of oleic acid C18:1 (29.4%), which is in agreement with the findings of Yukui et al.
[24]. Linseed oil is one of the richest sources of α-linolenic acid, whose average content in commercial oil is 57.3%, as confirmed by Choo et al. [26]. The analysed oils were found to be high in α-linolenic acid; this especially concerned oils from strawberry, raspberry, and blackcurrant seeds (29.0%, 33.0%, and 13.5%, resp.). In the apple seed oil, the level of α-linolenic acid did not exceed 1%. In addition, considerable amounts of this acid are found in linseed, false flax, vipers bugloss, and hemp oils [14,26-29].

The dominant tocopherol isomers in oils from raspberry, blackcurrant, and strawberry seeds, and the isomer in which these same oils were poor, are given in Table 3. A different pattern was evident for the profile of tocopherols in apple seed oil, in which the concentration of the dominant isomer was highest at 62.7 mg/100 g, compared to the remaining isomers at 41.7, 21.2, and 13.6 mg/100 g, respectively. In comparison with the results of Helbig et al. [3], the oil extracted from blackcurrant seeds had a higher concentration of tocols and a similar profile. Goffman and Galletti [30] reported blackcurrant seed oil to contain a total of 1716 mg/kg tocopherols, distributed among three isomers at 34.8%, 60.2%, and 5.0%. Meanwhile, Velasco and Goffman [31] found that blackcurrant seed oil contained a total of 531 mg/100 g tocopherols on average, of which 89.6% was a single isomer. Shahidi and Shukla [9] reported that blackcurrant seed oil contained a total of 1500 mg/kg tocopherols on average. The tocopherol content of oils from blackcurrant and raspberry seeds was comparable to that of commercial oils rich in tocopherol, that is, maize and soybean oils at 162 and 180 mg/100 g oil, respectively [32]. We can attribute differences in tocol levels to the oil extraction method. The tocopherol content of oils is considerably affected by the refining process, which removes about 40% of tocopherols. In a study on the oxidative stability of oils, Kamal-Eldin [33] found that problems in stabilizing vegetable oils by the addition of tocopherols are due to the fact that native tocopherols in these oils are at the optimal levels necessary for their stabilization.

Phytosterols in Oils from Blackcurrant, Raspberry, Strawberry, and Apple Seeds. In most vegetable oils, sterols are the principal component of unsaponifiable substances, whose content in blackcurrant and borage oil is 1.2% [9]. The main sterols of vegetable oils reported by Warner and Mounts [34] and Rudzińska et al. [35] are β-sitosterol, campesterol, stigmasterol, brassicasterol, Δ5-avenasterol, Δ7-stigmasterol, and Δ7-avenasterol (Table 4). In most oils, their total content ranges from 400 to 800 mg/100 g, but there can be considerable differences in the content of these compounds between some oils [9,36]. Chromatographic analysis of the oils revealed 10 different phytosterols. Oils from blackcurrant, raspberry, strawberry, and apple seeds contain considerable amounts of phytosterols. The richest source of phytosterols was blackcurrant seed oil (6824.9 µg/g), followed by raspberry (5384.1 µg/g), strawberry (4643.1 µg/g), and apple seed oil (3460.0 µg/g).
The dominant compound in the analysed oils was β-sitosterol, whose content ranged from 2630 µg/g in apple seed oil to 3630 µg/g in blackcurrant seed oil. Sitosterol is an important phytosterol that reduces the absorption of cholesterol, which helps to maintain a low level of total cholesterol in peripheral blood. The analysed samples also contained considerable amounts of other phytosterols such as campesterol, sitostanol, cycloartenol, and citrostadienol. Unlike cholesterol, phytosterols generally have a positive effect on human health. They bind bile acids and reduce the risk of high blood levels of total cholesterol without affecting the levels of HDL cholesterol [37]. Another positive effect of phytosterols is that they inhibit the development of intestinal cancer. In human and animal bodies, phytosterols show mainly anticarcinogenic, antioxidative, and cholesterol-lowering activities [38].

Conclusions

The oils obtained from strawberry, blackcurrant, raspberry, and apple seeds are a rich source of essential unsaturated fatty acids (EUFA), tocochromanols, and phytosterols, which could find wide application in the cosmetic, pharmaceutical, and food industries. The native oils from strawberry, raspberry, blackcurrant, and apple seeds can be regarded as special oils (biooils), which, due to their possible nutraceutical effects, could find broader use not only in the cosmetic but also in the food industry. They could find special application in the design and production of foods with specific health-promoting effects, rich in various bioactive components helpful in preventing metabolic diseases of modern civilization.

Table 1: Physicochemical properties of oils from strawberry, raspberry, blackcurrant, and apple seeds. Values are means and coefficients of variation (CV).

Table 2: Composition of fatty acids in oils from strawberry, raspberry, blackcurrant, and apple seeds (% of total fatty acids).
2018-12-28T17:54:43.360Z
2015-01-14T00:00:00.000
{ "year": 2015, "sha1": "f5acc1bceccd421c877d6a8dd50159b8e04cea90", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/jchem/2015/659541.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f5acc1bceccd421c877d6a8dd50159b8e04cea90", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Chemistry" ] }
2112993
pes2o/s2orc
v3-fos-license
Estimating within-study covariances in multivariate meta-analysis with multiple outcomes

Multivariate meta-analysis allows the joint synthesis of effect estimates based on multiple outcomes from multiple studies, accounting for the potential correlations among them. However, standard methods for multivariate meta-analysis for multiple outcomes are restricted to problems where the within-study correlation is known or where individual participant data are available. This paper proposes an approach to approximating the within-study covariances based on information about likely correlations between underlying outcomes. We developed methods for both continuous and dichotomous data and for combinations of the two types. An application to a meta-analysis of treatments for stroke illustrates the use of the approximated covariance in multivariate meta-analysis with correlated outcomes. Copyright © 2012 John Wiley & Sons, Ltd.

A2. Derivation of equation (9)

We now derive equation (9), the covariance term between probabilities for two outcomes, given that outcome 1 is nested within outcome 2. We illustrate the covariance term for the treatment group; the covariance term for the control group can be derived in a similar way. Suppose we observe S1t out of Nt participants with outcome 1, and S2t out of Nt participants with outcome 2. Since outcome 1 is nested within outcome 2, we have S1t ≤ S2t. Typically this situation will be associated with n2t ≥ n1t. For the n2t − n1t participants for whom we do not know their outcomes, we assume they are missing at random and assign them in proportion to the observed frequencies; this yields a revised value of at.

A3. Derivation of equation (12)

Now we derive equation (12), the covariance between two inverse sample variances for two continuous outcomes within a study. We use the properties of the bivariate chi-squared distribution to assist this derivation. Bivariate chi-squared distribution: let (Z1, Z2) be a two-dimensional correlated random vector with mean (μ1, μ2)T.

B1 Simulation procedures

Overview. We simulate meta-analysis data for two outcomes, with outcome 1 being a continuous variable and outcome 2 being a dichotomous variable. The treatment effects are measured using a mean difference and a log odds ratio for the two outcomes, respectively. To induce correlation between the outcomes within studies, we simulate individual participant outcomes from a bivariate normal distribution and dichotomize the second variable. The correlations between the continuous and dichotomous outcomes, and between the treatment effect estimates for the two outcomes, are estimated empirically.

Simulation parameter specification. We consider a wide range of sample sizes to assess the dependence of estimation properties on the number of studies (Ns), the number of participants in the treatment (Nt) and control (Nc) groups, as well as the degree of dependence between outcomes. We approximately mimic the SBP and DBP outcomes in the acute stroke data when setting the between-study parameters. The within-study and between-study correlations (ρw and ρb) are each set to be either zero or strong (0.9). We perform simulations under a scenario in which the within-study treatment effect variances are the same for every study, and under a scenario in which the within-study treatment effect variances vary across studies. We achieve the latter by keeping the between-participant outcome variances the same, and varying the sample sizes within studies.
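The following minimal numpy sketch illustrates the outcome-generation idea described in the overview above: bivariate normal draws for two continuous outcomes within one study arm, with the second outcome dichotomized, and the correlation between the resulting outcomes estimated empirically. All parameter values in the example are illustrative placeholders rather than the settings used in the simulation study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_arm(n, rho_w=0.9, mean=(0.0, 0.0), sd=(1.0, 1.0), cutpoint=0.0):
    """One arm of one study: two correlated continuous outcomes, then
    outcome 2 dichotomized at `cutpoint` (all values are placeholders)."""
    cov = np.array([[sd[0] ** 2, rho_w * sd[0] * sd[1]],
                    [rho_w * sd[0] * sd[1], sd[1] ** 2]])
    y = rng.multivariate_normal(mean, cov, size=n)
    y1 = y[:, 0]                              # continuous outcome 1
    y2 = (y[:, 1] > cutpoint).astype(int)     # dichotomized outcome 2
    return y1, y2

y1, y2 = simulate_arm(n=200, rho_w=0.9)
# Empirical correlation between the continuous and dichotomized outcomes
print(np.corrcoef(y1, y2)[0, 1])
```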
The full set of scenarios considered is provided in Supplementary Table B1.

Outcome data generation. We start by simulating true treatment effects for two continuous outcomes using a bivariate normal distribution, with parameter values as specified above. Within each study, we simulate individual outcome data for each participant i, centering the control group participants on the observed control group mean across acute stroke trials for outcome 1 (the means and standard deviations for outcome 2 are arbitrary). The treatment group means are obtained by applying the simulated treatment effect for the study. We then dichotomize each y2tsi and y2csi using cut-points ct and cc, which are chosen to give the specified event rates in the treatment and control groups.

Results. Summary statistics from the simulations are given in Supplementary Tables B2-B4. Our first observation is that treatment effects are well estimated by all methods, and there is little difference between univariate and multivariate approaches. We note that multivariate meta-analysis is most likely to offer advantages over a univariate approach when there are non-ignorable missing data (Kirkham et al. 2012). Our simulation study used complete case data and did not address this issue. Comparing results across Tables B2-B4 shows the impact of the extent of heterogeneity for outcome two, with τ2² = 0, 0.1², and 0.2², respectively. Table B2 shows that when τ2² = 0, the multivariate approach reduces bias in estimating τ1 but increases bias for τ2. It is therefore not clear whether a multivariate approach is better than a univariate approach when there is heterogeneity in one outcome but not in the other. However, when τ2 departs slightly from zero, improvement in the estimates of the between-study variance is evident in scenarios 13-16, 18, and 21-22 in Table B3. If we increase τ2² to 0.2², there are consistent improvements in the estimates of the between-study variance for both outcomes through all scenarios in Table B4. This suggests that a multivariate approach is likely to outperform a univariate approach when there is heterogeneity for both outcomes, but not necessarily otherwise.

We turn now to the impact of the magnitude of the correlation. Strong within-study correlation is assumed in scenarios 13-20 and 25-32. In these situations, both UM and MM(0) are misspecified models with respect to the within-study correlation. MM(ρe) assumes a common correlation between treatment effects and MM(ρo) assumes a common correlation between outcomes, using our formulae to approximate the within-study covariance. We expect that UM and MM(0) will be worse than the latter two under misspecification of the within-study correlation, while MM(ρo) should be similar to MM(ρe). We observe that the bias for τ11² in UM and the bias for τ22² in MM(0) are inflated in scenarios 13-20 and 25-32. In Table B3, the bias for the between-study correlation ρb in MM(0) is sometimes inflated, particularly in scenarios 13-14 and 17-18, where the between-study correlation is zero; the biases are 0.78, 0.92, 0.85, and 0.66, respectively. These biases appear to be serious, given the parameter space [-1, 1] for correlation coefficients. Similar findings are seen in scenarios 25-26 and 29-30. These results indicate that when the within-study correlation is strong and the between-study correlation is weak, assuming zero within-study correlation will introduce bias into the estimates of the between-study correlation. This is one other situation in which correct specification or approximation of the within-study correlation is important.
Estimation from MM(ρo) is generally not worse than from MM(ρe). The particular situation in which we expect MM(ρo) to perform better is when sample sizes vary across studies (the even-numbered scenarios), since the covariances between treatment effects then vary across studies even if the covariances between outcomes remain the same. We observe such a pattern for situations in which the between-study correlation is low, but not when the between-study correlation is high (which might be explained in part by our relatively small underlying heterogeneity variance for the second outcome, which does not allow the high between-study correlation to manifest itself). Finally, Tables B3 and B4 show that, if there is a large number of studies (n = 50), the bias in the between-study variance estimate is reduced by using a multivariate approach, and the bias in the between-study correlation estimate is minimized in our proposed approach. However, when there are few studies (n = 10), no clear improvement is observed for the multivariate approach over the univariate approach, unless the within-study (between-study) correlations are high (low).

Notation for Supplementary Tables B2-B4: UM: univariate meta-analyses; MM(0), MM(ρe), MM(ρo): multivariate meta-analyses, with MM(0) assuming zero within-study correlations, MM(ρe) assuming common non-zero within-study correlations between treatment effects, and MM(ρo) assuming common non-zero within-study correlations between outcomes; nt and nc: number of participants in the treatment group and control group, respectively; ns: number of studies; τ1 and τ2: between-study standard deviations for the two outcomes, respectively; ρb: between-study correlation coefficient for the overall effects; ρw: within-study correlation coefficient for the outcomes (before dichotomization); pt and pc: event rates in the treatment and control group, respectively.
2016-05-12T22:15:10.714Z
2012-12-03T00:00:00.000
{ "year": 2012, "sha1": "d15d15d291c3f20f3c8ef6d4eca13496e00179dc", "oa_license": "CCBY", "oa_url": "https://europepmc.org/articles/pmc3618374?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "c1eb55de98e56504e18e446ebac2bfee55820cf1", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
10207081
pes2o/s2orc
v3-fos-license
Reversible myeloneuropathy and pancytopenia related to copper deficiency from gastric bypass surgery: A case report

Introduction: Weight loss surgery has become an increasingly popular means of combating the obesity epidemic in modern society but, like any procedure, it is not free of immediate and long-term complications. Copper deficiency has occasionally been reported to occur many years afterwards, but with an increased incidence of bariatric procedures and reduced awareness, the effects of this deficiency could now appear to favor an earlier onset. Case Report: We report a case of a 56-year-old Caucasian female with a history of gastric bypass surgery five years ago, presenting with an unsteady gait, weakness, decreased visual acuity, tingling with numbness in her hands and pancytopenia for the last month. She was treated for copper deficiency. Conclusion: Copper deficiency has been shown to cause a wide array of abnormalities related to inactivation of enzymes such as cytochrome c oxidase, superoxide dismutase, dopamine beta hydroxylase and metallothionein. This can lead to reduced nerve transmission within the central nervous system, causing motor and sensory polymyeloneuropathy and an overall reduction of the energy required for blood cell formation. With early surveillance, such anomalies can be detected and the effects of this micronutrient deficiency potentially reversed.

INTRODUCTION Morbid obesity is one of the major risk factors associated with chronic diseases and conditions such as heart disease, cancer, stroke, diabetes and hypertension [1]. Given this impact on morbidity and mortality in the 21st century, weight loss surgery options are bound to be more prevalent than ever before, and the Roux-en-Y gastric bypass remains the most common type done in the United States [2]. This procedure involves dividing the stomach into a small upper pouch and anastomosing it to a distal segment of the jejunum, thus creating a gastrojejunostomy for drainage of gastric remnant contents, bile and pancreatic enzymes. This bypass potentially prevents common micronutrients such as iron, vitamin B12, calcium and vitamin D from being absorbed through the latter stomach and initial part of the small intestine [3-5]. The resulting deficiencies, including copper deficiency, can lead to myeloneuropathies, anemia, leucopenia and sometimes thrombocytopenia, but with early surveillance these conditions can be reversed [6,7].

CASE REPORT A 56-year-old Caucasian female presented to our emergency room with complaints of unsteady gait, visual disturbance, dizziness, fatigue and recurrent tingling with numbness in her hands for the past month. She had lost 68.03 kg since her gastric bypass surgery five years ago and more recently developed poor appetite, recurrent diarrhea and nausea despite being compliant with once daily iron, thiamine and folic acid supplements. Her gait had worsened to the point of requiring a cane for ambulation due to frequent falls. A recent upper endoscopy showed a gastrojejunal anastomotic ulcer (H. pylori negative) and colonoscopy revealed mild diverticulosis, with several previous examinations in the past failing to show any clear-cut etiology. The past medical history of the patient was significant for depression and chronic hepatitis C (genotype 2). She was a retired home care administrator, with no family history of gastrointestinal malignancy, and routinely took pantoprazole, citalopram, calcium and vitamin D supplements.
On physical examination, the patient appeared frail with a body mass index of 17.5 kg/m2 (84% of ideal body weight), blood pressure 124/84 mmHg, heart rate 90/min and respiratory rate 14/min. Neurological review revealed reduced strength and sensation over her lower extremities, decreased ankle jerk, ataxic gait and moderate loss of vibratory and joint position sense in the toes. An ophthalmologic examination showed reduced visual acuity and protracted optic disc swelling. The rest of the physical examination was normal. Her laboratory examination was notable for a white blood cell count of 2.2×10³/mm³ (nadir 1.4×10³/mm³), hemoglobin 8.9 g/dL, mean corpuscular volume (MCV) 72 fl and platelet count 8.8×10⁵/mm³. Her iron, percent saturation and vitamin B12 levels were preserved in a high normal range at 76 μg/dL, 40% and 652 pg/mL, respectively. A hepatitis C viral load was undetectable while creatinine phosphokinase remained within normal limits. Serologies for Lyme titer and syphilis were undetectable. Magnetic resonance imaging of the brain and entire spine was otherwise normal except for minimal T2 hyperintensities within the periventricular white matter suggesting demyelination [8-10]. Her cerebrospinal fluid analysis was essentially unremarkable with no evidence of oligoclonal banding. An electroencephalogram recording revealed subtle slowing of the background suggestive of mild encephalopathy with no epileptiform activity, and nerve conduction studies showed some motor and sensory polyneuropathy affecting different parts of her upper and lower extremities. At that juncture, given her history of gastric bypass surgery, ongoing pancytopenia and complaints of dizziness with unsteady gait, it was decided to assess for copper deficiency. The serum copper level was low at 0.44 μg/mL (normal range 0.75-1.45 μg/mL), along with zinc at 0.37 μg/mL (normal range 0.66-1.1 μg/mL), but the ceruloplasmin level was normal. The 24-hour urine collection for copper was also low at 9 μg/L (normal range 15-60 μg/L), and over the next three days she received a once daily intravenous infusion (containing 1 mg of copper) in dextrose water along with a high potency Women's Ultra Mega vitamin supplement four times a day. This contained 2 mg of copper, several fat- and water-soluble vitamins as well as trace elements like manganese, chromium, selenium, magnesium and zinc. Over the next four to five days, her gait and vision improved remarkably, with increased acuity and resolution of the optic disc swelling on examination. She no longer required any assistance with ambulation after a week, and was subsequently discharged on oral copper supplements. Table 1 gives her follow-up laboratory data after the first and second months. On subsequent follow-ups in the outpatient clinic, the patient had shown great improvement in her overall strength and ambulation but still had some lingering but subtle tingling with numbness in the hands.

DISCUSSION Copper remains an essential nutrient, serving as a ceruloplasmin cofactor in the formation of transferrin [11]. It thus facilitates iron uptake and ensures adequate red and white blood cell formation. Copper also plays a critical role in activating enzymes such as cytochrome c oxidase, superoxide dismutase, dopamine beta hydroxylase and metallothionein, and its deficiency can lead to reduced nerve transmission within the central nervous system and less adenosine triphosphate production for the synthesis of hemoglobin [5].
Table 1: Laboratory data compared from admission and one month later after copper supplementation.

The actual mechanism of how neutropenia occurs is still unknown, but Lazarchick et al. suggested an inhibition of differentiation and self-renewal of CD34-positive hematopoietic progenitor cells as a likely cause [12]. Most cases of copper deficiency myeloneuropathy typically occur a few decades after gastric bypass surgery, but in our patient the symptoms were seen only after a few years [8,13]. Shorter gastrointestinal tracts may offer fewer sites for reabsorption, and a lack of micronutrient replacement can compound this deficiency. Zinc can interfere with copper metabolism since they compete for absorption via the same site, and O'Donnell et al. advised against simultaneous supplementation in situations where both are found to be deficient [14]. Previous studies have linked hyperzincemia from toxic exposures as a potential cause of copper deficiency, but this was not the case in our patient. With varying degrees of copper deficiency, patients may not necessarily have all the signs and symptoms listed, and in order to make the diagnosis a clinician would need to have a high index of suspicion, along with demonstrably low copper levels. The occurrence of long-term irreversible neurological damage is not known, and as such it is paramount to consider early surveillance. Kumar et al. have also studied the value of urinary copper as a measure of its deficiency but concluded that the serum copper level remains the best and most reliable assay [8]. An initial intravenous dose of 1 mg of copper is advised for the first three days, after which patients can continue on oral supplementation of 8 mg of copper gluconate daily [15]. Our patient received these, and blood levels of copper gradually normalized over the next two months along with the other respective hematologic parameters. She did not require any blood transfusions during her stay, and the abatement of her symptomatology was quite impressive over the first days to weeks after commencing therapy. In the absence of other causes of pancytopenia, blood levels usually improve or normalize anywhere within three days to six months after supplementation [11,15-17]. The actual threshold between copper concentrations, tissue stores and neurological sequelae remains to be established, and more studies shall be required in the future to establish this. The myeloneuropathy described here can also mimic the subacute combined degeneration typically seen with vitamin B12 deficiency, so this should also be assessed and treated promptly. Even though serial levels of serum copper measured over time were seen to rise, potential confounding effects could exist with the various vitamins and trace elements contained in the branded high potency Women's Ultra Mega vitamin supplement.

CONCLUSION Early surveillance for copper deficiency has its benefits, and copper status ought to be routinely evaluated after a patient undergoes gastric bypass surgery, as this gives the clinician an avenue to identify preventable and reversible causes of blood cell disorders, leukemic transformation and polyneuropathies that would otherwise have been termed idiopathic.
********* Author Contributions Laide Bello -Substantial contributions to conception and design, Analysis and interpretation of data, Drafting the article, revising it critically for important intellectual content, Final approval of the version to be published Joseph Fiore -Substantial contributions to conception and design, Analysis and interpretation of data, Revising it critically for important intellectual content, Final approval of the version to be published Guarantor The corresponding author is the guarantor of submission.
2019-03-11T13:06:39.738Z
2013-10-11T00:00:00.000
{ "year": 2013, "sha1": "a5ef7fcb570b56d9102922746e396e67dbce5a44", "oa_license": "CCBY", "oa_url": "http://www.ijcasereportsandimages.com/archive/2013/001-2013-ijcri/006-01-2013-bello/ijcri-00601201366-bello.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "ac75e306ded19835a3695308d7a6bb86e1346e3e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
199116276
pes2o/s2orc
v3-fos-license
Mathematical modeling of the generation of acoustic waves in a two-channel system

The purpose of this study is to determine the influence of the diameter of a subsonic conical nozzle on the generation of acoustic waves in a two-channel system. Three-dimensional numerical simulation of flow in the duct of an actual device was performed. A complete picture of the flow generated in the acoustic-convective drying system of the Institute of Theoretical and Applied Mechanics, SB RAS, was obtained. The results of the study show that to maintain the amplitude-frequency characteristics of the workflow with increasing diameter of the subsonic conical nozzle, it is necessary to reduce the settling chamber pressure. The numerically simulated amplitude-frequency characteristics of the acoustic flow generated in the working section of the drying system are in satisfactory agreement with the results of physical experiments.

Introduction
In modern industry, there are various technologies for drying capillary-porous materials, the most popular of which is the thermal-convective method [1]. However, there is a promising method of acoustic-convective drying of porous materials, which, according to a series of studies [2-5], has several advantages over the thermal-convective method, including the possibility of drying materials at room temperature. The amplitude-frequency characteristics (AFCs) of oscillations in the duct of the acoustic-convective dryer (ACD) during its operation have been obtained in previous physical experiments [3,6]. Previously, parametric studies of the jet generating self-oscillations with varying length of the resonator have been performed [7,8] under the assumption of plane and axial symmetry. A three-dimensional numerical simulation has been carried out [9], showing that the AFCs of the unsteady flows formed in the ACD duct are in qualitative agreement with those obtained in full-scale experiments with a conical nozzle diameter of 8 mm. The present study aims to determine the effect of the diameter of the conical nozzle on the physical stream flow in the ACD duct by mathematical modeling.

Physico-mathematical formulation of the problem
The ACD geometry can be represented as two perpendicular channels. The first channel consists of a cylindrical settling chamber with a subsonic conical nozzle and a cylindrical resonator with a closed end, which is located coaxially to the nozzle at a certain distance from it. The cylindrical channel crosses the second channel, which has a square cross section, and conventionally divides it into two sections. To study the effect of the diameter of the conical nozzle on the dynamics of the formation of acoustic-convective flow, we used subsonic conical nozzles of diameter 8, 10.5, and 12 mm. For the numerical description of the gas-dynamic flow, we used an approach based on solving the Reynolds-averaged Navier-Stokes equations closed with the k-ω Wilcox turbulence model [10]. The working gas was air with the standard thermal conductivity λ and specific heat Cp. Pressure was calculated from the ideal gas equation of state, and viscosity from the three-coefficient Sutherland formula

μ = μ0 (T/T0)^(3/2) (T0 + S)/(T + S),

where μ0 is the reference viscosity in kg/(m·s), T0 is the reference temperature in Kelvin, and S is the Sutherland constant.

Numerical simulation
The ANSYS Fluent software was used for the numerical simulation. A symmetric three-dimensional geometric model of the ACD was constructed taking into account the features of the internal duct of the dryer.
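To make the three-coefficient Sutherland law quoted above concrete, here is a minimal Python sketch. The default reference values are common textbook constants for air (μ0 ≈ 1.716e-5 kg/(m·s) at T0 = 273.15 K, S = 110.4 K); they are assumptions for illustration, not values taken from this paper.

```python
def sutherland_viscosity(T, mu0=1.716e-5, T0=273.15, S=110.4):
    """Three-coefficient Sutherland law: mu = mu0*(T/T0)**1.5*(T0+S)/(T+S).
    Defaults are common textbook values for air (assumed, not from the paper)."""
    return mu0 * (T / T0) ** 1.5 * (T0 + S) / (T + S)

# Example: dynamic viscosity of air at room temperature
print(sutherland_viscosity(293.15))  # ~1.81e-5 kg/(m*s)
```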
The grid area was divided into small segments to improve the accuracy of the solution in complexly described units of the model and improve the quality of the solution. The region of the intersection of the channels was discretized by a tetrahedral computational grid, which can be modified to a polyhedral grid by means of Fluent in order to improve the quality of the cells. The remaining segments of the computational domain are covered by a multi-block structured hexahedral grid which was refined toward the walls of the model. For each nozzle diameter, we specified the corresponding static and total pressures in the settling chamber observed in field experiments [3,6]. For a nozzle diameter of 12 mm, the total pressure was 458 kPa and the static pressure was 451 kPa, and for nozzle diameters of 8 and 10.5 mm, the total pressure was 789 kPa and 616 kPa, respectively, and the static pressure was 769 kPa and 601 kPa, respectively. Results and Discussion The numerical simulation resulted in the self-oscillating process of filling/emptying the cylindrical resonator with the air jet accelerated by the subsonic conical nozzle. The AFCs and the flow pattern of the acoustic streaming developed in the ACD duct were obtained. Consider the flow pattern (see figure. 1) formed in the ACD duct with a nozzle diameter of 12 mm. At the initial time, the jet accelerated by the subsonic conical nozzle issues into the working space of the dryer, reaches the sound speed at the nozzle exit, and is accelerated to supersonic speed, resulting in the formation of a barrel-shaped structure terminated by a Mach disk. Behind the Mach disk, the flow is decelerated to subsonic speed in the central part of the jet, and at the periphery of the jet, the flow remains supersonic. The stream flowing from the settling chamber fills the cylindrical resonator; at the beginning of the inflow phase, the resonator pressure is 130 kPa. A new pressure field of 250 kPa is formed behind the compression wave moving into the resonator. The compression wave reaching the closed end of the resonator is reflected by a compression wave of greater amplitude and moves in the opposite direction. The medium behind the reflected wave has a total pressure of about 350 kPa, and the pressure at the wave front is 500 kPa. Reaching the free end of the resonator, the reflected compression wave encounters the jet flowing out of the conical nozzle and begins to interact with it. The interaction results in deformation of the "barrel", and a pressurized jet enters the second channel with a square cross section, thus emptying the resonator. When most of the gas has issued from the resonator, the pressure in it decreases and a rarefaction wave with a total front pressure of about 200 kPa moves into the resonator. After reaching the closed end, the rarefaction wave is reflected by a rarefaction wave, behind which a uniform low-pressure field of 130 kPa is formed. The wave reaches the free end of the resonator, and the resonator is again filled with the jet flowing from the nozzle. This periodic process is a source of high-intensity acoustic oscillations that underlie the operation of the ACD. Figure 2 shows the total pressure fields in the working channel of the ACD at the same time for nozzles with a diameter of 8, 10.5, and 12 mm. The total pressure in the settling chamber with a conical nozzle of 8 mm diameter is about 7.9 bar, and for diameters of 10.5 and 12 mm, it is 6.2 and 4.6 bar, respectively. 
The gas jet flowing out of the settling chamber fills the resonator, into which a compression wave propagates; the speed of propagation of the generated wave does not depend on the diameter of the nozzle. The total pressure magnitude behind the compression wave is 2.6 bar for all nozzle diameters. The front of the total-pressure compression wave has a pronounced dispersion character. Increasing the nozzle diameter and hence the greater length of the barrel leads to a partial closure of the neck of the resonator, which complicates the outflow from the resonator; as a result, in the rarefaction phase, the resonator pressure was 0.9 bar for a diameter of 8 mm, 1.3 bar for 10.5 mm, and 1.4 bar for 12 mm. For these diameters of the conical nozzle, we determined AFCs which show that increasing the diameter of the conical subsonic nozzle and appropriately decreasing the settling chamber pressure provide similar frequencies and intensities. Figure 3 presents the results of comparison of the AFCs for the three diameters of the conical nozzle obtained by numerical simulation and in physical experiments. As can be seen, varying the nozzle diameter leads to a negligible change in the resonating frequency of acoustic oscillations, within a dozen hertz, but, at the same time, the amplitude of the workflow in the working section remains almost unchanged. Conclusions The numerical studies have provided a complete picture of the acoustic streaming formed in the working section of the ACD for the corresponding diameters of the conical nozzle. The relationship between the increase in the diameter of the conical nozzle and the decrease in the settling chamber pressure at constant AFCs of the workflow was obtained. Validation of the numerical simulation against the results of field experiments showed satisfactory agreement between the simulated and experimental AFCs of the workflow.
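The AFCs discussed throughout this study are, in essence, amplitude spectra of the pressure signal in the working section. As a rough, purely hypothetical illustration of how such a spectrum could be extracted from a simulated pressure time series (the synthetic signal, its 900 Hz tone, and the sampling rate below are made-up values, not results from this work), consider the following numpy sketch:

```python
import numpy as np

# Hypothetical pressure-probe record: a tone near 900 Hz plus noise
# (all values are illustrative placeholders).
fs = 20000.0                       # sampling frequency, Hz
t = np.arange(0, 0.5, 1.0 / fs)    # 0.5 s record
p = 5e3 * np.sin(2 * np.pi * 900 * t) + 1e2 * np.random.randn(t.size)  # Pa

# One-sided amplitude spectrum of the probe signal
spectrum = np.abs(np.fft.rfft(p)) * 2.0 / p.size
freqs = np.fft.rfftfreq(p.size, d=1.0 / fs)
print(freqs[np.argmax(spectrum[1:]) + 1])  # dominant frequency, ~900 Hz
```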
2019-08-02T20:21:51.813Z
2019-07-01T00:00:00.000
{ "year": 2019, "sha1": "3f1039d52cccaee3fa2f53912e982860f4b6e946", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1268/1/012037", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "89dd0bd6022472e3a86326fe9b9ad21792c6b123", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Physics" ] }
155200651
pes2o/s2orc
v3-fos-license
Mining conditions specific hub genes from RNA‐Seq gene‐expression data via biclustering and their application to drug discovery Gene‐expression data is being widely used for various clinical research. It represents expression levels of thousands of genes across the various experimental conditions simultaneously. Mining conditions specific hub genes from gene‐expression data is a challenging task. Conditions specific hub genes signify the functional behaviour of bicluster across the subset of conditions and can act as prognostic or diagnostic markers of the diseases. In this study, the authors have introduced a new approach for identifying conditions specific hub genes from the RNA‐Seq data using a biclustering algorithm. In the proposed approach, efficient ‘runibic’ biclustering algorithm, the concept of gene co‐expression network and concept of protein–protein interaction network have been used for getting better performance. The result shows that the proposed approach extracts biologically significant conditions specific hub genes which play an important role in various biological processes and pathways. These conditions specific hub genes can be used as prognostic or diagnostic biomarkers. Conditions specific hub genes will be helpful to reduce the analysis time and increase the accuracy of further research. Also, they summarised application of the proposed approach to the drug discovery process. Introduction Biological data are publically available with increasing speed due to the advancement of the technology. Analysis and understanding of enormous biological data is a challenging task. Biological data, particularly transcriptomic data are publically available in various forms such as microarray gene-expression data [1,2], RNA-Seq data [3], DNA-Seq data [4] and other [5]. Several researchers are extensively using these data for the various categories of research [6][7][8]. Gene-expression data is one of the increasingly used biological data in biomedical research [9]. The behavioural function of thousands of genes and the disease mechanisms can be extracted by analysing the gene-expression data. Proper analysis of geneexpression data always plays a vital role in finding a solution to the various biological problems. Microarray is the famous technology used for representing the gene expression but it has some drawbacks [10]. RNA-Seq is another high-throughput emerging technology for representing the gene expression [11]. RNA-Seq data has some advantages over the microarray data. Today, the cost of RNA-Seq is slowly decreasing. The RNA-Seq technique allows identifying both known and novel genes. Owing to the reducing cost and other comparatively important features, researchers are more focusing on the analysis of RNA-Seq data. Numerous studies have been applied to the gene-expression data for mining co-expressed gene modules and hub genes associated with the modules [12,13]. However, no studies have focused on mining conditions specific hub genes from the coexpressed gene modules. Identification of hub genes is one of the important clinical applications, where potential genes among the co-expressed genes can be identified [14]. Hub gene is the highly connected gene of the network, which has the tendency to coordinate the other co-expressed genes and represent the overall behaviour of the bicluster. Gene co-expression network (GCN) is a popular method used for finding the hub genes [15]. 
The GCN can be used for many purposes such as gene prioritisation [16], pathway analysis [17,18], gene function identification [19] etc. Most GCN have been constructed via clustering approach [20]. However, clustering-based GCN does not produce the results specific to conditions, it always considered all experimental conditions. Weighted correlation gene network analysis (WCGNA) is one of the rapidly used tools for the identification of hub genes from the co-expressed genes [20]. The WCGNA is based on the clustering approach. Clustering of gene-expression data [21] has many drawbacks [22]. So, the key genes which play an important role only in specific situations cannot be identified using the concept of WCGNA. For overcoming the drawbacks of the WCGNA-based approach, biclustering-based approach plays an important role. Biclustering [22] produces the subset of highly coexpressed genes across the subset of conditions. By constructing the GCN of the bicluster, the interconnection between the coexpressed genes can be visualised and can understand the gene functions more effectively. In biclustering [23][24][25][26][27], simultaneous clustering performed on both the dimensions, i.e. on gene side as well as on sample or conditions side. From the clustering, we will get the genes which are correlated across all the conditions of the data. Therefore, the biclustering technique is more effective to find more biologically significant patterns as compared with clustering techniques from the gene-expression data [22]. Biclusters are used in many biological applications for extracting unfolds significant information [28][29][30]. For the further analysis of clustering-based hub genes, we have to consider all conditions and for future research based on conditions specific (biclustering based) hub genes, we have to do analysis across the specific conditions only. By using the conditions specific hub genes, time for the analysis across the unnecessary conditions will reduce and increase the accuracy. In this paper, we have proposed a new approach for identifying conditions specific hub genes by using the biclustering algorithm from the RNA-Seq gene-expression data. For finding the biclusters, 'runibic' biclustering algorithm has been used. Set of biclusters obtained from the biclustering algorithms have been validated for biological significance by using the online Generic Gene Ontology (GO) Term Finder tool [31]. All the significant biclusters are further considered for extracting the hub genes using the concept of the GCN. For each significant bicluster, GCN has been constructed. From each GCN, hub genes are identified. Also, we have constructed the protein-protein interaction network (PPIN) for each significant biclusters, and from the constructed PPIN hub genes are identified. Finally, from the subset of hub genes of GCN and PPIN of the specific bicluster, common hub genes are identified. These identified common hub genes are the robust conditions specific hub genes for the specific biclusters. There might be more than one hub genes present in the single bicluster. In this way, hub genes for each and every significant bicluster have been identified. With the help of an example, we have demonstrated the entire approach and done the rigorous analysis and validations. These hub genes can be used as a prognostic and diagnostic marker of the respective diseases in many clinical applications after the rigorous analysis by the clinicians. 
Our major contributions are summarised as follows: • Proposed a novel method for mining the conditions specific hub genes from the RNA-Seq gene-expression data via biclustering algorithm. • Presented computational analysis of biclustering algorithm on various datasets and validated the results by verifying the biological significance of biclusters. • Chosen an efficient biclustering algorithm and extracted biologically significant biclusters from the large-scale RNA-Seq gene-expression datasets. • Constructed the GCN for each significant bicluster with the help of difference matrix and gene correlation matrix concept. • Identified hub genes from each constructed GCN. Also, identified the hub genes from each constructed PPIN using STRING software tool. Finally, validated the results. • Presented comparative analysis with WCGNA-based approach. • Summarised the role of conditions specific hub genes in the drug discovery process. This paper is divided into the followed sections. Section 2 describes some important terminologies related to the proposed approach and illustrated the proposed approach in detail with the help of the flow diagram. Section 3 presents the results of an experimental analysis on various synthetic as well as real datasets followed by a discussion on the same. Section 4 focused on the application of the proposed approach to drug discovery. Section 5 is about the conclusions of our experimental analysis. Materials and methods In this section, some important terminologies related to the proposed approach are described along with the definitions and methodology used in the proposed approach. The proposed approach is further illustrated with the help of a flow diagram. Preliminaries and definitions 2.1.1 Bicluster: Bicluster of gene-expression data is a subset of consistent behaving genes across the subset of conditions and vice versa. The process of extracting biclusters is known as biclustering [32]. Co-expressed genes: Co-expressed genes are consistent behaving genes across the subset of conditions. Generally, genes in the bicluster are considered as co-expressed genes. Co-expressed genes can use as a prognostic or diagnostic measure in many clinical as well as biological applications [33]. Gene co-expression network: GCN is a popular tool for understanding the diseases and its development across various stages at the gene level. GCN is defined as un-directed gene network, in which nodes represent the co-expressed genes and edges indicate the correlation between the nodes [33]. Hub gene: The highly connected node in the GCN is called as a hub gene. All nodes of the GCN of bicluster are co-expressed genes. The node which is having the highest degree of connectivity is considered as a hub gene of that co-expression network. In a single GCN, more than one node might have the same highest degree. Hence, multiple hub genes can be present in a single GCN. The identified hub genes can be considered as diagnostic and prognostic markers for the diseases [34]. Proposed approach Hub genes extraction from the large gene-expression dataset is an arduous task. In recent years, many methods for finding the hub genes have been introduced. For achieving more accurate biologically significant condition-specific hub genes, we have used the concept of biclustering. Fig. 1 shows the process diagram of the proposed approach. The proposed approach is divided into two phases. The first phase is labelled as 'A' and other is labelled as 'B'. 
In the phase 'A', pre-processed RNA-Seq data has been used as an input to the runibic biclustering algorithm. The set of biclusters have been extracted from the gene-expression data using the runibic biclustering algorithm. From the set of extracted biclusters, biologically significant biclusters have been identified with the help of online Generic GO Term Finder tool. In the phase 'B', difference matrix with respect to the conditions has been computed for each significant bicluster. Furthermore, a difference matrix has been used for computing the correlation matrix with respect to the genes. Then, the GCN has been constructed based on the correlation matrix. The nodes with the highest degree of connectivity in the network are identified. The identified nodes are called as hub genes. Phase 'B' is repeated for all the significant biclusters. In this way, the list of conditions specific hub genes can be identified for the particular dataset. Details about each and every step involved in the process of the proposed approach are given below. 2.2.1 Pre-processing of gene-expression data: RNA-Seq data analysis includes several steps for obtaining the expressions. The steps include obtaining sequenced reads, normalisation and quality control. Various tools are available for obtaining the expression counts from RNA-Seq data. The input to the biclustering algorithm is an RNA-Seq data in the form of fragments per kilobase of transcript per million mapped reads (FPKM) and reads per kilobase of transcript per million mapped reads (RPKM). The FPKM/ RPKM RNA-Seq data is available in a matrix format, where rows represent the genes and columns represent the conditions or samples. Applying biclustering algorithm: Several approaches for identification of hub genes are based on the concept of clustering but clustering on gene-expression data has many pitfalls. By applying the biclustering to gene-expression data, we will get all possible groups of co-expressed genes across the subset of conditions called biclusters. Hence, hub genes identified from all extracted significant biclusters will be more biologically significant as compared with the hub genes identified from the clustering technique. In biology, all genes are not expressed consistently across all conditions and not always active in all conditions. Therefore, it is more significant to focus on the only subset of conditions and not on all the conditions for the analysis in the clinical research. Hence, we have used the concept of biclustering for the identification of hub genes from the gene-expression data. Most biclustering algorithms are bound to specific features and not work properly on all aspects. Hence, the selection of proper biclustering algorithm for the specific clinical application is a challenging task. After doing the experimental analysis of some state-of-the-art biclustering algorithm, it is found that the algorithm 'runibic' is efficient and performs effectively on most of the aspects [35]. In this paper, we have used the 'runibic' biclustering algorithm for extracting the biologically significant biclusters from the RNA-Seq gene-expression data. The first time, runibic biclustering algorithm is applied to the RNA-Seq data. The runibic algorithm is the parallel form of unibic biclustering algorithm [29]. Several existing biclustering algorithms failed to perform efficiently on large-scale datasets but runibic algorithm performs efficiently on large-scale datasets. 
Another reason behind selecting the runibic algorithm is that it performs well on all important aspects related to biclustering problems, such as overlapping, noise, stable output, bicluster size, biological significance, comprehensive search, etc. The runibic biclustering algorithm mainly extracts trend-preserving biclusters, but it is also able to extract all the remaining types of biclusters [22,34]. Overall, the runibic algorithm is the better algorithm for bicluster extraction from gene-expression data. It also performs effectively on RNA-Seq data. Very few biclustering algorithms produce better results on RNA-Seq datasets because most of the biclustering algorithms were proposed for microarray datasets.

Gene set enrichment analyses: Gene set enrichment analysis is used for identifying the biological significance of the biclusters. The online Generic GO Term Finder tool has been used for the gene set enrichment analysis. The biological significance of the biclusters is validated with the help of the p-value. The p-value is the probability of seeing at least a particular number of genes, out of the total genes in the list, annotated to a GO term. The value signifies how well a group of genes matches different GO categories. A bicluster which satisfies the criterion of p-value <0.01 is a biologically significant bicluster. A bicluster with a lower p-value will be more significant. Hence, we have used various p-values such as 0.01 and 0.001 for getting more significant biclusters. In this way, the set of biologically significant biclusters has been identified. Genes in the biologically significant biclusters are actively involved in many biological processes. Therefore, the set of significant biclusters is used for the construction of GCNs, and insignificant biclusters have not been used for the further process.

Construction of GCN: The GCN plays a vital role in understanding the functionality of the co-expressed genes. Construction of a conditions specific GCN involves three steps. The first step is computing the difference matrix of the biclusters with respect to the conditions. The runibic algorithm extracts coherent evolution biclusters, and in the proposed approach we use the coherent evolution biclusters extracted by the 'runibic' algorithm. We have calculated the difference matrix of the biclusters with respect to the conditions. If we use the correlation matrix directly for the construction of the GCN, without using the difference matrix, then we will get irrelevant results and the GCN cannot be constructed properly; accurate key genes cannot be extracted and the results may be affected. If we use the correlation matrix after the difference matrix, then we will get results relevant to the behaviour of the genes. Therefore, the difference matrix increases the correlation between the genes and gives better results, and for improving the accuracy of the results we have first computed the difference matrix. The second step is finding the gene correlation matrix of the difference matrix using Pearson's correlation measure. Pearson's correlation for a gene pair is given by the equation below:

r = (N Σab − Σa Σb) / √( [N Σa² − (Σa)²] [N Σb² − (Σb)²] ),

where r is the correlation, N is the number of samples, and a and b are the expression levels of the genes in the gene pair. The third step is the construction of an un-directed graph from the obtained gene correlation matrix. This un-directed graph is referred to as a GCN. Here, a co-expression network is constructed from each and every significant bicluster.
While constructing the GCN, correlation threshold α = 0.95 has been considered for getting a more accurate result. In the network, a node represents the gene and an edge represents the correlation between genes. In the proposed approach, the GCN is further used for identifying the hub genes. The same procedure is applied to all the significant biclusters for GCN construction. Hub genes identification: Hub gene is the highly connected gene of the network. Hub gene explains the functional behaviour of a bicluster. From each constructed network of the biclusters, hub genes are identified by computing the degree of the node. There might be chances that similar hub genes can be extracted from more than one bicluster because of the overlapping property of the biclusters. The hub genes can be used in many clinical applications such as the prognostic and diagnostic marker for the diseases, pathway analysis, regulatory elements etc. Here, two types of networks GCN and PPIN are constructed for the hub gene identification. Hub genes identification using GCN: The GCN is the GCN of the bicluster. For each significant bicluster, GCN is constructed. From each constructed GCN, hub genes are identified. Hub genes are more relevant to the functionality of the co-expression network than the other genes in the GCN. For extracting hub genes, after the various experimental analysis, we have decided the threshold for the minimum number of connected genes which is more than five in the GCN. The same procedure has been applied to all GCNs for extracting the hub genes. Hub genes identification using PPIN: For getting the more robust hub gene for the specific biclusters, we have also constructed the PPIN using online STRING software tool [36]. Highly connected genes of the PPIN are called as the hub genes of the PPIN. For extracting hub genes, after the experimental analysis, we have decided the threshold for the minimum number of connected genes which is more than five in the PPIN. In some literature, it is more than eight [37]. Hub genes obtained from the PPIN give more robustness about the biological significance. After identification of the hub genes using both the networks GCN and PPIN, we have extracted the common hub genes from the GCN and PPIN of each bicluster. These common hub genes are the robust conditions specific hub genes for the specific bicluster. In this way, conditions specific hub genes are identified for each significant biclusters. Results and discussion The aim of the proposed approach is to extract the conditions of specific hub genes from the gene-expression data. The 'runibic' biclustering algorithm has been used in the proposed approach. To validate the results of the biclustering algorithm, we performed experiments on both synthetic data and real data. We have compared the results with state-of-the-art biclustering algorithms with respect to various performance measuring issues. For finding the hub genes, RNA-Seq real datasets have been used. The results have been compared with the WCGNA-based approach. Also, we have performed the validation of the results using various performance measuring aspects. For the experiments, highperformance computing workstation running a Linux system has been used. Results on synthetic data Experiments have been performed on the synthetic dataset for validating the performance of the 'runibic' biclustering algorithm. Synthetic data matrix of size 1000 × 50 has been created randomly. 
In the synthetic dataset, various biclusters, including noisy and overlapping biclusters, have been implanted, and the 'runibic' biclustering algorithm has then been applied to it. For performance evaluation, several important criteria have been considered: handling of overlapping biclusters, handling of noisy biclusters, bicluster accuracy and the nature of the output. The performance of the 'runibic' algorithm has been compared with four state-of-the-art biclustering algorithms, namely SAMBA [26], OPSM [38], xMotif [39] and Bimax [27]. Table 1 shows the experimental results: 'runibic' performs effectively on most of the criteria compared with the other state-of-the-art algorithms.

Results on real data

High-throughput gene-expression data are increasingly used in clinical research [40]. Very few biclustering algorithms have been applied to RNA-Seq data, and the runibic algorithm had not previously been applied to it; here we apply runibic to RNA-Seq data for the first time. For the experiments, we have used normalised datasets in the form of FPKM and RPKM values. Experimental evaluation has been performed by applying the runibic biclustering algorithm to several RNA-Seq datasets, described in Table 2, using the R package 'runibic'. The performance evaluation targets issues related to biclustering and hub genes: biologically significant biclusters, the biological processes involved, important hub genes and the involvement of hub genes in biological processes. Table 3 shows the results of gene set enrichment analysis for the runibic algorithm on the RNA-Seq datasets GSE49712 and GSE40419, reporting the number of biclusters extracted and the number of biclusters enriched with GO terms at p-values <0.01 and <0.001. The lower the p-value, the higher the biological significance of a bicluster. Since the genes in a bicluster are co-expressed across a subset of conditions, they are condition-specific co-expressed genes. Fig. 2 shows that the percentage of significant biclusters is far higher than the percentage of insignificant ones. At p-value <0.01, 74 and 55% of the biclusters are significant on datasets GSE49712 and GSE40419, respectively; at p-value <0.001, the figures are 53 and 41%. The results show that the runibic algorithm extracts more significant biclusters on both datasets than WCGNA, and is therefore an efficient algorithm for extracting biologically significant biclusters. The significant biclusters have then been used for identifying the hub genes. As a demonstration of the proposed approach, we present the process of finding hub genes for the extracted bicluster 'Bic295'. Fig. 3 shows the heatmap of the biologically significant bicluster Bic295 extracted from dataset GSE40419, in which the vertical axis represents the genes and the horizontal axis the experimental conditions. Bic295 contains a total of 10 genes behaving co-expressively across 28 conditions.
These 10 genes are co-expressed across those 28 conditions only, not across all 164 conditions. Genes are involved in various biological processes, and it is not necessary that all co-expressed genes take part in the same process; the same gene may also be involved in more than one biological process. Fig. 4 shows the genes of Bic295 involved in several biological processes at p-value <0.01. Genes PCDHA1, PCDHA2, PCDHA3, PCDHA5, PCDHA6, PCDHA7, PCDHA8, PCDHA9, PCDHA10 and PCDHAC1 are involved in the biological adhesion process, and genes PCDHA1, PCDHA2, PCDHA3, PCDHA5, PCDHA6, PCDHA7, PCDHA8, PCDHA10 and PCDHAC1 are involved in the developmental process and the multicellular organismal process. These relationships between genes and biological processes play a key role in extracting biologically significant insights.

GCN construction: The GCN has been constructed from the gene correlation matrix. Fig. 5 shows the corplot of the gene correlation matrix of Bic295, which visualises the correlation between the genes: dark blue indicates strong positive correlation (value close to 1) and dark red indicates negative correlation (value close to −1). The figure shows that the genes in Bic295 are strongly correlated across the 28 conditions, all appearing in dark blue. Fig. 6 shows the GCN for Bic295. Throughout the approach, the GCN is constructed with a correlation threshold α = 0.95 to obtain more accurate results; for Bic295, all genes satisfy this threshold. Edges pointing back to the same node have been removed. In the same manner, co-expression networks have been constructed for all significant biclusters, and all of them are used for hub gene identification. Hub genes of the biologically significant biclusters have been identified as the genes with the highest degree of connectivity in the GCN. These hub genes are condition-specific because biclusters include only a subset of conditions, not all of them. From Fig. 6, the degree of each gene of the network has been calculated and is reported in Table 4. The degree of gene PCDHA9 is 9, the highest among all genes at the correlation threshold α = 0.95; hence PCDHA9 is the hub gene of Bic295. Table 5 shows, for dataset GSE49712, the clusters extracted by WCGNA, the identified hub genes and the number of conditions across which the genes of each cluster are co-expressed. WCGNA extracted only five clusters from the GSE49712 dataset, and all of them span ten conditions, which is the total number of experimental conditions of GSE49712, since WCGNA clusters genes across all experimental conditions. The hub genes identified from these clusters are KIAA0415, ZNF638, BHLHA9, GDF6 and ADAM30, and all of them refer to all the experimental conditions. Table 6 shows, for the top significant biclusters on dataset GSE49712, up to five hub genes identified with the proposed approach; the biclusters are ranked by p-value. Along with the hub genes, the third column shows the number of conditions across which the genes of each bicluster are co-expressed.
Hence, we can say that the identified hub genes are specific to the mentioned conditions. The GSE49712 dataset consists of ten conditions, but the obtained hub genes are specific to fewer than ten conditions. Table 7 shows, for dataset GSE40419, the clusters extracted by WCGNA, the identified hub genes and the number of conditions across which the genes of each cluster are co-expressed. WCGNA extracted a total of 87 clusters from the GSE40419 dataset; here, only the top ten clusters have been considered. All clusters span 164 conditions, which is the total number of experimental conditions of the GSE40419 dataset, since WCGNA clusters genes across all experimental conditions. The hub genes identified from these clusters are FCGBP, KANK2, SAMD7, ANKS3 and ZFP14, and all of them refer to all the experimental conditions. Table 8 shows, for the top significant biclusters on dataset GSE40419, up to five hub genes identified with the correlation threshold α = 0.95; along with the hub genes, the third column shows the number of conditions across which the genes of each bicluster are co-expressed. The GSE40419 dataset consists of 164 conditions, but the obtained hub genes are specific to fewer than 164 conditions. From Tables 5-8, the hub genes identified by the proposed approach are condition-specific, whereas the hub genes identified by the WCGNA approach are not: WCGNA-based hub genes rely on the clustering concept, while the hub genes of the proposed approach rely on biclustering and are therefore condition-specific. The experimental results show that the proposed approach produces condition-specific hub genes from gene-expression data, which is valuable in various clinical applications, and these hub genes can be used as diagnostic and prognostic markers for disease after exhaustive analysis and testing by researchers. In this manner, hub genes for all the significant biclusters have been identified. Comparing the two approaches, the hub genes identified using WCGNA are based on all conditions of the gene-expression data, since WCGNA uses clustering, whereas the proposed approach identifies condition-specific hub genes using biclustering. Owing to the condition-specific hub genes, researchers can concentrate on the specific conditions of interest for further analysis rather than on all conditions; the time spent on unnecessary analysis is saved and the accuracy of the result increases. In this way, the proposed approach will be helpful to researchers who want to focus their analysis on specific conditions of interest.

Hub genes identification using PPI network: To make the results of the proposed approach more robust, we have integrated it with the PPIN. Hub genes are identified from the PPIN of the respective biclusters, constructed using the STRING software tool (http://www.networkanalyst.ca) with a confidence score greater than 400. The PPIN of each significant bicluster has been constructed and its highly connected genes identified; these are the hub genes for that set of genes across the corresponding subset of conditions. Fig. 7 shows the PPIN of bicluster Bic295, in which gene PCDHA2 is the highly connected gene, shown in red.
Hence, we have considered it as the hub gene. Table 9 shows the hub genes identified using the PPIN for the top significant biclusters of dataset GSE49712 with a confidence score greater than 400, and Table 10 shows the corresponding hub genes for dataset GSE40419.

Common hub genes: We have identified condition-specific hub genes using both the GCN and the PPIN for the top significant biclusters of both datasets. Fig. 8 shows a Venn diagram of the intersection of the hub genes identified from the two networks (GCN ∩ PPIN) for bicluster 'Bic295'. The genes PCDHA1, PCDHA2, PCDHA5, PCDHA8 and PCDHA9 are the most highly connected genes of the GCN of 'Bic295', and PCDHA2 is the only hub gene of its PPIN. After intersecting the two sets, PCDHA2 is the only gene that is highly connected in both networks and is therefore the robust hub gene of 'Bic295'. In this way, we have identified the robust condition-specific hub genes of the significant biclusters. For the top significant biclusters of dataset GSE49712, Table 6 reports the condition-specific hub genes obtained from the GCN and Table 9 those obtained from the PPIN. From Tables 6 and 9, some of the hub genes are common to both networks for the respective biclusters; the results are illustrated as Venn diagrams in Fig. 9, obtained by intersecting the hub genes of the two networks (GCN ∩ PPIN) for each bicluster. From these results, PRKAR1A is the hub gene obtained for bicluster Bic22, CBX5 for Bic50, OAS2 and STK4 for Bic51, ACTRT1 for Bic52, and ARHGAP5 and ARL6 for Bic53. All of these are robust condition-specific hub genes for the respective biclusters, since they are identified by both the GCN and the PPIN. For the top significant biclusters of dataset GSE40419, Table 8 reports the condition-specific hub genes obtained from the GCN and Table 10 those obtained from the PPIN. From Tables 8 and 10, some of the hub genes are again common to both networks; the results are illustrated in Fig. 10. From these results, LRRC18 and STK33 are the hub genes obtained for bicluster Bic34, DNAI2 for Bic69, PACRG and LRRC10B for Bic90, CCT5 for Bic100, and CCT7 for Bic319. All of these are robust condition-specific hub genes for the respective biclusters, since they are identified by both the GCN and the PPIN.

Application to drug discovery

New medications are discovered through the process of drug discovery, and drug target identification is its first step. Abnormal changes in the expression levels of genes lead to diseases, so drug targets can be identified by examining the expression profiles of genes under specific conditions. Nowadays, gene-expression data are available in very large volumes.
Gene-expression data are among the most widely used biological data in clinical research. The function of thousands of genes and the mechanisms underlying diseases can be identified by analysing gene-expression data, and proper analysis helps to solve many biological problems. Finding target genes related to a specific disease in large-scale gene-expression data is a challenging task, and the proposed approach can play an important role here. We identify condition-specific hub genes from the gene-expression data of specific diseases; these hub genes are specific to a subset of conditions rather than to all conditions of the dataset. The pharmacologist can therefore concentrate on the hub genes of interest, and the researcher can focus the analysis on the relevant subset of conditions instead of on all conditions. Owing to the condition-specific hub genes, exhaustive analysis can be restricted to the specific conditions, making the process more efficient and more accurate and yielding more accurate results and predictions. After rigorous validation and testing, these hub genes can act as drug targets for particular diseases. In this way, the proposed approach can contribute to the process of drug discovery.

Conclusions

In this paper, we have proposed a new approach for mining condition-specific hub genes using a biclustering algorithm and have summarised the role of hub genes in the drug discovery process. High-throughput RNA-Seq gene-expression data have been used as input to the biclustering algorithm, and the runibic biclustering algorithm has extracted the biologically significant biclusters efficiently. At p-value <0.01, 74 and 55% of the biclusters were enriched with GO terms on gene-expression datasets GSE49712 and GSE40419, respectively; at p-value <0.001, the figures were 53 and 41%. The results show that the runibic biclustering algorithm performed effectively on the various performance criteria, such as overlap, noise, output stability and accuracy, on the synthetic dataset, and that on the real datasets the biclusters remain biologically significant even at very low p-values. The significant biclusters have subsequently been used for the construction of the GCN and the PPIN. For the first time, the GCN has been constructed from the difference matrix and the gene correlation matrix of each significant bicluster. Hub genes have been extracted from each GCN and PPIN on the basis of connectivity degree, and finally the hub genes common to the GCN and the PPIN have been identified; these common genes are considered the more robust condition-specific hub genes of the respective biclusters. In this manner, condition-specific hub genes have been extracted from all the significant biclusters. The extracted hub genes are more relevant to a specific subset of conditions than the hub genes identified with the clustering concept, and they can be used in many clinical applications as prognostic and diagnostic markers. On the basis of these observations, the runibic algorithm is found to perform effectively and efficiently on RNA-Seq data.
The identified hub genes represent the functionality of the biclusters and are involved in various biological processes and pathways. Condition-specific hub genes will be very helpful in further research, saving the time of exhaustive analysis and increasing accuracy. In the future, the observed findings can be used for various biomedical applications such as drug discovery, disease diagnosis, regulatory gene identification, pathway analysis and biomarker identification. Therefore, the proposed approach can be useful for identifying condition-specific hub genes related to any disease efficiently and accurately.

Acknowledgments

The authors are thankful to the Department of Computer Science and Engineering, VNIT, Nagpur (MS), India, for providing the resources and support during the course of this research. The authors are also very thankful to the Ministry of Electronics and
mLST8 Promotes mTOR-Mediated Tumor Progression The activity of the mechanistic target of rapamycin (mTOR) is elevated in various types of human cancers, implicating a role in tumor progression. However, the molecular mechanisms underlying mTOR upregulation remain unclear. In this study, we found that the expression of mLST8, a required subunit of both mTOR complex 1 (mTORC1) and complex 2 (mTORC2), was upregulated in several human colon and prostate cancer cell lines and tissues. Knockdown of mLST8 significantly suppressed mTORC1 and mTORC2 complex formation, and it also inhibited tumor growth and invasiveness in human colon carcinoma (HCT116) and prostate cancer (LNCaP) cells. Overexpression of mLST8 induced anchorage-independent cell growth in normal epithelial cells (HaCaT), although mLST8 knockdown had no effect on normal cell growth. mLST8 knockdown reduced mTORC2-mediated phosphorylation of AKT in both cancer and normal cells, whereas it potently inhibited mTORC1-mediated phosphorylation of 4E-BP1 specifically in cancer cells. These results suggest that mLST8 plays distinct roles in normal and cancer cells, depending upon its expression level, and that mLST8 upregulation may contribute to tumor progression by constitutively activating both the mTORC1 and mTORC2 pathways. The molecular mechanisms underlying regulation of mTOR activity have been elucidated by a co-crystal structure of a complex of mTOR and mammalian lethal with SEC13 protein 8 (mLST8), also known as GbetaL [22]. mLST8 is a common subunit of both mTORC1 and mTORC2, and is necessary for activation of the mTOR kinase [23]. The structure of the mTOR-mLST8 complex revealed that mLST8 directly stabilizes the active site of mTOR, supporting the idea that mLST8 plays a critical role in mTOR kinase activity. Analyses of mLST8-knockout mouse embryos and fibroblasts have shown that mLST8 is required for formation of mTORC2, suggesting a specific role for mLST8 in mTORC2 function as well [24]. Furthermore, mLST8 can associate with other cellular proteins, such as CAD, a multifunctional protein involved in pyrimidine synthesis, which is phosphorylated by S6K [25,26]. Thus, mLST8 is critical for the proper regulation of mTOR pathways, but its precise function still needs to be defined. Also, the contribution of mLST8 to carcinogenesis and/or progression of human cancers, particularly those in which mTOR pathways are deregulated, remains uncharacterized. We previously found that expression levels of certain components of mTOR complexes, such as mTOR itself and RICTOR, are upregulated in various human cancers as a result of silencing of specific microRNAs [9,10]. In this study, we show that mLST8 is also upregulated in several human colon and prostate cancer cells/tissues, in which it contributes to tumor growth and invasion. Upregulated mLST8 is required for activation and assembly of both mTORC1 and mTORC2 in cancer cells, although perturbation of mLST8 does not affect proliferation of normal cells. Our results suggest that mLST8 plays distinct roles in normal and cancer cells, depending on its expression level, and that upregulation of mLST8 contributes to tumor progression by activating both the mTORC1 and mTORC2 pathways. Expression of mLST8 is upregulated in various colon and prostate tumors To examine the functional relevance of mLST8 to human cancers, we first analyzed expression levels of mLST8 protein in human colorectal primary tumors. 
Western-blot analyses revealed that mLST8 had a tendency to be upregulatied in five out of ten cancerous tissues relative to the levels in normal tissues (Fig 1A). We also observed that mLST8 upregulation was associated with that of mTOR. The intensity of mLST8 immunoreactivity was also higher in cancerous lesions than in normal tissues in 16 out of 20 samples examined [ Fig 1B (i)]. In these clinical samples, the invading edges of tumors exhibited strong mLST8 expression [ Fig 1B (ii)(iii)(iv)]. We next examined mLST8 expression in several lines of colon and prostate cancer cells ( Fig 1C). Western-blot analysis revealed a marked upregulation of mLST8 in all cancer cell lines tested. Similarly, protein levels of other mTOR complex components, such as mTOR, RICTOR, RAPTOR, and mSIN1, were also upregulated in these cells. Furthermore, RT-PCR analysis of mLST8 mRNA levels revealed that mLST8 expression is regulated at the level of transcription in colon cancers (Fig 1D, upper panels). In prostate cancers, however, there was no significant change in the levels of transcripts between cancer and normal cells, suggesting that mLST8 expression may instead be regulated by protein stability in these cells (Fig 1D, lower panels). These results suggest that mLST8 is upregulated coordinately with mTOR complex components in some human tumors and cancer cell lines. mLST8 regulates tumor growth in vitro and in vivo To evaluate the role of mLST8 upregulation in cancer cells, we examined the effects of shRNAmediated knockdown of mLST8 (mLST8-KD) on tumor growth of HCT116 (Fig 2) and LNCaP cells (Fig 3). mLST8-KD suppressed the growth rate of HCT116 cells (Fig 2A and 2B). More notably, compared to control KD, mLST8-KD induced a marked reduction in anchorage-independent growth of HCT116 cells and LNCaP cells (Figs 2C and 3A). Overexpression of HA-tagged mLST8 in mLST8-KD cells restored the ability of anchorage-independent growth to a level even higher than that of control cells (Fig 2D and 2E). Overexpression of mLST8 also promoted colony-forming activity in non-transformed human keratinocyte HaCaT cells, although the colony-forming activity in these cells was substantially weaker than those in cancer cells (Fig 4). Furthermore, the effect of mLST8-KD was evident in vivo: mLST8-KD in HCT116 cells potently suppressed tumorigenesis in nude mice (Fig 2F). These results suggest that the expression levels of mLST8 are tightly associated with the potential for tumor growth. mLST8 regulates mTORC1/2 activity in cancer cells To investigate the mechanism underlying mLST8-mediated promotion of tumor growth, we assessed the effects of mLST8-KD on the formation of mTOR complexes and the activity of downstream signaling components in HCT116 and LNCaP cells. Immunoprecipitation assays with anti-mTOR antibody revealed that mLST8-KD decreased the amounts of RICTOR and RAPTOR in mTOR complexes, with a larger reduction in RICTOR (Figs 3B and 5A). These results suggest that mLST8 is required for stable formation of both mTORC1 and mTORC2, although it preferentially contributes to mTORC2 formation. Western-blot analyses of the phosphorylation status of downstream components of mTOR pathways revealed that mLST8-KD induced significant reduction in phosphorylation of AKT and 4E-BP1 in both cell types, whereas the phosphorylation status of S6K and SGK1 was unchanged (Figs 3C and 5B). These data suggest that mLST8 upregulation contributes to promotion of mTORC1/2 formation and subsequent phosphorylation of AKT and 4E-BP1. 
mLST8 does not affect cell proliferation of normal cells We then examined the effect of mLST8-KD on the growth of normal and immortalized human keratinocytes (HaCaT). Although mLST8 was effectively knocked down in these cells (Fig 6B), mLST8 downregulation did not affect growth rate (Fig 6A). Under these conditions, phosphorylation of S6K and AKT was slightly decreased by mLST8-KD, whereas phosphorylation of 4E-BP1 was almost unchanged (Fig 6B). These observations suggest that perturbation of mLST8 is not crucial for growth of normal cells, and that the role of mLST8 in regulating mTOR pathways may differ between normal and cancer cells. mLST8 regulates invasiveness of cancer cells In addition to exerting effects on tumor growth, mLST8-KD also induced dramatic morphological changes in HCT116 cells. Cell staining for actin fibers (F-actin) and paxillin, a marker of focal contact, showed that mLST8-KD caused disruption of stress fibers and attenuated formation of focal contacts (Fig 7A). Previously, similar effects were observed by knockdown of Rictor in these cells [10]. Taken together with these observations, this result suggests that mLST8 upregulation promotes cytoskeletal reorganization and formation of focal contacts, potentially through activation of the mTORC2 pathway. Because the ability to form focal contacts is functionally linked to invasiveness of cancer cells, we examined the effect of mLST8-KD on in vitro invasive activity of HCT116 cells using a Matrigel-based chamber assay. mLST8-KD strongly suppressed the invasive activity of HCT116 cells, and the expression of HA-tagged mLST8 in mLST8-KD cells restored their invasive activity (Fig 7B and 7C). These findings suggest that mLST8 upregulation increases the invasive potential of cancer cells. Discussion In this study, we addressed the role of mLST8, a requisite component of mTOR complexes, in tumor progression. We found that mLST8 is upregulated in some human cancer tissues and cells, and that upregulated mLST8 promotes mTORC1/2 formation and induces activation of AKT and phosphorylation of 4E-BP1, resulting in promotion of tumor growth as well as invasive potential of cancer cells. Our findings provide the first evidence that mLST8 is upregulated in a subset of human cancers; however, the molecular mechanisms underlying mLST8 upregulation are currently unknown. In colon cancers, the levels of mLST8 transcripts were elevated, suggesting that activation of some transcription factors and/or silencing of particular microRNAs may be involved in mLST8 upregulation, as observed for other components of the mTOR complexes [9,10,[27][28][29][30][31]. In the case of prostate cancers, however, there was no significant change in the expression of mLST8 transcripts between normal and cancer cells. Previous studies suggested that upregulation of some components of mTORC1 and mTORC2 activates mTOR signaling, potentially due to mutual stabilization between components of mTORC1 and mTORC2 [32]. We also observed that mLST8 upregulation was associated with upregulation of other components of mTOR complexes, such as mTOR itself and RICTOR/RAPTOR. Therefore, it is possible that mLST8 upregulation can be attributed to protein stabilization, which is in turn caused by upregulation of binding partners that are upregulated by other mechanisms, e.g., by micro-RNA silencing [9]. On the other hand, several mutations in the mLST8 gene have recently been identified in cancer patients based on information in the Cosmic and CBio cancer genome databases [33]. 
The potential contributions of these mutations to mLST8 upregulation also deserve further investigation. Despite the potential importance of mLST8 in regulating mTOR signaling, knockdown of mLST8 had no effect on the growth of normal epithelial cells. Also, mLST8 had little effect on phosphorylation of S6K and AKT in these cells (Fig 6). Ablation of mLST8 in mice has revealed that mLST8 is not essential for normal development [24]. These observations suggest that mLST8 plays a dispensable role in regulating mTOR function under normal conditions. By contrast, mLST8 knockdown in cancer cells led to reduction in formation of mTORC1/2 and phosphorylation of AKT and 4E-BP1, consistent with the results of a previous study (Figs 3 and 5, [24]). While the reason for the difference of phosphorylation between S6K and 4E-BP1 as mTORC1 substrate is unclear, it may relate to an independent pathway of regulation of S6K1 and 4E-BP in cancer cells [34]. We also observed that overexpression of mLST8 induces colony-forming activity in non-transformed HaCaT cells, accompanied by a slight increase in phosphorylation of S6K, AKT, and 4E-BP1 (Fig 4). Therefore, it is likely that upregulation of mLST8 induces full activation of mTORC1/2, resulting in activation of the downstream substrates that are required for tumor progression. It is well known that phosphorylation of 4E-BP1 is responsible for tumor progression in various types of cancer [35]. Upregulated mLST8 induces phosphorylation of 4E-BP1, thereby stimulating cap-dependent translation of genes involved in cell growth and ultimately promoting of tumor progression. In this context, mLST8 may play distinct roles in normal and cancer cells, depending upon its expression levels. Recent analysis of the mTOR-mLST8 complex structure revealed that the binding of mLST8 contributes to stabilization of the active site of mTOR [22]. In this study, we showed that mLST8 knockdown induces dissociation of mTORC1/2 complexes in cancer cells. Therefore, it is possible that upregulation of mLST8 increases the population of fully active and stable mTOR complexes, whereas reduction in mLST8 levels induces structural change in mTOR and affects its binding affinity for other components such as RAPTOR and RICTOR, as well as its substrates. In fact, affinity for 4E-BP1 is dramatically reduced when RAPTOR is absent from mTORC1 [22,36]. This notion is further supported by recent studies demonstrating that phosphorylated RAPTOR and SIN1 promote binding between components of the mTOR complex and its substrates [33,37]. Because mLST8 is a required subunit of mTOR kinase, dissociation of mLST8 from mTOR complex may induce critical changes in the catalytic activity and substrate specificity of mTOR complexes. The distinct functions of mTOR in cancer and normal cells may be due to structural changes in mTOR complexes that are dependent on mLST8 levels. In conclusion, we have demonstrated a crucial role for mLST8-mediated upregulation of the mTOR pathway in promoting tumor progression. Our study provides new insights into the regulatory mechanisms of the mTOR signaling pathway. Further analyses of mLST8-mediated regulation of mTOR pathways may provide new targets for therapeutic intervention in a wide variety of human cancers. 
Cells and materials A human keratinocyte cell line (HaCaT), human colon cancer cells (HCT116, HCT15, HT29 and SW480), two different types of FHC (normal human colon cells), human prostate cancer cells (PC3, DU145, and LNCaP), and normal human prostate cells (PNT1A and PNT2A) were obtained from the American Type Culture Collection (ATCC). PC3, HCT15, and HT29 cells were cultured in Dulbecco's modified Eagle's medium (DMEM). PNT1A, PNT2A, LNCaP, and DU145 cells were cultured in RPMI medium. HCT116 cells were cultured in McCoy's 5A medium. All media were supplemented with 10% fetal bovine serum (FBS). FHC cells were cultured in DMEM/Ham's F-12 (1:1) with 10% FBS, 5 μg/ml insulin, 5 μg/ml transferrin, 100 ng/ml hydrocortisone, and α-MEM (minimal essential medium) with 10% FBS. Frozen colon tissues were divided into tumor (T) and non-cancerous (N) regions as defined by two pathologists (JI and EM). The research protocol for the collection of human samples was approved by the ethical review board of the Graduate School of Medicine Osaka University, Japan. Informed consent was obtained from all patients in writing before enrollment in the study. Growth assay Cells were seeded at 5 × 10 2 cells/well in 96-well plates, and then incubated for the indicated times. At the end of time points, cells were incubated with 10 μl per well of WST-1 assay reagent (Roche). After 30-min incubation, absorbance was measured at 450 nm. Each experiment was repeated three times. Immunohistochemistry Histologic specimens were fixed in 10% formalin and processed routinely for paraffin embedding. Histological sections (4 μm thick) were stained with hematoxylin and eosin and reviewed by two pathologists (JI and EM) to define cancerous and corresponding normal tissues. An immunoperoxidase procedure was performed on the paraffin-embedded sections, as described previously [9]. After antigen retrieval using a Pascal pressurized heating chamber (Dako A/S, Glostrup, Denmark), the sections were incubated with anti-mLST8 antibody diluted 1:50. Cells were then treated with a ChemMate EnVision kit (Dako). Diaminobenzidine (Dako) was used as the chromogen. As a negative control, staining was carried out in the absence of primary antibody. Stained sections were evaluated independently by two pathologists (JI and EM). Immunoprecipitation Cells were lysed in ice-cold lysis buffer [40 mM HEPES (pH 7.5), 120 mM NaCl, 1 mM EDTA, 10 mM pyrophosphate, 10 mM glycerophosphate, 50 mM NaF, 0.3% CHAPS] containing protease cocktail (Nacalai Tesque). After clearing the lysate by centrifugation at 13,000 × g for 10 min, 4 μg of immunoprecipitating antibody was added to the supernatant. After 1.5-hr incubation at 4°C, 30 μl of 50% slurry of protein G-Sepharose was added, and the mixture was incubated for 1 hr at 4°C, after which immunoprecipitates were washed four times with lysis mLST8 Promotes Tumor Progression buffer. Samples were resolved by SDS-PAGE, and proteins were transferred to PVDF and subjected to immunoblotting as described above. Invasion assay Invasion assays were performed as described [38]. Briefly, a suspension of 5 × 10 4 cells in serum-free medium was loaded into the upper well of BioCoat Matrigel Invasion Chambers (BD Science), and conditioned medium from NIH3T3 cells (as a chemoattractant) was loaded in the lower well. After 48-hr incubation, non-invaded cells were removed with a cotton swab, and migrated cells were fixed with methanol, stained with toluidine blue, and counted. 
Lentiviral-mediated shRNA

The control vector and the vectors carrying shRNAs against human mLST8 (ID: TRCN0000039759, TRCN0000039761 and TRCN0000039762) were purchased from Sigma-Aldrich. Purified lentivirus was produced according to the manufacturer's instructions, and the resulting stock cells were used in each experiment.

Tumorigenesis assays

Immunodeficient mice (BALB/c Slc-nu/nu, Japan SLC, Inc.) were subcutaneously injected at a single site with 1 × 10^6 cells suspended in 200 μl of serum-free McCoy's 5A. Tumors were monitored every 2 or 3 days, and tumor volumes were calculated using the following formula: 0.5 × L × W^2. The mice were sacrificed after 16 days of monitoring, with tumor volumes below 1 cm^3. The mice used in this study were housed in environmentally controlled rooms of the animal experimentation facility at Osaka University and sacrificed under deep anesthesia by inhalation of 4% isoflurane. Experiments were conducted under the applicable laws and guidelines for the care and use of laboratory animals at the Research Institute for Microbial Diseases, Osaka University, and were approved by the Animal Experiment Committee of the Research Institute for Microbial Diseases, Osaka University.
A Reciprocal Heuristic Model for Diffuse Scattering from Walls and Surfaces Diffuse scattering of electromagnetic waves from natural and artificial surfaces has been extensively studied in various disciplines, including radio wave propagation, and several diffuse scattering models based on different approaches have been proposed over the years, two of the most popular ones being Kirchhoff Theory and the so-called Effective Roughness heuristic model. The latter, although less rigorous than the former, is more flexible and applicable to a wider range of real-world cases, including non-Gaussian surfaces, surfaces with electrically small correlation lengths and scattering from material inhomogeneities that are often present under the surface. Unfortunately, the Effective Roughness model, with the exception of its Lambertian version, does not satisfy reciprocity, which is an important physical-soundness requirement for any propagation model. In the present work, without compromising its effectiveness and its simple and yet sound power-balance approach, we propose a reciprocal version of the Effective Roughness model, which can be easily implemented and replaced to the old version in ray-based propagation models. The new model is analyzed and compared to the old one and to other popular models. Once properly calibrated, it is shown to yield similar - if not better - performance with respect to the old one when checked vs. measurements. I. INTRODUCTION Diffuse scattering (DS) of radio waves, intended here as non-specular reflection from terrain, objects and building walls surfaces due to surface roughness or material irregularities, has been studied for years in many application fields such as remote sensing and optics. With reference to radio propagation in urban environment, assuming flat, smooth and homogeneous building walls or indoor furniture panels, propagation can be conveniently analyzed using the Geometrical Optics (GO) approximation [1], where radio wave interactions can be modeled as specular reflections, transmissions and edge diffractions. However, perfectly smooth slabs are rarely present in reallife, especially in dense urban areas where building walls can show relevant deviations from smooth homogeneous layers, such as compound materials, windows frames, metal reinforcements, pillars, rough plaster and brick surfaces, cables, advertising boards, etc. Similar considerations hold true for indoor walls and furniture. In fact, some investigations showed that DS due to such details -often disregarded in building maps and databases -can be an important propagation mechanism in urban environment [2]- [5]. In particular, DS has been shown to generate a large part of the time-domain, angle-domain and polarization dispersion of the multipath radio channel in most environments [6]- [9], and the knowledge of this phenomena can be exploited in the design of MIMO wireless links and to implement advanced beamforming strategies [10]- [12]. Moreover, DS has been shown to play a prominent role even in the determination of the actual RF coverage level, especially in Non Line of Sight (NLoS), millimeterwave frequency applications [13]. Recent studies have also highlighted the importance of DS from rough surfaces in Terahertz wireless communications links [14]- [16]. Therefore, accounting for specular reflection, transmission and diffraction is not sufficient: analysis and modeling of diffuse scattering is mandatory to achieve a complete understanding of urban radio propagation. 
The most widely known diffuse scattering models available in the literature only deal with surface roughness, and include Kirchhoff Theory, the Small Perturbation Method and the Integral Equation Method [17], [18]. The most popular approach to DS is Kirchhoff Theory, based on the Beckmann-Kirchhoff theory for scattering of incident plane waves from Gaussian rough surfaces described in terms of the roughness standard deviation and correlation distance [17]. Another diffuse scattering approach developed specifically for building walls and derived from physical optics is proposed in [19]: here the assumption is that non-specular scattering from the façades of large buildings is dominated by windows and decorative masonry, whose placement tends to be nearly periodic. However, Kirchhoff Theory is not applicable to non-Gaussian surface roughness, to strong surface irregularities where the roughness correlation length is comparable to, or smaller than, the wavelength (e.g. indentations), or when the surface size is comparable to, or smaller than, the correlation length. Moreover, none of the cited models is suitable for cases where internal material irregularities have a significant impact: the possibility for radio waves to penetrate inside the wall, undergo scattering interactions due to the internal inhomogeneities and re-emerge with nearly random propagation direction and characteristics must also be accounted for. Therefore, in more recent years heuristic models like the Effective Roughness (ER) model [4] have been proposed to overcome the foregoing limitations. The ER model is aimed at modeling non-specular scattering from surfaces, but its parameters are not actual surface roughness parameters as in the Kirchhoff model; they are "effective" parameters that must also account for the more general irregularities and details described above, hence the name "Effective Roughness" model. Differently from the Kirchhoff model, the specular reflected wave and the scattered wave are treated from the beginning as distinct waves, where the attenuation of the former is due to part of its power being diverted into the latter by the irregularities. This allows a straightforward, "plug-and-play" integration into ray-based models, where specular reflection and transmission are implemented as phase-coherent waves that follow GO theory, albeit with a proper attenuation, while diffuse scattering can have different spatial and polarization characteristics. The ER model is physically consistent, as it is based on a power balance between specular reflection, transmission and scattering. It is flexible, because the scattering pattern can be chosen among several different options, and thanks to its simplicity and low number of parameters it can be easily tuned against measurement data. After its introduction in 2007, analytical formulations of the ER model have been developed to describe the angle spread produced by DS from a single wall [20]; the model has been extended to transmitted scattering in the forward half-space (e.g. beyond a wall) [21] and has been further validated against full-wave electromagnetic simulations and measurements in reference cases [22]. The parameterization of the ER model in the mm-wave bands for different construction materials has also been discussed in [23], [24]. Furthermore, the ER model has been embedded into some commercial ray-based field prediction software tools [25].
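To illustrate the "plug-and-play" integration mentioned above, the following schematic Python sketch shows how a ray tracer can treat each wall interaction in the spirit of the ER model: the specular ray keeps its phase-coherent GO treatment with an extra amplitude reduction R, while an incoherent scattered power contribution is added using the scattering coefficient S and a normalized scattering pattern. This is only a structural illustration based on the power-balance quantities formally introduced in Section II; the function and variable names are ours.

```python
import numpy as np

ETA0 = 376.73   # free-space impedance [ohm]

def wall_interaction(E_inc, Gamma, S, dS, cos_theta_i, g_to_rx, F_norm, r_s):
    """Schematic split of a ray hitting a wall element of area dS (ER-model spirit).
    E_inc       : incident field amplitude at the wall element (GO ray) [V/m]
    Gamma       : modulus of the smooth-wall reflection coefficient
    S           : effective-roughness scattering coefficient
    cos_theta_i : cosine of the incidence angle (power actually intercepted by dS)
    g_to_rx     : value of the scattering pattern toward the receiver direction
    F_norm      : integral of the pattern over the back-scattering half-space
    r_s         : distance from the wall element to the receiver [m]
    Returns the coherent specular field amplitude and the incoherent scattered
    power density at the receiver."""
    R = np.sqrt(1.0 - S**2)                  # reflection reduction factor
    E_specular = R * Gamma * E_inc           # phase-coherent, follows GO as usual
    P_div = (S * Gamma)**2 * abs(E_inc)**2 / (2 * ETA0) * dS * cos_theta_i  # power diverted to DS [W]
    W_scatter = P_div * g_to_rx / (F_norm * r_s**2)   # spread over the half-space [W/m^2]
    return E_specular, W_scatter
```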
Despite its strengths, the original (or legacy) ER model also has an important shortcoming: with the exception of its Lambertian scattering pattern version, it does not fully satisfy reciprocity, which means that the predicted scattered field intensity is not invariant with respect to the exchange of transmitter and receiver, as it should be according to propagation theory [26]. Although, being a heuristic model, its fitting to the actual physical process can always be adjusted through parameter calibration, non-reciprocity represents an important theoretical flaw, especially considering that its non-reciprocal, directive scattering versions have been shown to be the most suitable to describe DS from real buildings [4]. Other models similar to the ER model that satisfy reciprocity have been developed for computer graphics applications [27], [28], or have been derived from them [29]. However, such models do not distinguish specular from diffuse reflection and therefore cannot be easily implemented into existing ray-based propagation models. Moreover, although power constraints are present, such as the requirement that the back-scattered power cannot be greater than the incident power, they do not comply with a clear power-conservation balance at the surface, which is needed to minimize the number of parameters and to achieve maximum compatibility with traditional formulations based on Geometrical Optics for smooth surfaces and material slabs. In the present work, starting from the approach of the original ER model, we first develop a better and more complete mathematical derivation of its normalization factors with respect to the rather incomplete demonstration provided in [4], using Euler's Gamma and Beta functions. Then we propose a new version of the ER model that satisfies reciprocity without sacrificing the original power-balance assumptions, except to a negligible extent for grazing incidence. We also provide a discussion of reciprocity and power balance for the new ER model with respect to the original formulation, and a comparison with other reference models (e.g. Kirchhoff). Finally, the model is validated through comparison with measurements in a reference case. The paper is organized as follows. In Section II, some background on the original ER model and its formulation is provided, then the new reciprocal formulation is presented (the mathematical details are provided in the appendices). In Section III, comparisons to the legacy ER model, to other reference models and to measurements are shown and discussed. Finally, conclusions are drawn in Section IV.

A. Background on the ER Model

When a surface element dS is illuminated by an impinging electromagnetic wave, the following power balance must hold:

P_i = P_r + P_s + P_p    (1)

P_i, P_r, P_s and P_p being the incident, the reflected, the scattered and the transmitted powers, respectively (Fig. 1). The basic assumption of the ER approach is that the scattered power can be simply related to a scattering coefficient S ∈ [0, 1] as:

P_s = S^2 U^2 P_i    (2)

Depending on the value of U, S^2 represents the percentage of either the incident (U = 1) or the reflected power (U = Γ, Γ = |Ē_r|/|Ē_i| being the modulus of the reflection coefficient) that is spread in non-specular directions [4]. In the following, DS is supposed to occur at the expense of specular reflection, i.e. U = Γ is considered in eq. (2). Therefore, the power balance (1) can be written as:

P_i = (R Γ)^2 P_i + (S Γ)^2 P_i + P_p    (3)

where R is the reflection reduction factor, which is related to the so-called "Rayleigh factor" of Kirchhoff theory [17].
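The balance (1)-(3) can be checked numerically. The short sketch below computes the fractions of the incident power that go into specular reflection, diffuse scattering and transmission for given S and Γ, using the relation R = √(1 − S²) derived immediately below and assuming, as in the text, that the transmitted fraction is the same as for the smooth wall; the numerical values in the example are arbitrary.

```python
import numpy as np

def power_fractions(S, Gamma):
    """Fractions of the incident power according to (1)-(3) with U = Gamma.
    Assumes P_p / P_i is unchanged by the effective roughness, so that R = sqrt(1 - S^2)."""
    R = np.sqrt(1.0 - S**2)
    reflected = (R * Gamma) ** 2        # specular component, reduced by R
    scattered = (S * Gamma) ** 2        # power diverted into diffuse scattering
    transmitted = 1.0 - Gamma ** 2      # same fraction as for the smooth wall
    assert np.isclose(reflected + scattered + transmitted, 1.0)
    return reflected, scattered, transmitted

print(power_fractions(S=0.4, Gamma=0.6))   # example values: (0.3024, 0.0576, 0.64)
```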
By assuming that the ratio P p /P i does not depend on the degree of roughness, i.e. on the parameter S, from (3) we easily get that the reflection reduction factor is [4]: R = √ 1 − S 2 . Power balance assumptions of the legacy ER model (referred to as ER power-balance in the following), represented by eq. (2) and (3), imply the following equation where power diverted from specular reflection equals the integral of the scattered field power density over the backscattering half space, i.e. (Fig. 1) where r i , r S are the distances between the surface element dS and the source, and between dS and the observation point, respectively, ∆Ω i is the solid angle subtended from dS at the transmitter side (see Fig.1), and η = µ 0 / 0 is the free-space impedance. Moreover, the squared amplitude of the scattered field is assumed to be expressed by the following formula: represents the diffuse scattering spatial pattern. The overall scattered field is then modelled as a non-uniform spherical wave. Assuming the surface element dS in the far field region of the transmitting source, the incident field is a spherical wave, and therefore: where K i (θ i , φ i ) is a parameter depending on the source properties (transmit power, antenna gain). By substituting (5) and (6) into (4) and exploiting the expression for the solid angle ∆Ω i = dS cos θi r 2 i , the following formula can be achieved: where F (θ i , φ i ) represents the following integral expression: It can be observed that, according to (7), Ē S = 0 for any observation angle, when the incident wave is parallel to the surface element, i.e. θ i = π/2: in fact, for grazing incidence, no power is captured and then scattered by the surface. It is worth noting that, in order to have a reciprocal expression for the intensity of the scattered field Ē S , the product of the three functions in (7) needs to be reciprocal, i.e.: where g rec is a reciprocal function, i.e. a function invariant to the exchange of (θ The former version of the scattering model in [4] was aimed at a single-lobe, directive scattering pattern by means of the following choice: where the exponent α R is a tuning parameter for the directivity of the scattering pattern (the greater α R , the narrower the lobe), and ψ R is the angle between the scattering direction (θ S , φ S ) and the specular direction (Fig. 1). The following relation is also provided in [4]: By applying the power balance (4), equation (7) becomes: where F α R is the solution of the integral in (8), when (10) is enforced [4]. Note that with the chosen shape for the scattering pattern in eq. (10), F α R does not depend on the azimuth angle φ i , for symmetry reasons. A complete solution for F α R was not derived in [4]: however, two different, closed-form expressions were proposed, depending on whether α R is even or odd. Instead, a more compact and general expression for F α R is fully derived in this work, by exploiting the properties of the Euler's Beta function (Appendix A). The new closed-form solution valid for any value of α R is (see Appendix B): where x stands for the greatest integer less than or equal to x, and the ! and !! symbols stand for the factorial and double factorial functions, respectively (see Appendix A). 
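The closed form above can be cross-checked by direct numerical integration of (8). The following minimal sketch evaluates the normalization integral for the single-lobe pattern (10), i.e. ((1 + cos ψ_R)/2)^α_R, and, when the optional flag is set, for the same pattern multiplied by √cos θ_S as in the reciprocal formulation introduced in the next subsection, reproducing the check that F(θ_i)/F(0) closely follows √cos θ_i. The geometry convention (incidence plane taken as the xz-plane, specular direction at azimuth zero) is an assumption of this sketch.

```python
import numpy as np

def F_integral(theta_i, alpha_R, reciprocal=False, n=600):
    """Numerically evaluate the normalization integral (8) over the back-scattering
    half-space for g = ((1 + cos psi_R)/2)^alpha_R, optionally multiplied by
    sqrt(cos theta_s). psi_R is the angle between the scattering direction and the
    specular direction, computed here from a dot product of unit vectors."""
    theta_s = np.linspace(0.0, np.pi / 2, n)
    phi_s = np.linspace(0.0, 2.0 * np.pi, n)
    T, P = np.meshgrid(theta_s, phi_s, indexing="ij")
    spec = np.array([np.sin(theta_i), 0.0, np.cos(theta_i)])        # specular direction
    sx, sy, sz = np.sin(T) * np.cos(P), np.sin(T) * np.sin(P), np.cos(T)
    cos_psi = np.clip(spec[0] * sx + spec[1] * sy + spec[2] * sz, -1.0, 1.0)
    g = ((1.0 + cos_psi) / 2.0) ** alpha_R
    if reciprocal:
        g = g * np.sqrt(np.cos(T))
    dt, dp = theta_s[1] - theta_s[0], phi_s[1] - phi_s[0]
    return float(np.sum(g * np.sin(T)) * dt * dp)

alpha_R = 4
for deg in (0, 30, 60, 80):
    th = np.radians(deg)
    ratio = F_integral(th, alpha_R, reciprocal=True) / F_integral(0.0, alpha_R, reciprocal=True)
    print(f"theta_i = {deg:2d} deg   F/F(0) = {ratio:.3f}   sqrt(cos theta_i) = {np.sqrt(np.cos(th)):.3f}")
```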
In the case of normal incidence (θ i = 0), (13) reduces to: Looking at equations (12) and (13), it is evident that the amplitude of the scattered field Ē S 2 is non reciprocal, due to the presence in (13) of the functions (cos θ i ) m and (sin θ i ) m , which are not counterbalanced by similar terms containing the scattering elevation angle θ S . B. The new reciprocal formulation The aim is to achieve a reciprocal expression for the scattered field. To this extent, we propose the following new expression for the diffuse scattering pattern: Such scattering function is obtained by multiplying the pattern of the legacy ER model, i.e. (10), by the factor √ cos θ S . With this assumption, the scattered power tends to zero for grazing observation angles, i.e. when θ S approaches π/2. This is a necessary condition for reciprocity: in fact, according to (7), Ē S = 0 for θ i = π/2, no matter what is the observation angle θ S as the solid angle ∆Ω i goes to zero; similarly, it must be Ē S = 0 for θ S = π/2, independently from the incidence angle θ i . Besides, it can be observed that the multiplication of (10) by √ cos θ S causes a skew of the maximum of the scattering pattern with respect to specular reflection. This disaligment is of the order of a few degrees, and is more evident for grazing incidence angles, and low values of the parameter α R : some examples will be shown and discussed in Section III. Let's now discuss more in detail the reciprocity of the new model's formulation. In order for the model to be reciprocal, according to eq. (9) the following condition must be satisfied: Actually, it can be observed that F α R that would result from (14) being inserted into (8), i.e. which satisfies ER power-balance, is a monotonic decreasing function having its maximum value for θ i = 0, that can be well approximated by a function proportional to cos(θ i ), as shown in Fig. 2. This means that, if we assume F α R ∝ cos(θ i ), reciprocity is strictly satisfied, while also ER power-balance is satisfied to a good extent. In fact, Fig. 2 shows F α R (θ i ) /F α R (0) derived from (8) through numerical integration for 3 different values of the parameter α R , vs. the function √ cos θ i . It can be observed that the approximation is very good except for very grazing incidence angles (e.g. greater than 85 • ). This allows to write F α R in the form: where k (α R ) is an amplitude parameter depending only on the exponent α R . This approximation satisfies both eq. (9) (i.e. reciprocity) and, with good approximation, eq. (8) (i.e. ER power-balance). The value of k (α R ) can be determined in a straightforward way by assuming the approximation (16) as valid and solving the integral (8) for θ i = 0, as shown in Appendix C. Then, the final reciprocal expression of the scattered field when (14) is enforced and under the approximation (16) is (see Appendix C): with The expression used in (17) for k (α R ) is valid only for integer positive values of the exponent α R , as it has been computed by using the binomial theorem (see Appendix C). However, in case real positive values of the exponent α R are needed for a finer tuning of the model, k (α R ) can be calculated using the following interpolating function: C. Double lobe model, reciprocal formulation Similarly to what done in [4] for the legacy ER model, it is possible to derive a double-lobe model, where an additional lobe steered in the incidence direction is added to the scattering pattern. 
This is useful in many practical cases, where walls with big irregularities, e.g. indentations, generate a strong backscattering component in the incidence direction trough micro interactions consisting of multiple-bounce reflections (see Fig. 3). In order to obtain a reciprocal formulation for this doublelobe model, we propose the following expression for the scattered field where ψ i is the angle formed by the observation and incidence directions, α i is a parameter that accounts for the directivity of the backscattering lobe, F αi,α R is the solution of the power balance integral (8) for the double-lobe pattern, and Λ ∈ [0, 1] is a factor taking into account how the scattered power is subdivided between the two lobes. It can be easily shown (see Appendix D) that both the integrals in (19) can be approximated by a function propor- √ cos θ i . If so, also F αi,α R is proportional to √ cos θ i , and this allows to get a reciprocal expression for the scattered field. The final (reciprocal) expression of the scattered field for the double-lobe ER model is then: Note also that (17) is obtained as a particular case of (20), when Λ=1. III. COMPARISONS The new, Reciprocal ER model (RER model in the following) is discussed and compared with the legacy ER model and other models in this section. The shape of its scattering pattern lobe is shown in Fig. 4 for different incidence angles and α R values. The lobe's directivity increases with α R , as it should, and its maximum is directed toward the specular direction. However, differently from the legacy ER model, the lobe is always constrained to have a null for θ S = π/2 in order to satisfy reciprocity as explained in section II.B. Consequently, a slight drifting of the peak away from the specular direction toward lower θ S values can be observed for incidence angles greater than π/3 and low α R values. As stated above, the new formulation of the ER model was derived to satisfy reciprocity. The old ER model however, was already almost reciprocal for not-too-grazing incidence angles (up to about 40 • ), and for low values of α R , as shown in Fig. 5, where the term cos θ i /F α R (θ i , φ i ) of eq. (9) is almost constant and therefore reciprocity condition is approximately satisfied. On the other hand, it strictly respects the ER power balance, based on which it was conceived. Vice versa, the new ER model is perfectly reciprocal but its reciprocal formulation was obtained from an approximation that slightly differs from the numerical solution of the powerbalance integral (8), especially for very grazing angles of incidence, as explained in section II.B. It is worth noting however, that reciprocity is a more important requirement than ER power-balance, since the latter is based on the simplifying yet reasonable assumption that the quantity P p /P i of equation (3) does not depend on parameter S, which might not be rigorously true in real-life cases. In order to quantify the influence of the approximation in the power balance integral (8), we can introduce the powerbalance anomaly, which is normalized to the incident power P i , and defined as: where P s is the scattered power obtained through the original power balance assumptions, i.e. through solution of the integral (8) to determine the value of F α R (θ i ), whileP s is the corresponding value obtained by using the approximation Through a few simple mathematical steps, the following expression for ∆ rel can be derived: The power-balance anomaly ∆ rel of the new ER model is plotted vs. θ i in Fig. 
6, assuming S = 0.4, while the modulus of the reflection coefficient Γ was calculated with the Fresnel coefficient (TE polarization) for the case of a lossless dielectric wall with ε r = 5. It is evident that ∆ rel is very small, remaining within 1% of the incident power up to incidence angles of 85 degrees or more. It is interesting to compare the behaviour of the RER model with other models available in the literature, e.g. the Kirchhoff model for scattering from rough surfaces. The Kirchhoff model is a widely used reference model that, being physics-based, is reciprocal and necessarily satisfies physically consistent power-balance constraints. However, it has several parameters, it is valid only for surface roughness of the Gaussian type with a correlation length larger than the wavelength, and it relies on approximations that make it invalid for grazing incidence angles, as discussed for instance in [30]. The Kirchhoff scattering coefficient provided in [17] is made of two parts: a coherent specular component, derived from Radar Cross Section theory, and an incoherent diffuse component, which accounts for the non-specular contribution of the facets representing the irregular surface. For the sake of comparison with the RER model, in the following we consider only the incoherent component. In Fig. 7 the normalized scattering diagrams of the RER model and of the Kirchhoff model are compared for a GHz-band incident plane wave with θ i = π/3, the same case as in [4], Fig. 10. The following parameters are used in the Kirchhoff model: surface roughness standard deviation σ h = 1 cm and correlation length l corr = 0.5 m, which are typical literature values for a brick wall such as the one considered in [4]. For comparison, the directivity parameter α R of the RER model has been optimized to reproduce the same scattering lobe width as the Kirchhoff model, yielding α R = 65, a much higher value than the one found in [4] for the brick wall case, i.e. α R = 4. The shape of the two patterns is very similar: interestingly, the maximum is slightly tilted upward with respect to the specular direction in both cases, albeit to a lesser extent in the RER model case. However, the much greater degree of spreading observed in [4] is an indication that surface elements such as indentations and material inhomogeneities (e.g. the alternation of brick and mortar, cavities inside bricks) probably give a greater contribution to DS than mere Gaussian surface roughness. Besides the aforementioned limitations, there are additional issues that make the implementation of the Kirchhoff model in ray-based prediction tools not straightforward, as discussed for example in [31]. Moreover, as the incoherent component is computed through a series expansion, the Kirchhoff model is computationally less efficient than the RER model, by 1 to 2 orders of magnitude depending on how the series is truncated. Another possible approach to dealing with DS from irregular surfaces is based on computer graphics models, originally conceived for the rendering of photorealistic images. Such models are based on the so-called Bidirectional Reflectance Distribution Function (BRDF), which is a directional scattering coefficient. In recent years, "physically based" BRDFs have been proposed, which obey reciprocity and comply with upper-bound power constraints, such as the requirement that P i should always be greater than or equal to P s [27]. One example is the popular GGX shading model, originally introduced in [28].
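As an aside on the lobe-shape comparisons discussed in this section (Figs. 4, 7 and 8), the following minimal Python sketch evaluates, in the incidence plane, a single lobe of the legacy ER form ((1 + cos ψ R)/2)^α R and the corresponding reciprocal form with the additional √cos θ S factor introduced in Section II. The normalization constants, the Kirchhoff pattern and the GGX directional coefficient are not reproduced, so the assumed pattern forms are illustrative only; the sketch merely shows the qualitative behaviour described above (forced null at grazing and slight peak drift toward lower θ S).

```python
import numpy as np

def cos_psi_R(theta_i, theta_s):
    # Cosine of the angle between the scattering direction (theta_s, taken in the
    # incidence plane, phi_s = phi_i) and the specular-reflection direction.
    return np.cos(theta_i) * np.cos(theta_s) + np.sin(theta_i) * np.sin(theta_s)

theta_i = np.radians(60.0)                      # incidence angle > pi/3, where the drift shows up
alpha_R = 4                                     # low directivity value, as in the brick-wall case
theta_s = np.radians(np.linspace(0.0, 90.0, 91))

legacy = ((1.0 + cos_psi_R(theta_i, theta_s)) / 2.0) ** alpha_R
reciprocal = legacy * np.sqrt(np.cos(theta_s))  # extra sqrt(cos) factor forces a null at grazing

legacy /= legacy.max()                          # compare shapes only (unit peak)
reciprocal /= reciprocal.max()

print("legacy peak     at theta_s =", np.degrees(theta_s[np.argmax(legacy)]), "deg")
print("reciprocal peak at theta_s =", np.degrees(theta_s[np.argmax(reciprocal)]), "deg")
print(f"value at theta_s = 90 deg: legacy = {legacy[-1]:.3f}, reciprocal = {reciprocal[-1]:.3f}")
```

With these settings the legacy lobe peaks exactly at the specular angle and stays nonzero at grazing, while the reciprocal lobe vanishes at θ S = π/2 and peaks a few degrees below the specular direction, mirroring the behaviour noted above.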
In [29], it has been proposed to use a slightly modified version of the GGX model for radio wave propagation prediction. In the model, an equivalent roughness parameter Σ s , expressed in dB, is used: Σ s = 0 dB means maximum roughness, while Σ s < −40 dB means smooth surface with quasi-specular behaviour. In particular, in [29] it is shown that, by parameterizing the GGX model to reproduce both the specular and the diffuse components and by adding them through incoherent power sum, realistic results in good agreement with the measurements can be achieved. In Fig. 8, the directional coefficient D of the GGX model as defined in [29], is compared with the scattering patterns of the RER model and of the legacy ER model for an incidence angle θ i = 45 • and a surface with moderate roughness, i.e. Σ s = −4 dB. In such a case, the best-fit directivity parameter is α R = 8 for both the RER and the legacy ER model. The GGX and the legacy ER model have a very similar scattering diagram, and interestingly both of them do not go to zero at grazing scattering angles, differently from the RER and the Kirchhoff models. Finally, the new ER model is compared with the measurements carried out in [4] on a reference rural building wall. In the measurement, the façade of a rural building was illuminated by a Tx directive antenna pointing towards the centre of the wall, while the Rx directive antenna, also aiming at the wall centre, was moved along a semicircle in front of the wall to derive an estimate of the angular scattering pattern. Despite the use of directive antennas all the interaction mechanisms (direct path, specular reflection, diffraction, diffuse scattering) are simultaneously present to some extent, and therefore the measured pattern need to be compared with RT simulations including all mechanisms: the new RER model has been embedded in the RT simulator described in [6], similarly to what done in [4] for the legacy model. Both parameters S and α R have been optimized to get the best match with the measured scattering pattern. Results are shown in Fig. 9 for the optimum values of the parameters, i.e. S = 0.4 and α R = 2 in this case. As expected, the scattering model allows to fill the gap for those receiving positions where the coherent interaction mechanisms (specular reflection, diffraction) are weaker. The curve corresponding to a simulation without diffuse scattering (i.e. S = 0) is also reported for reference in the figure, and shows a very poor performance for the Rx locations further from the specular reflection angle (i.e. θ i = 30 • in Fig. 9). From the plot, it is evident that the proposed model, if properly parametrized, can accurately describe scattering from such a typical building wall, with an RMS Error of 1.59 dB. This result is similar to, or even better than, the one shown in [4] for the legacy ER model in the same scenario, not reported in the figure for the sake of legibility, where the best RMSE value was 1.85 dB. Similarly good results can be achieved for the other cases considered in [4]. Fig. 9. Comparison between RT simulation with the RER DS model embedded, and measurements in a rural building scenario, described in [4]. IV. CONCLUSION The Effective Roughness model, a popular model for diffuse scattering from objects and building walls that is conceived to complement ray-based radio propagation models, is reconsidered and modified in the present work in order to satisfy reciprocity, an important physical-soundness requisite. 
To this aim, the formulation of the original model has been modified to satisfy reciprocity without significantly affecting the simple and yet sound power-balance approach it is based on. The new, reciprocal version of the Effective Roughness model, which can be easily implemented and replaced to the old version in ray-based propagation models, is analyzed and compared to the old one and to other popular models in section III. Finally, comparison with some of the measurements previously considered in [4] for the validation of the original model has shown that the new one yields similar performance, if not better. APPENDIX A EULER'S GAMMA AND BETA FUNCTIONS The Euler's Gamma function definition is [32]: where z is a complex number having positive non-zero real part ( (z) > 0). We recap here some useful properties of the Gamma function that will be used in the proofs of the next appendices: Γ (n + 1) = n! (26) where n is a natural number and n!, n!! are the factorial and double factorial (or semi-factorial) functions, respectively, defined as: where x stands for the least integer greater than or equal to x. Also, it is conventionally assumed: 0! = 1 and 0!! = 1. In the following appendices we will also make use of the following binomial theorem [32]: where n k = n! k! (n − k)! In the particular case that b is equal to the constant function b(x) = 1, (29) reduces to: The Euler's Beta function is defined as [32]: where (x) > 0, (y) > 0. The following properties hold: B(x, y) = 2 The aim of this section is to prove equation (13), which is the closed-form solution of the integral (8) for the singlelobe scattering pattern of the legacy ER model (see eq. (10)), originally proposed in [4]. The integral to be solved is: Using the binomial theorem in the form (29) we obtain By applying the binomial theorem again (eq.(30)), we get Let I stand for Let us consider the first factor in (38), i.e. I 1 = π/2 0 cos j−l θ S sin l+1 θ S dθ S . Applying (33) we obtain: The second factor I 2 = 2π 0 cos l (φ S − φ i ) dφ S may be written as Using the binomial theorem we get: (41) Let be X = 2π 0 cos l−q φ S sin q φ S dφ S . This integral can be split into four parts: It is evident that if q is odd the four terms cancel each other out. The same thing happens when q is even and l is odd. On the other hand when l and q are both even, the four terms give the same value. Therefore we have Using (33) we obtain: Combining (37), (38), (39), (41) we can assert that Since X is nonzero only when the indices l and q are even, combining (44) and (45) we can write Let us consider B l + 1, j−2l+1 2 and B q + 1 2 , l − q + 1 2 . Using properties (32), (27), (26) it is simple to obtain In a similar way, using (32), (26), (25) we get By substituting (47) and (48) into (46) and rearranging some terms we obtain where the symbol x stands for the greatest integer less than or equal to x. But l q=0 cos 2l−2q φi sin 2q φi q!(l−q)! equals 1/l!, as it can be easily verified by applying the binomial theorem. Then, after this substitution we can eventually write the final formulation of F α R : This equation is also valid in the case of normal incidence, i.e. θ i = 0, if for the 0 th -order term of the second summation it is conventionally assumed, as most automatic calculators do: In such a case, for normal incidence (50) reduces to: The same result can be also obtained by directly integrating (8) for θ i = 0, which is straightforward. 
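To complement the derivation in this appendix, here is a small numerical sketch of the kind of normalization integral solved above. It assumes that the power-balance integral reduces to the plain hemispherical integral of a single lobe of the form ((1 + cos ψ R)/2)^α R, with no additional weighting factors (an assumption, since the constants of eq. (8) are not reproduced here), evaluates it by brute-force quadrature, and checks the normal-incidence case against the elementary closed form 4π(1 − 2^−(α R+1))/(α R + 1) obtained with the substitution u = (1 + cos θ S)/2.

```python
import numpy as np

def hemispherical_integral(theta_i, alpha_R, n_th=600, n_ph=1200):
    # Midpoint-rule quadrature of ((1 + cos(psi_R))/2)**alpha_R over the upper
    # hemisphere, with cos(psi_R) = cos(ti)cos(ts) + sin(ti)sin(ts)cos(phi_s - phi_i).
    th = (np.arange(n_th) + 0.5) * (np.pi / 2) / n_th
    ph = (np.arange(n_ph) + 0.5) * (2 * np.pi) / n_ph
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    cos_psi = np.cos(theta_i) * np.cos(TH) + np.sin(theta_i) * np.sin(TH) * np.cos(PH)
    lobe = ((1.0 + cos_psi) / 2.0) ** alpha_R
    d_omega = np.sin(TH) * (np.pi / 2 / n_th) * (2 * np.pi / n_ph)
    return np.sum(lobe * d_omega)

for alpha_R in (1, 4, 10, 65):
    numeric = hemispherical_integral(0.0, alpha_R)                     # theta_i = 0
    analytic = 4 * np.pi * (1 - 0.5 ** (alpha_R + 1)) / (alpha_R + 1)  # normal incidence
    print(f"alpha_R = {alpha_R:3d}: numeric = {numeric:.6f}, analytic = {analytic:.6f}")
```

Under the same assumption, calling the routine with θ i > 0 provides a numerical cross-check for the oblique-incidence closed form derived in this appendix.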
APPENDIX C SOLUTION OF THE POWER BALANCE INTEGRAL FOR THE NEW ER RECIPROCAL MODEL (SINGLE-LOBE VERSION) The aim of this section is to justify equation (17). We assume that F α R can be written in the following, approximate form, as discussed in Section II: Looking at this expression, we observe that k (α R ) is equal to F α R when θ i = 0. Thus we can assert that The value of F α R (θ i = 0) can be found by directly integrating (8) for θ i = 0, and observing that, for normal incidence, cos ψ R = cos θ S . Then, we get: Then, by applying the binomial theorem we obtain (54) Equation (54) can be rewritten as sin θ S cos j+ 1 2 θ S dθ S (55) Applying (33) in (55) we have Using properties (32), (24) we get the final expression for F α R : Eventually, eq. (17) is obtained by substituting (57) and (14) into (7). APPENDIX D SOLUTION OF THE POWER BALANCE INTEGRAL FOR THE DOUBLE-LOBE RER MODEL The aim of this section is to justify equation (20). In order to do that, let's consider the two integrals in (19) F α R = cos ψ i = cos θ i cos θ S + sin θ i sin θ S cos (φ S − φ i ) (59) We notice that only difference between (11) and (59) is in the sign of the second term. We want to show that the integrals above have the same result, for a fixed value of the exponent. Proving this is equivalent to proving that the result of the integral M defined in equation (60) does not depend on the sign ± of the term sin θ i sin θ S cos (φ S − φ i ): But 2π 0 cos l (φ S − φ i ) α R sin θ S dφ S = 0 only when the index l is even. Thus, the overall contribution of the term (±1) l is completely irrelevant since (+1) l = (−1) l = 1 when l is even. This proves that the sign ± does not change the result of M . Therefore, the 2 integrals in (58) have the same form, and the only difference between them is in the value of the exponent, either α R or α i . By adopting the same procedure as in Appendix C, we then find that: (62) and by substituting F αi,α R = ΛF α R + (1 − Λ)F αi into (18), we finally get (20).
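The central step of Appendix D, namely that changing the sign of the sin θ i sin θ S cos(φ S − φ i) term leaves the hemispherical integral unchanged because odd powers of cos(φ S − φ i) integrate to zero over a full period, can also be verified numerically. The sketch below is a minimal check under the same assumed lobe form as in the previous sketches (including the √cos θ S factor of the reciprocal model); the exact constants of (19) are not reproduced.

```python
import numpy as np

def lobe_power(theta_i, alpha, sign, n_th=500, n_ph=1000):
    # Hemispherical integral of ((1 + cos_psi)/2)**alpha * sqrt(cos(theta_s)), where
    # cos_psi = cos(ti)cos(ts) + sign * sin(ti)sin(ts)cos(phi_s - phi_i);
    # sign = +1 gives the specular lobe, sign = -1 the backscattering lobe.
    th = (np.arange(n_th) + 0.5) * (np.pi / 2) / n_th
    ph = (np.arange(n_ph) + 0.5) * (2 * np.pi) / n_ph
    TH, PH = np.meshgrid(th, ph, indexing="ij")
    cos_psi = np.cos(theta_i) * np.cos(TH) + sign * np.sin(theta_i) * np.sin(TH) * np.cos(PH)
    lobe = ((1.0 + cos_psi) / 2.0) ** alpha * np.sqrt(np.cos(TH))
    d_omega = np.sin(TH) * (np.pi / 2 / n_th) * (2 * np.pi / n_ph)
    return np.sum(lobe * d_omega)

theta_i = np.radians(55.0)
for alpha in (2, 8, 65):   # stands for either alpha_R or alpha_i
    plus = lobe_power(theta_i, alpha, +1)
    minus = lobe_power(theta_i, alpha, -1)
    print(f"alpha = {alpha:2d}: +sign = {plus:.6f}, -sign = {minus:.6f}, |diff| = {abs(plus - minus):.2e}")
```

Because the two integrals coincide, the double-lobe normalization depends only on the two exponents, which is exactly what allows F αi,α R to be written as the Λ-weighted combination used to obtain (20).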
2022-09-27T01:16:05.870Z
2022-09-04T00:00:00.000
{ "year": 2022, "sha1": "48c954add3339bb555546405582786c6fedbea07", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "48c954add3339bb555546405582786c6fedbea07", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Engineering", "Physics" ] }
145624297
pes2o/s2orc
v3-fos-license
Work Schedules and Work–Family Conflict Among Dual Earners in Finland, the Netherlands, and the United Kingdom Many European families are affected by the 24/7 economy, but relatively little is known about how working parents experience nonstandard hours. The aim of this study was to analyze the possible associations of dual earners’ work schedules and other work-related factors with their experience of time- and strain-based work–family conflict. These phenomena were examined among dual earners living in Finland, the Netherlands, and the United Kingdom, countries that differ in working time practices and policies. Multigroup structural equation modeling was used to analyze cross-cultural data on dual earners with children aged 0 to 12 years (N = 1,000). The results showed that working nonstandard schedules was associated with increased time-based work–family conflict, but only among Finnish and British parents. Poorer financial situation, working longer hours, more time spent working at very high speed, and lower work satisfaction were associated with both types of work–family conflict in all countries. study design to examine in which ways dual earners' work schedules and work-family conflict are related in different cultural contexts. Working Time Patterns: Extent of Nonstandard Working Time Since the establishment of the standardized employment relationship during the 1950s and 1960s, working conditions have changed, and the fundamental temporal institutions of the industrial working time regimes have been eroded. The demand for "just-in-time" production of the service sector challenges the free evenings, nights, and weekends that have been core temporal institutions of the traditional industrial working time regime and (male) standard working time (Garhammer, 1995;Negrey, 2012;Rubery, Ward, & Grimshaw, 2006). This is not to say that nonstandard working time is a new phenomenon. For example, there is a long history of shift work in manufacturing. However, recent decades have exposed more sectors of the economy to nonstandard working time (Presser, Gornick, & Parashar, 2008). Simultaneously with the changes in the conditions of employment, important changes have occurred in family life. Broadly speaking, across Europe the past 30 years has seen a shift from the male-breadwinner to the dualearner model, although at a varying pace in different countries (Drobnič & Blossfeld, 2001). It has even been claimed that the rise of dual-earner households is one of the most significant social trends affecting European societies (Smith, 2005). Women's employment or working time practices have not followed the ideals of the standard employment relationship, with continuous, full-time work, in most Western European countries. There are some exceptions to this, such as Finland, where continuous full-time work has also been the norm among women (Pfau-Effinger, 1998). Countries and employees are differently affected by the increase in nonstandard working time that characterizes the postindustrial working time regime. Significant differences exist between countries, sectors, and workers based on their individual characteristics, such as education, gender, and family status (De Beer, 2009;Presser et al., 2008). According to Presser (1995Presser ( , 2000, Americans are increasingly working during nonstandard times. 
Despite the lively media debate on the spread of the so-called 24/7 economy in Europe, researchers disagree over whether or not individual European countries can be characterized as 24/7 economies (e.g., De Beer, 2009;Mustosmäki, Anttila, Oinas, & Nätti, 2011). Yet it is undisputable that a considerable proportion of employees work outside traditional office hours (e.g., Parent-Thirion, Fernández Macías, Hurley, & Vermeylen, 2007). EU averages range from 17.6% for shift work, to 27.2% for evening and night work, and 39.7% for weekend work (EU Labour Force Survey, 2012a). Together with the increasing proportion of dual-earner families (Margherita, O'Dorchai, & Bosch, 2009), this means that significant numbers of European families are affected by work that is performed outside standard office hours. According to earlier research, conducted in both the United States and Europe, nonstandard work is more common in the service sector, among those with lower education, and among men and younger workers (Presser et al., 2008;Wight, Raley, & Bianchi, 2008). Richbell, Brookes, Brewster, and Woods (2011) note that shift work is most common in the manufacturing and health services sectors, whereas weekend work is most widespread in the transport, retail, health, social, and personal services sectors. Similarly, research has shown that the increase in nonstandard hours is strongly related to the expansion of the service sector (Negrey, 2012;Presser et al., 2008). In some sectors, nonstandard hours have even become more the rule than standard day work. For example, in Finland the proportion of health service employees working nonstandard hours has exceeded those working a standard day, with 56% of the personnel working nonstandard hours (Lehto & Sutela, 2008). Prior results on the associations between family status and nonstandard hours are conflicting. Presser (2003) showed that nonstandard work in the United States was particularly typical in families with children, low-income families, and single-parent families. However, in a study of seven European countries, Presser et al. (2008) found that nonstandard working time did not vary according to family status: It was equally common among families with and without children, although minor differences were observed between countries. Looking specifically at the Netherlands, Täht (2011) found nonstandard working times to be more common among employees with children, while Presser et al. (2008) found that, among women, it was less common for mothers to work non-day schedules in the Netherlands and the United Kingdom. In the United Kingdom, however, employed fathers are more likely to work non-day schedules compared with employed men without children (Presser et al., 2008). In Finland, nonstandard hours are more linked to sector and profession than to family phase (Lehto & Sutela, 2008). Work-Family Conflict The increase in female participation in the labor force has been accompanied by increased attention to combining work and personal life (e.g., Gallie & Russell, 2009) and by debates on the causes of low fertility rates in countries where work-family policies are not implemented (León, 2009). Various approaches to describing and studying how work and family (or more broadly work and "life"; see Fagan, Lyonette, Smith, & Saldaña-Tejeda, 2012) are reconciled or balanced have been proposed. 
Despite the conceptual ambiguity, there is a consensus that the work-family interface is both bidirectional and double layered: Work can interfere with home, but home can also interfere with work, and experiences are both negative and positive (e.g., Greenhaus & Beutell, 1985;Kinnunen & Mauno, 1998). A substantial amount of workfamily research relies on a conflict orientation, where the demands of work and family are viewed as incompatible because of conflicts caused by time, behavior, or strain (e.g., Frone, Russel, & Cooper, 1997;Ruppanner, 2013). The present study focuses on time-and strain-based work-family conflict among dual earners. Greenhaus and Beutell (1985, p. 77) defined work-family conflict as "a form of interrole conflict in which the role pressures from the work and family domains are mutually incompatible in some respect." They distinguished between three types conflict: time-based conflict, strain-based conflict, and behavior-based conflict. Time-based conflict may occur when the time devoted to one role makes it difficult to participate in another role; strainbased conflict means that strain experienced in one role restricts involvement in another role; and behavior-based conflict occurs when specific behavior required in one role is incompatible with expectations in another role (Greenhaus & Beutell, 1985). Our study concentrates on the associations between time-and strain-based work and family conflict, which vary according to characteristics of the individual and of the work performed. The evidence on the effect of gender on work-family conflict is conflicting. Some studies have found that women experience more conflict than men (e.g., Hill, 2005;Voydanoff, 2005), while others report no gender differences (Kinnunen & Mauno, 1998;Shaffer, Joblin, & Hsu, 2011). At the individual level, work-family conflict is associated with both workand family-related demands and resources (e.g., Frone et al., 1997;Voydanoff, 2005). Work demands are associated with both time-and strain-related demands. Earlier research has shown that time demands at work include long hours, nonstandard work hours, and hurriedness at work (Gallie & Russel, 2009, Grzywacz & Marks, 2000Jacobs & Gerson, 2001;Kinnunen & Mauno, 1998). Interestingly, autonomy at work, although perceived as a resource reducing work-family conflict (Moen, Kelly, & Huang, 2008), can, when it is extremely high, be associated with increased work-family conflict (Drobnič & Guillén Rodríguez, 2011). Strain-related work demands include, for example, job insecurity and changes in work schedules (Mauno & Kinnunen, 1998;Voydanoff, 2005;see Fagan et al., 2012, for a review). In addition to lowering work demands, work-family conflict can be decreased through the presence of various work-related resources, in particular autonomy over working time (Annink & den Dulk, 2012), job satisfaction (Bruck, Allen, & Spector, 2002), and part-time work (Grzywacz & Marks, 2000). Family-related demands pertain to the presence of children, partner's working hours, and the family's economic situation. First, having children in the family potentially increases time demands and strain, particularly in the case of multiple and/or young children (Hill, Yang, Hawkins, & Ferris, 2004;Kinnunen & Mauno, 1998). Second, while single parents are often in the most difficult situation in the effort to combine work and family, having a working partner can also be a source of conflict. 
Gallie and Russell (2009) point out that because of the increase in the proportion of dual-earning couples, the focus of the analysis should be on the employment schedules of both household members rather than on individual work patterns. Research on dual earners, however, remains scarce, probably owing to the lack of statistical data that would allow family-level investigation. In one of the very few comparative studies on dual-earning couples in Europe, Gallie and Russell (2009) found that work-related factors explained most of the work-family conflict (29% of the variation) whereas family-related factors explained only 2%. Other family-related demands, such as other care responsibilities or the specific care needs of children, have typically not been included in quantitative research on this topic. Nonstandard Working Time as Threat and Opportunity in Family Life Evening, night, and weekend working is often perceived as a risk for family life in the research literature. Nevertheless, results are mixed. On the one hand, Strazdins, Clements, Korda, Broom, and D'Souza (2004) found associations between nonstandard working time and weakened functioning of the family, and problems in time use. Presser (2000) and Jekielek (2003) reported associations with partnership problems and even increased risk for divorce. Furthermore, several studies have examined the impact of nonstandard work on employees and their families, and found associations with reduced parental well-being (Liu, Wang, Keesler, & Schneider, 2011), increased relationship conflict and instability (Maume & Sebastian, 2012;Presser, 2000), and difficulties in parent-child interaction (Han, Miller, & Waldfogel, 2010;Mills & Täht, 2010). On the other hand, nonstandard working time may be a purposeful choice rather than a necessity for families (Liu et al., 2011;Presser, 2003;Täht, 2011). Parents may use their nonstandard schedules as a solution to their child care needs, by maximizing parental coverage of the child (i.e., split shift parenting; Presser, 2003). Furthermore, the traditional gendered division of household work is no longer so dominant and many parents nowadays share parenting more equally (Craig, 2011;Täht, 2011). Wight et al. (2008) reported that night workers in particular may have routines and time-use patterns similar to those of standard day-working parents, which challenges the negative view of the effects of nonstandard working times. Owing to the lack of suitable data, up-to-date cross-national studies analyzing work-family conflict are scarce (but see Allen, Cho, & Meier, 2014;Gallie & Russell, 2009;Rantanen, Kinnunen, Mauno, & Tilleman, 2011;van der Lippe, Jaeger, & Kops, 2006). More recently, Allen et al. (2014) examined the linkages between national policies and work-family conflict and found a positive association with the presence of policies reducing experiences of work-family conflict. In contrast, Gallie and Russel (2009) reported that welfare policies did not have the expected influence on work-family conflict among dual-earner couples in seven European countries. Their study showed that working conditions, particularly long working time, played a major role in explaining work-family conflict. This Study: Rationale for the Comparisons Between the Three Countries This study concentrates on Finland, the Netherlands, and the United Kingdom. These countries differ widely in their working time practices, particularly in relation to the length of working time and the extent of nonstandard hours. 
The EU Labour Force Survey (2012a, 2012b; see Table 1) shows that whereas there are no substantial differences in paternal employment rates, the employment rate of British mothers is lower than that of their counterparts in the Netherlands and Finland. Turning to working time patterns, significant differences can be seen in the parental part-time employment rate. Part-time work is not typical in Finland, where only 13.9% of mothers work part-time, whereas this equals 54.5% in the United Kingdom, and 85% in the Netherlands. This demonstrates that length of working time is an important factor to consider in any analysis of work-family reconciliation. Differences in working time characteristics reflect differences between countries in their production systems. The basic difference between coordinated and liberal market economies lies in the means adopted to secure economic competitiveness. Roughly speaking, in liberal market economies, such as the United Kingdom, this has been attempted by weakening workers' rights and working conditions, whereas in coordinated economies, such as the Nordic countries and the Netherlands, efforts have been made to uphold good working conditions through regulation and coordination (Gallie & Russell, 2009). Finland, the Netherlands, and the United Kingdom also differ in their welfare regimes: Finland represents the social democratic/Scandinavian regime, the Netherlands the corporatist, and the United Kingdom the liberal (Esping- Andersen, 1990). This in part explains the differences in working time patterns between these countries. Also, there are cultural differences and differences in the care policies (Kröger, 2010;Pfau-Effinger, 1998) and renders them interesting targets for a comparative study. Aims of the Study The aim of this study was to find out whether work schedules and other workrelated factors are associated with the experience of work-family conflict, operationalized as time-and strain-based conflict, among dual earners living in three European countries with different production and welfare systems. We first examined whether respondent's own work schedules are related to work-family conflict by contrasting parents in regular day work with those working nonstandard schedules. Second, we examined whether other work related factors-that is, working hours, changes in working schedules, hurriedness at work, and work satisfaction-are associated with dual earners' experiences of work-family conflict. Third, we analyzed whether partner's work schedules and working hours are connected to respondent's experiences of work-family conflict. Finally, we examined whether these associations vary between Finland, the Netherlands, and the United Kingdom. Respondents and Procedure This article analyzed data from a cross-national study titled "Families 24/7." The data were drawn from a survey targeted to Finnish, Dutch, and British parents with children aged between 0 and 12 years. Respondents were recruited via child care organizations, unions, and employers, which were invited by letter or email to promote the study. As in Finland day and night child care centers-which are rare in the other two countries-were invited to participate, Finnish parents working nonstandard work schedules are overrepresented in the data. Moreover, because of the procedures used in the recruitment of the respondents, we were not able to evaluate the response rate. The data collection took place between November 2012 and January 2013. 
Because the same survey was used in all three countries, the survey questionnaire was first prepared in English, and later translated into Finnish and Dutch. After this, back-translation by official translators was used for questions for which no official translation was available. Our total sample consists of 1,000 dual earning parents (318 from Finland, 334 from the Netherlands, and 348 from the United Kingdom), who were either married or cohabiting (Table 2). Respondents' age ranged between 21 and 58 years. The respondents from Finland were somewhat younger than those in the Netherlands or the United Kingdom. The differences were statistically significant. The majority of the sample was female. There were significantly more female respondents in the Dutch and the U.K. samples than in the Finnish sample. The majority of the respondents had completed tertiary education. There were statistically significant differences in educational level between the countries: In Finland, 42% of the respondents had completed tertiary education, whereas this was the case for 74% of the Dutch and 80% of the British respondents. The great majority of the respondents were married, and there were no differences between countries in this respect. By default, all the respondents had at least one child between 0 and 12 years, and the age of the youngest child varied across the countries. The Dutch families had the youngest children. The majority of the respondents had one or two children. There were Financial situation (0 = worst; 10 = best) Comparable data suggest that Finnish nonstandard working parents were overrepresented in our sample and Dutch and British nonstandard working parents somewhat underrepresented (Presser et al., 2008). Concentrating on shift work in particular, comparable data show that shift workers are overrepresented in the Finnish sample and, to a lesser extent, in the Dutch sample while somewhat underrepresented in the British sample when concentrating on employees aged 25 to 49 years (EU Labour Force Survey, 2012b; see Tables 1 and 2). Measures Background Information. Background information included the variables of gender (0 = man, 1 = woman), marital status (0 = cohabiting, 1 = married), and age of the respondent, highest level of education obtained (0 = lower than tertiary education, 1 = tertiary education), number of children, and age of the youngest child. In addition, a question on the financial situation of the family ("How would you rate your family's financial situation these days?") was included in the questionnaire (0 = the worst possible financial situation, 10 = the best possible financial situation). Work Characteristics. Work schedule was measured with the question, "What is your working time pattern?" There were seven response options (1 = day work, 2 = shift work, 3 = regular evening work, 4 = night work, 5 = morning work, 6 = irregular working hours, 7 = other), which for our analyses were dichotomized as either regular day work schedule (= 0) or nonstandard schedule (= 1; including shift work, regular evening/night/morning work, irregular work, and other work schedules). In addition, the respondents were asked whether changes to their work schedule occurred regularly (0 = no, 1 = yes) and to state their actual working hours per week. Hurriedness at work was measured with the following question: "Does your job involve working at very high speed?" There were seven response categories (7 = all the time, 1 = never). 
Work satisfaction in turn was measured with the question, "How satisfied are you with your current job?" (1 = very dissatisfied, 4 = very satisfied). Identical questions were asked about the partner's work schedule and weekly working hours. The respondent provided the information on his/her partner. Work-Family Conflict. Work-family time conflict was measured using three items (taken from Carlson et al., 2000): "My work keeps me from my family activities more than I would like," "The time I must devote to my job keeps me from participating equally in household responsibilities and activities," and "I have to miss family activities due to the amount of time I must spend on work responsibilities." There were five response categories (1 = strongly disagree to 5 = strongly agree). Cronbach's alphas were .61 (the Netherlands), .79 (the United Kingdom), and .84 (Finland). Strain-based conflict was also measured with three statements: "When I get home from work, I am often too frazzled to participate in family activities responsibilities," "I am often so emotionally drained when I get home from work that it prevents me from contributing to my family," and "Due to all the pressures at work, sometimes when I come home I am too stressed to do the things I enjoy." There were five response categories (1 = strongly disagree to 5 = strongly agree). Cronbach's alphas were .81 (the Netherlands), .86 (the United Kingdom), and .87 (Finland). Statistical Analyses The data from the three countries were analyzed using multigroup structural equation modeling (SEM). The analyses were performed using Mplus (Version 7; Muthén & Muthén, 1998. The estimation method used was MLR, which produces maximum likelihood parameter estimates with standard errors and a chi-square test statistic that are robust to nonnormality and nonindependence of observations (Muthén & Muthén, 1998. Model fit was assessed using chi-square, Tucker-Lewis index (TLI), root mean square error of approximation (RMSEA), and standardized root mean square residual (SRMR). Good model fit is indicated by a nonsignificant chi-square p value, RMSEA with values of ≤.06, SRMR with values of ≤.08, and TLI with values of ≥.95 (Hu & Bentler, 1999). The significance of the differences in chi-square values between the nested models was evaluated using a scaled chi-square difference test (Satorra & Bentler, 1994). The analysis was started by testing for factorial invariance. Similarity in the measurement level of each latent construct in each group is required in order to test for differences and similarities between different sociocultural groups (e.g., countries, genders) in a meaningful way (Little, 1997). As the focus of the analyses was on comparing predictive paths across the countries (and not on, e.g., comparing latent means), this analysis focused on metric invariance (i.e., invariance of factor loadings; Milfont & Fischer, 2010). For this purpose, for both latent factors (time-based conflict and strain-based conflict), the freely estimated measurement models (i.e., models with no requirements for invariant loadings) were compared using Satorra-Bentler chi-square difference tests with the models in which the loadings were constrained to be equal between the three countries. When needed, the information given by modification indices was taken into account. Exogenous variables were included in the models with a stepwise procedure. 
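For readers who want to reproduce this kind of analysis outside Mplus, the sketch below collects, in Python, the three quantities referred to above: Cronbach's alpha for the three-item scales, the RMSEA point estimate (including the common √G correction for G-group models), and the Satorra-Bentler scaled chi-square difference test used to compare nested MLR models. The formulas are the standard textbook ones; the worked RMSEA call plugs in the figures reported below for the final strain-based conflict model (χ²(43) = 87.93, three countries, N = 1,000), while the inputs to the difference test and the item matrix are placeholders, since the intermediate chi-squares, scaling factors, and raw item data are not reported in the text.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(item_scores):
    # item_scores: (n_respondents, n_items) array of Likert-type item responses.
    x = np.asarray(item_scores, dtype=float)
    k = x.shape[1]
    return k / (k - 1) * (1.0 - x.var(axis=0, ddof=1).sum() / x.sum(axis=1).var(ddof=1))

def rmsea(chi2, df, n_total, n_groups=1):
    # Point estimate; multiple-group models are conventionally scaled by sqrt(G).
    return np.sqrt(n_groups) * np.sqrt(max(chi2 - df, 0.0) / (df * n_total))

def sb_scaled_difference(chi2_r, df_r, c_r, chi2_f, df_f, c_f):
    # Satorra-Bentler scaled chi-square difference test for nested MLR models.
    # *_r: restricted model (e.g. loadings constrained equal across countries),
    # *_f: freely estimated model; c_* are the scaling correction factors.
    t_r, t_f = chi2_r * c_r, chi2_f * c_f                  # back to uncorrected statistics
    cd = (df_r * c_r - df_f * c_f) / (df_r - df_f)         # scaling factor of the difference
    t_diff = (t_r - t_f) / cd
    d_diff = df_r - df_f
    return t_diff, d_diff, stats.chi2.sf(t_diff, d_diff)

# Toy item matrix (placeholder), just to show the alpha call pattern:
demo_items = np.array([[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 2], [1, 2, 1]])
print("alpha ~", round(cronbach_alpha(demo_items), 2))

# Worked RMSEA call with the fit figures reported below for the strain-based model:
print("RMSEA ~", round(rmsea(chi2=87.93, df=43, n_total=1000, n_groups=3), 3))

# Placeholder inputs, only to show the call pattern of the difference test:
print(sb_scaled_difference(chi2_r=60.0, df_r=20, c_r=1.10, chi2_f=52.0, df_f=16, c_f=1.05))
```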
In Step 1, background variables, in Step 2, the respondent's work schedule and other work-related variables, and in Step 3, work schedule and working hours of the respondent's partner were included in the models. In each step, all the exogenous variables that had nonsignificant (using p = .10 as a limit) path coefficients in each of the three countries were omitted from the analysis before proceeding to the next step. After this, the equality of the path coefficients between the countries was tested for all the exogenous variables with Satorra-Bentler difference tests and, where possible, coefficients were constrained to be equal between the countries. Finally, variables with nonsignificant path coefficients (using p = .05 as a limit) in all three countries were omitted from the final models. Factorial Invariance For strain-based conflict (constrained model: χ²(4) = 6.15, p = .188, TLI = 1.00, RMSEA = .04, SRMR = .03), all loadings, and for time-based conflict (constrained model: χ²(3) = 1.661, p = .646, TLI = 1.00, RMSEA = .06, SRMR = .07), all loadings but one could be set equal between the countries on the basis of the fit indices and Satorra-Bentler significance tests. For timebased conflict, the factor loading for the item "My work keeps me from my family activities more than I would like" was set equal only between Finland and the United Kingdom. Thus, these analyses revealed that the requirement of similarity in the structures of the measures used to enable comparisons of paths between countries were sufficiently fulfilled. Time-Based Conflict For time-based conflict, four background variables (financial situation, marital status, education, and gender) had at least nearly significant (p < .10) associations with the latent variable in Step 1. The other three background variables (age of the participant, age of the youngest child, and number of children) showed no significant associations with the latent variable and were thus excluded from the following analyses. In Step 2, all the work-related variables (nonstandard working time pattern, working hours, working at high speed, work satisfaction, and changes to work schedules) were associated with time-based conflict in at least one of the countries. In Step 3, both partners' work schedule and working hours were associated with the latent variable. The Satorra-Bentler difference tests showed that for all the variables, except for participant's work schedule and gender, paths could be set equal between the countries. For gender, paths could be constrained to be equal only between the Netherlands and the United Kingdom, and for participant's work schedule only between Finland and the United Kingdom. On the basis of the final model, χ²(70) = 154.35, p < .001, TLI = .87, RMSEA = .07, SRMR = .05, experiencing higher time-based work-family conflict was associated with lower financial situation, having more working hours, more time spent working at high speed, feeling less satisfied with one's work, experiencing more changes in work schedules, and having a partner with fewer working hours (see Figure 1). In addition, among the Dutch and British respondents, being female was related to higher work-family conflict, whereas having a nonstandard work schedule was significantly related to higher work-family conflict in Finland and the United Kingdom. 
Marital status, education, and partner's work schedule did not show significant associations with time-based conflict after the constraints on the equality of the paths between the countries were removed from the model. Strain-Based Conflict In the model for strain-based work-family conflict, predictors of at least marginal significance in Step 1 were age of the youngest child, financial situation, education, and gender, and in Step 2 working hours, working at high speed, and work satisfaction. Neither of the partners' work-related variables were connected with strain-based conflict. All the path coefficients, except for the coefficient of working at high speed in the Dutch sample, could be constrained to be equal between the countries. Thereafter, the variables of age of the youngest child and education no longer had statistically significant associations with the latent variable and were thus excluded from the final model. Experiencing higher strain-based work-family conflict was associated with poorer self-reported financial situation, being female, working longer more hours, more time spent working at very high speed and being less satisfied with one's work, for final model: χ²(43) = 87.93, p < .001, TLI = .96, RMSEA = .06, SRMR = .04, as shown in Figure 2. The association between working at high speed and strain-based conflict was stronger among the Finnish and the British respondents, compared with their Dutch counterparts. Discussion Dual-earner families have become the prevailing family pattern in Europe, and many parents work outside the "standard" hours of schools and day care. These, along with variations in working conditions, are changing the linkages Note. SEM = structural equation modeling. Estimates with significance asterisks are unstandardized path coefficients. Estimates between square brackets refer to standardized path coefficients for the Finnish, Dutch, and British samples, respectively. *p < .05. **p < .01. ***p < .001. between work and family, bringing both new risks and new opportunities. Our study analyzed the associations between working time schedules and experiences of work-family conflict among dual earners. The main contribution of the present study is the use of data from three European countries, Finland, the Netherlands, and the United Kingdom, differing in working time practices, production systems, and welfare regimes. Via cross-national comparisons, which remain scarce in this line of research, the role of the macrocontext on families becomes more visible. Such a comparative setting can be particularly revealing on how families live their daily lives when the countries concerned have different societal policies on family and work. We found that whether one works during so called office hours or during nonstandard times was connected with experiences of time-based Note. SEM = structural equation modeling. Estimates with significance asterisks are unstandardized path coefficients. Estimates in between square brackets refer to standardized path coefficients for the Finnish, Dutch, and British sample, respectively. *p < .05. **p < .01. ***p < .001. work-family conflict in two of the countries-Finland and the United Kingdom-but not in the Netherlands. In addition, in the United Kingdom and Finland, experiencing changes in working schedules and, interestingly, having a partner with short weekly working hours, were related to higher time-based conflict. A nonstandard work pattern can theoretically have both a positive and negative impact on the family. 
It seems that in Finland and the United Kingdom, in particular, nonstandard working times are experienced as particularly difficult for the family and thus associated with time-based work-family conflict. Finnish and British families working standard hours showed a better fit with prevailing conditions, cultural expectations, and services. In the Netherlands, work-family conflict was not associated with nonstandard working time. It may be that services and policies that have a better fit with nonstandard working time are available to dual earner families in the Netherlands, compared with the situation in the other two countries. Cultural norms concerning childcare and working times may also have an impact. The better fit between nonstandard working time and family in the Netherlands may also be because of employment regulations, protection of workers, higher wages, strict opening hours, and work schedule regulations (Mills & Täht, 2010). Strain-related work-family conflict, in turn, was not associated with working time schedules but rather with other conditions of work such as hurriedness, long hours, and work satisfaction. These conditions were similarly related to time-based conflict across all the samples. Furthermore, individual and family characteristics, such as a poor family economic situation and being female, were associated with strain-based conflict. In previous research, nonstandard working time, particularly shift work, has been found to act as a stressor, resulting in adverse health outcomes (Costa, 2003). Our study did not find an association between nonstandard work schedule and increased strain-based conflict. This might be because of the mixed characteristics of nonstandard working time in our sample, which included not only on shift work but also on other types of nonstandard hours, such as regular evening or early morning work. The gender of the parent matters. We found that compared to men, women experienced more strain-based conflict in all three countries and more timebased conflict in the Netherlands and the United Kingdom. Our study also investigated how the work situation of one's partner was related to the experience of work-family conflict. We found only partial support for an association between these phenomena. Experiencing higher timebased conflict was found to be connected with having a partner with fewer weekly working hours. Partner's working time pattern, on the other hand, was not associated with either types of work-family conflict. This is somewhat surprising, as we expected to find an effect of partner's work schedule when controlling for the characteristics of the respondent's working time. The absence of an association might be because of the restricted information gathered on the partner in our data. This finding challenges the notion that household characteristics have a significant role in experiences of work and family conflict, and is in line with previous research (Gallie & Russell, 2009). However, the experiences of one member of the family may, and often do, spill over to other members of the family, such as children and partner (see Kinnunen, Feldt, Mauno, & Rantanen, 2010;Matthews et al., 2006). Especially with the increase in dual-earning couples across countries, experiences of work spillover from one partner to another will be of increasing importance in the future. Employed parents in families with a poor economic situation reported higher time-and strain-related conflict. 
Given the importance of family income to the employed population, it is surprising how little attention has been paid to the role of economic situation and work-family balance (see Fagan et al., 2012). To cite an exception, Schieman and Young (2011) found associations between economic hardship and family-to-work conflict in the United States. As our results suggest that individuals' perceptions on their family's economic situation is relevant when considering work-family conflict, more research needs to be done on this issue. This study also has its limitations. The first one concerns the representativeness of the data. The samples were not randomly selected; instead a lot of effort was put into targeting families that worked nonstandard hours. The data were collected via a web survey, and there was no possibility to evaluate the response rate. It is difficult, therefore, to know whether the results are biased in some way. Although similar recruitment strategies were used in the three countries to render the samples as comparable as possible, the country-specific samples differed from each other in certain key characteristics, such as demographics and the prevalence of nonstandard work. Although these factors were controlled for in the SEM models used, it is possible that this did not account for all the differences between the samples. Comparable data suggest that Finnish nonstandard working parents were overrepresented in our sample and Dutch and British nonstandard working parents somewhat underrepresented (Presser et al., 2008). In addition, the data were cross-sectional. Consequently, no causal interpretations can be made on the basis of this study. Moreover, it should be noted that the data were based on self-reports and thus the results of the study are dependent on the respondents' ability and willingness to report on the phenomena of interest. Last, owing to our limited number of respondents, we were not able to differentiate between spouses with respect to nonstandard work schedules. This could have prevented us from finding significant results regarding strain-based conflict, because effects of the different types of work schedules were lumped together. This study shows that work schedules are an important factor when analyzing time-based work-family conflict, particularly among dual-earner couples with young children. It seems that research should not concentrate solely on the length of working hours. Employment not only means a period of time away from the family, but its timing and requirements (e.g., rush or not) vary. Each of these means different demands and resources in combining work and family. Among dual-earner couples, the location of household work during a day or a week has a considerable effect on the daily life of the families that is not wholly negative. Whether schedules are self-selected (a choice) or a not is an important issue meriting further study. There is clear need for more comparative research on the 24/7 economy, societal and workplace practices, and family life. Declaration of Conflicting Interests The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
2019-05-06T14:08:53.596Z
2017-01-01T00:00:00.000
{ "year": 2017, "sha1": "b1c5b50e1f2759d3d54c8658f9087e4128d7157c", "oa_license": "CC0", "oa_url": "https://jyx.jyu.fi/bitstream/123456789/52191/1/tammelin%20revised247%20society%20and%20familynonstandard%20hours%2020.10.2014.pdf", "oa_status": "GREEN", "pdf_src": "Sage", "pdf_hash": "0ff750009a3cf84e2318e7741cc26a840f0b80e4", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "Psychology" ] }
221015864
pes2o/s2orc
v3-fos-license
First report of Xiphinema hunaniense Wang & Wu, 1992 (Nematoda: Longidoridae) in Vietnam Abstract For the first time, a survey of plant-parasitic nematodes in the Central Highlands of Vietnam discovered a population of Xiphinema hunaniense Wang & Wu, 1992. The Vietnamese population of X. hunaniense is characterized by having an offset lip region, lack of anterior genital branch, vagina directed backward, and a digitate tail. Morphological features and morphometrics of this population are in agreement with the type population of X. hunaniense except for some variations. In addition, molecular characterization of this population and phylogenetic tree of 28S rDNA sequences of the genus are also provided. The genus Xiphinema Cobb, 1913, commonly known as dagger nematodes, are migratory ectoparasitic nematodes that damage numerous wild and cultivated plants through direct feeding on the root and transmission of plant viruses (Taylor and Brown, 1997;Perry and Moens, 2013). This genus is distributed worldwide and is divided in two groups, Xiphinema americanum group and non-Xiphinema americanum group, with more than 260 valid species (Gutiérrez-Gutiérrez et al., 2012). The conserved morphology and overlapping morphometrics of some species groups in the genus Xiphinema make quarantine regulations and protection methods more difficult. Therefore, accurate identification of Xiphinema species using integrate approach is strongly recommended to create a basis for plant pest management. In Vietnam, eight species of the genus Xiphinema have been reported, however, molecular identification are not available for most of them (Nguyen and Nguyen, 2000), and thus, a higher diversity of Xiphinema spp. is expected in the country with the use of molecular tools. Herein, a population of Xiphinema hunaniense Wang & Wu, 1992 in Vietnam is characterized by the combination of morphological characters and molecular data. Material and methods Soil and root samples were collected from the upper 30 cm layer of forest soil in the Central Highlands of Vietnam. Nematodes were extracted and permanent slides were made following Nguyen et al. (2019a). Pictures and measurements were recorded using Carl Zeiss Axio Lab. A1 light microscope equipped with a Zeiss Axiocam ERc5s digital camera. For molecular characterization, the 5′-end region of 28S rDNA was amplified using DP391/501 primers (5′-AGCGGAGGA A A AGA A ACTA A-3′/5′-TCGG AAGGAACCAGCTACTA-3′) following Nguyen et al. (2019b). Forward and reverse sequences were assembled and analyzed using Geneious R11 (Nguyen et al., 2019b(Nguyen et al., , 2019c. The best fit model was chosen using Mega 7 following Nguyen et al. (2019b). Remarks The females of the Vietnamese population of X. hunaniense is characterized by an offset lip region from body contour, lack of anterior genital branch, vagina directed slightly backward, and a digitate tail (Fig. 1). Morphology and morphometrics of this population are highly similar to the type population of X. hunaniense except for smaller a value (43-48 vs 51-57), c value (42-49 vs 53-63), longer stylet (190-201 µm vs 180-187 µm), wider width at pharyngo-intestinal junction (43-44 µm vs 21-23 µm). However, these morphometric variations have been reported from other populations of X. hunaniense (Luc, 1981;Wu et al., 2007;Long et al., 2014). Two 28S rDNA sequences (1 bp difference) of the Vietnamese population of X. hunaniense were obtained, 942 to 944 bp long. These sequences are 98.9 to 99.5% similar (3-8 bp difference) to 28S rDNA sequences of X. 
hunaniense from other populations. The Bayesian inference phylogenetic tree showed that the 28S rDNA sequences of the Vietnamese population of X. hunaniense were placed together with sequences of X. hunaniense from other populations (100% PP) and that this group has a sister relationship (73% PP) to the sequences of X. brasiliense (Fig. 2). Morphologically, X. hunaniense is closest to X. radicicola, and it was therefore synonymized with X. radicicola by Loof et al. (1996). However, based on observations of different populations of X. hunaniense and X. radicicola, Robbins and Wang (1998) re-established X. hunaniense as a valid species, a decision supported by Zheng and Brown (1999). Although the 28S rDNA sequences of X. hunaniense have a sister relationship to those of X. brasiliense, X. hunaniense can be differentiated from X. brasiliense by its moderately offset terminal peg (vs a distinct peg-shaped tail), and X. brasiliense usually has a more posterior vulva position. In addition, the 28S rDNA sequences of X. hunaniense from Vietnam were only 87 to 88% similar (84-85 bp difference) to those of X. brasiliense. Owing to the conserved morphology in some Xiphinema species groups, i.e. X. hunaniense - X. radicicola - X. brasiliense (Zheng and Brown, 1999) or the X. americanum group (Gutiérrez-Gutiérrez et al., 2012), the combination of morphological characters and molecular data is needed to identify Xiphinema species. This is the first report of X. hunaniense in Vietnam supported by molecular data from 28S rDNA sequences, bringing the total number of Xiphinema species recorded in Vietnam to nine: X. americanum, X. brasiliense, X. brevicolle, X. diffusum, X. elongatum, X. insigne, X. longicaudatum, X. radicicola, and X. hunaniense.
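As a small illustration of how the percent-similarity figures quoted above (e.g., 98.9 to 99.5% between conspecific 28S sequences, or 87 to 88% against X. brasiliense) can be computed once the sequences have been aligned (for example with a standard aligner such as MAFFT or MUSCLE), here is a minimal Python sketch. The two fragments below are invented placeholders, not the actual ~940 bp 28S rDNA sequences, and the simple identity measure shown is only one of several conventions for handling gap columns.

```python
def percent_identity(row_a, row_b):
    # Percent identical positions between two rows of a pairwise alignment;
    # gap columns ('-') count toward the length but never as matches.
    if len(row_a) != len(row_b):
        raise ValueError("aligned rows must have equal length")
    identical = sum(a == b and a != "-" for a, b in zip(row_a, row_b))
    return 100.0 * identical / len(row_a)

def n_differences(row_a, row_b):
    # Number of differing alignment columns (substitutions or indel columns).
    return sum(a != b for a, b in zip(row_a, row_b))

# Toy aligned fragments (placeholders for the real 28S rDNA alignment rows).
row_vietnam = "ACGTACGTTGCAGGTACC-TTAGCAGGTTACCGGACTGA"
row_other   = "ACGTACGTTGCAGGTACCATTAGCAGGTTACCGGATTGA"

print(f"identity    : {percent_identity(row_vietnam, row_other):.1f}%")
print(f"differences : {n_differences(row_vietnam, row_other)} positions")
```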
2020-08-07T13:02:04.015Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "da2aa646ee23a4fe3972cc198923319f8188772e", "oa_license": "CCBY", "oa_url": "https://www.exeley.com/exeley/journals/journal_of_nematology/52/i_current/pdf/10.21307_jofnem-2020-078.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "008555972cf03bac9af6a80974f264e4544e8b46", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine", "Biology" ] }
142545741
pes2o/s2orc
v3-fos-license
ENGAGED EPISTEMIC AGENTS
Our aim in this paper is to throw some light on the kind of normativity characteristic of human knowledge. We describe the epistemic normative domain as that field of human agency defined by knowledge understood as an achievement. The normativity of knowledge rests on the contribution of the epistemic agent to the fulfillment of certain tasks. Such a contribution is epistemically significant when the agent becomes engaged in the obtaining of success. Finally, we identify some features associated with full epistemic agency (conditions of cognitive integration and epistemic autonomy) and elucidate what we mean by engagement by appealing to the idea of adopting an epistemic perspective.
KEYWORDS: epistemic normativity, normative domains, autonomy, engagement, epistemic perspective
In this paper, we address the problem of epistemic normativity. Our aim is to throw some light on the kind of normativity characteristic of human knowledge. To do so we describe what we call normative domains. We develop our views within the framework of virtue epistemology. In general, such approaches pertain to the so-called anti-luck epistemologies, that is, epistemologies that consider that knowledge is something not merely attained by luck. Thus certain expressions and vocabulary used in this paper may sound too idiosyncratic and dependent on these views. We contend, however, that our proposal of grounding normativity in commitments to accept regulation by constitutive properties of normative domains is largely independent of virtue epistemology and intends to elicit something that could be generalized to other views, especially those concerned with the topic of the sources of normativity. Our definition of normative domains, insofar as these characterize a dimension of our practices, aims to clarify how normativity can be linked to agency in the context of such practices. A normative domain is a certain field of human agency defined by the sort of achievement that characterizes it. In this sense, knowledge, considered as an achievement, constitutes the domain that is called "epistemic". In order to infuse normativity, an achievement must involve more than mere success in performing a task or an activity; it requires taking into consideration how success is obtained. Success is a valuable outcome of our activities but, as we will argue, the contribution of an agent to the obtaining of this particular state is critically relevant to evaluate it as an achievement. What is constitutive of the normative status of knowledge is explained in terms of how an epistemic agent is engaged in the task of knowing. It is the sort of engagement that leads to success that ultimately explains the normativity of knowledge. To elucidate what we mean by engagement, the last section of our paper proposes to appeal to the idea of adopting an epistemic perspective. We will proceed as follows. A first section will very briefly introduce the normative question in epistemology; the second section will place the issue within the framework of agent-centred epistemologies and will criticize a model that views engagement in terms of agents' motivation; the last section of the paper will develop the main tenets of what we will dub a constitutive model of epistemic normativity.
1. Epistemic Normativity
It is widely held that epistemology is a normative discipline.
As such, it pertains to what in section 3.1 we will characterize as a normative domain, that is, a field of human practices that establishes a certain standard of success due to the competences of the subjects in these practices. Although some philosophers have thought that one of the main epistemological tasks was to provide guidance for our cognitive conduct, the normative character of epistemological reflection is not exhausted by this regulative task. Moreover, the regulative task does not stand on itself. On the contrary, a justification of how normative standards emerge is also needed. With this aim in view, we seek to understand which are the normative conditions associated with believing and knowing. As cognitive beings, we want our beliefs to be correct and to count as knowledge, and we take this to be good and worth pursuing. Knowledge is the sort of thing we care about. Why is this? We consider it a good thing to possess knowledge and to strive for it. We also judge our cognitive activities in terms of how we think that beliefs should be formed in order to be justified, to be rational or to count as knowledge. It is not contentious that normative claims pervade our cognitive life and have real psychological and practical effects upon us. It is significant that most recent epistemology has turned its focus of attention towards questions regarding epistemic value. Some authors have even ventured to talk about a "value turn" in epistemology (Riggs 2008). The so-called value problem has become a point of departure. The problem is easy to formulate: why is knowledge distinctively valuable? Or, better, why is knowledge valuable over true believing? (Pritchard 2007, p. 86). A certain value-driven epistemology has grown up around these questions; thus the discussion on epistemic normativity has mainly been couched in terms of value. No doubt epistemology has to be concerned with identifying the fundamental epistemic values. But this does not exhaust the normative problem as such. A different possibility is to observe how a performance qualified as knowledge (Sosa 2011) reveals how the subject is committed to certain normative standards that constitute the valuable properties of the epistemic domain. Let us now briefly set forth which aspects of the normative problem we are interested in. Beliefs are subject to normative assessments. Talk of a belief as rational or as amounting to knowledge makes reference to the way the belief is in conformity with certain standards and meets certain requirements. Both standards and requirements determine the correctness of the belief. Correctness expresses normativity. And it does so because to take a belief as correct means to accept at once certain commitments in the regulation of belief. Naturally enough, the commitments that will matter are those that reflect the authority that standards and requirements exert on the believer. Specifically, to accept these commitments requires the believer to have some sensitivity to, even some feeling of being concerned with, the authority of the standards. So, when we talk about knowledge as valuable, we assess how the subject's commitment contributes to the success of the performance of so-called knowledge. And if our interest is the normativity constitutive of knowledge or the question of why knowledge is valuable, we should ask for the source of the authority of such demands. 
Thus if one wants to address normative issues appropriately, one should take into account the expected responses of those subjects that are sensitive to the requirements and standards distinctive of the performance of a certain task. Take, for instance, a situation where the agent's action is praiseworthy and one is prone to acclaim with regard to what the agent has attained: "Well done!" The appropriateness of the assessment will depend on the fact that such agents are sensitive to the values that reflect the correctness of the task. That is, they should be concerned by those valuable properties. This sensitivity could be expressed either (a) implicitly through the systematic cognitive responses of the agent, or (b) explicitly through a series of commitments to what the agent thinks she ought to do or believe in the epistemic case. In situations where agents exhibit this kind of sensitivity (in either of the two forms we have suggested), we can say that they adopt a normative stance. Such a stance consists of viewing and interpreting the situation as one where certain normative demands must be met in order to accomplish a particular task. In so doing, the requirements are taken to be authoritative over the subject. She will be prone to regulate her believings and disbelievings according to the requirements underlying the identification of those valuable properties that characterize the task. The adoption of a normative stance is a need for those beings that are able to be in charge of their lives and responsible for their deeds, no less in the cognitive realm. Therefore, the normativity issue that we are interested in could be put in the following terms: one must explain how an agent is capable of placing herself in a certain normative (epistemic) stance. Our point of departure is, then, the epistemic agent within an evaluative context, and the questions to be addressed are the following: 1. Which are the normative properties in force within an epistemic context? 2. In what sense can it be said that these are properties that exert authority over the epistemic agent? Let us notice that by "normative property" we don't necessarily mean "moral property". Some authors have suggested that such questions should be considered within a broader theoretical context, that of moral normativity. So, for instance, Zagzebski has claimed that "knowledge is important because it is intimately connected to moral value and the wider values of a good life. It is very unlikely that epistemic value in any interesting sense is autonomous" (Zagzebski 2003a, p. 26). Nonetheless, it should be pointed out that there is an obvious risk in embracing this dependence of the epistemic on the ethical. It is true that the value turn can benefit from drawing analogies out of the ethical domain. Certainly, when one adopts a normative attitude of any kind, a sort of good is always involved, but this only means that knowing, in our case, is regarded as valuable and agents as praiseworthy or blameworthy insofar as they are concerned with the normative status of the beliefs they hold. But are we committed to the idea that all the goods are, in a sense, connected to moral values? Although the aim of this paper is not to argue directly for the autonomy of epistemic normativity, we will contend that normativity denotes a sort of authority and that this authority is rooted in the way epistemic agents tackle cognitive problems and succeed in solving them. 
The idea is that the properties that confer the status of "epistemic" are not, in any interesting sense, moral; it suffices, with respect to the normative issue we are dealing with, that they are "constitutive" properties of knowledge considered as an achievement. 1
1 Compare our approach with the quite close views that Pamela Hieronymi holds regarding the normativity of believing (Hieronymi 2008). She distinguishes between the reasons one might give for answering the question of why someone believes that p and the reasons why to believe p. Believings, she adduces, can occur without reasons, but we are answerable for their content, even though such attitudes as believings are not voluntary. Analogously, we distinguish between the question of how valuable it is to know that p (because p is true), and the question of how valuable it is to know that p (because knowing is an achievement due to the appropriate exercise of our competences).
In the next section we shall examine how a version of the so-called agent-centred epistemologies deals with the normative question. Agent-centred epistemologies not only have the tools to address the value problem, they are also in a good position to confront the difficulties raised by the authority question, given the role the subject plays in the explanation of the normative status of knowledge.
2. Agent-Centred Epistemologies and Motivation
It is not surprising that the value problem has been of particular interest for reliabilist epistemologies, given the role that naturalist leanings play for their inspiration. Any view that regards knowledge as true belief derived from a reliable process is confronted with two immediate problems: first, the view is too liberal unless it offers some hints regarding how to restrict the sort of reliable processes that are knowledge-conducive; second, reliability as such does not seem to add any value to a true belief; a reliable process is valuable because it is truth-conducive, and its value is "swamped" by the value of the true beliefs it helps to obtain (Kvanvig 2003). Agent-centred epistemologies offer possible answers to both problems. There is a way to rule out "processes that are strange or fleeting" (Greco 1999, p. 286): to include among the knowledge-conducive processes only those that are stable features in the cognitive life of the subject and that shape her cognitive character. For Greco, this move also allows us to account for knowledge's being subjectively appropriate, so as to involve "sensitivity to the reliability of one's evidence" (Greco 1999, p. 289), and offers at the same time some hints regarding how to address the value problem. Now the value of knowledge derives from an exercise of the cognitive traits of the subject. Thus agent-centred epistemologies seek to settle the value problem by appealing to an epistemic agent who grounds what is valuable or normatively significant in cognitive activities. Let us assume that the cognitive character of the agent is given by a set of intellectual virtues. Virtue epistemologies are, primarily, agent-centred epistemologies. Firstly, they offer a definition of knowledge as true belief out of intellectual virtue. Secondly, in virtue epistemology, the success relevant for epistemic assessment is the one that is due to the agent, due to the exercise of the virtues that form her cognitive character.
Epistemically significant success is an achievement, that is, success attributable to the agent (Sosa 2003 and or something she deserves credit for (Rigss 2002, Zagzebski 2003aand 2003bGreco 2003 and2010). Linda Zagzebski has defended a version of virtue epistemology that embraces a very thick conception of the virtues. Each virtue includes a motivational component along a reliability component. Knowledge arises, according to her, out of an act of intellectual virtue (Zagzebski 1996). 2 So, motivation plays an important role in accounting for knowledge: an act of intellectual virtue is an act motivated by the motivational component of intellectual virtues, is an act an intellectually virtuous person would characteristically do in these epistemic circumstances, and is successful in reaching the truth because of these other features of the act. (Zagzebski 2003b, p. 152) Intellectual virtues may involve many different motives, but one of them is fundamental: the love of truth. The motive is a feature of the agent that makes believing more valuable in a way that is not open for the reliability component. And this is so because the motive as such has a sort of value that can be conferred on the acts of intellectual virtue that it motivates. "I propose that love of truth is a motive that confers value on acts of belief in addition to any other value such acts might have" (Zagzebski 2003b, p. 149). When we know that the act has been motivated by the love of truth and not by any other spurious motivation, we admire the act as better even if in both cases we have succeeded. We are not interested here in criticizing Zagzebski's views on virtue and knowledge, but to outline a model that could account for the normativity of knowledge by appealing to the motives of the cognitive agent. 3 We will dub it motivationalism. The interest of motivationalism lies in two facts: first, it is easy to see how it could illuminate the distinctive value of knowledge; second, it gives a certain image of the agent's involvement in the acquisition of knowledge (through the acceptance of beliefs out of a well-motivated act of intellectual virtue). The emphasis is shifted on to agents, but just one aspect of virtuous agency is essential in accounting for what is distinctive of the epistemic status of knowledge, that is, the motive that guides the cognitive activity of the agent through her acts of virtue. This internal link between acts of virtue and motivation allows to distinguish between the deeds of the agent and mere events that happen to him. Thus motivationalism accounts for epistemic normativity in the following way: (M) S's acts (believings) acquire an epistemically normative property (and thus they become acts he deserves credit for) if and only if they are successful (true believings) due to the fact that the agent acted (believed) moved by an appropriate (epistemic) motive. For the agent to be creditable for her knowing state, the motivationalist claims, it matters how it has been attained, that is, how it has been motivated. There is a right way of attaining the epistemic good we ascribe to knowledge and this is the attaining of the good with the right motivation. This also explains the responsibility of the knower in the acquisition of valuable epistemic states. 4 By referring to the virtuous acts of the agent as part of what defines knowledge, this view attempts both to solve the value problem and to account for the responsibility of the epistemic subject. 
We could say that by being subject to the "authority" of good motives, the agent becomes responsible for her epistemic deeds. Authority accrues from the motives to the believings. Moreover, the subject gets credit for her acts even if the acquired beliefs are false. It is enough that the intellectually virtuous person does what she would normally do in the circumstances, guided by the right motivation. There is a straightforward objection to any view that takes motivation to be a constitutive feature of the normative status of knowledge. It is extremely easy to identify cases of knowledge where it is at least dubious that there are any motives involved. For Zagzebski, any virtuous act is characterized by reference to a motivational component, but this component is not to be found in ordinary perceptual or memory beliefs. Could such cases count as knowledge? So presented, the objection has some initial plausibility, but we cannot ignore the fact that it can be used against any agent-centred epistemology that assumes that perceptual and memorial processes are performed automatically, without any significant agential contribution. However, there is a possible general answer to these worries: the agential contribution is made visible when perception and memory become part of a web of commitments that form a background for the evaluation of our perceptual and memorial beliefs. In human knowledge, perception or memory perform virtuously when the outcomes become rationally sensitive to defeaters. Motivation could now enter the description of the process as a feature of our capacity to become sensitive to defeaters. Thus the love of truth also guides us in our acceptance of perceptual and memorial beliefs. The crucial issue in motivationalism is whether the motivational component of the virtues plays any significant role in accounting for the normative status of knowledge. Let us assume for the moment that virtues involve motives; it would be undesirable to describe our cognitive life as lacking any emotional and motivational drives. And yet the question remains: in what sense do they contribute to constituting an epistemic normative status? The key claim in motivationalism is that motives really affect the act of believing in such a way as to render it somehow epistemically better. But it is not difficult to see why this cannot be true. Motivation is not constitutive of achievements, at least in the cognitive domain. Let us draw an analogy with other kinds of performance where excellence and motivation could also be involved. Consider the case of a famous physician whose diagnostic capacity is greatly admired; let us suppose that he is even better in particularly challenging and difficult situations. Is it true that we would admire his performance as a physician because of the motivations that guide him in carrying out his tasks? Imagine a physician who is never concerned about whether his motivations have anything to do with care for his patients' wellbeing. Let's call him Dr. House. How is motivation supposed to enter into the assessment of his medical performance? Vary the motives and check whether the quality of the performance is affected in a significant way. Case 1. Dr. House has an excellent record of performances in healing his patients, but he has never been moved by concern for their well-being. Case 2. Dr. House is a very clumsy, but lucky performer, though he has always been moved by concern for their well-being. Case 3. Dr. 
House is an adroit and safe performer, moved by the best motives with regard to the health of his patients. A motivationalist is committed to the claim that case 3 shows how the value of the motives accrues to the value of the performance, that case 2 reveals how motivations are not sufficient to add value when they do not constitute an agential contribution to the excellence of the competence involved, and that case 1 does not challenge her position because she only considers cases where performance is really due to the motivation. But in this last case she must also offer an explanation as to why the performance is excellent. Or does it not make sense to speak of an excellent performance when this is not due to the motivation of the agent? If it does -she could claimit is because it is due to the reliability component of the virtues Dr. House exerts. And we have seen that reliability cannot account on its own for the value that accrues to the outcomes in addition to what makes them valuable in the first place. So the motivationalist could reply that in case 3, where the performance is due to the good motivations, this performance is of a new kind more valuable in virtue of the motives, and that is why it involves a new sort of excellence. Motives modify performance. As the set of examples shows, there is a sense in which performance is independent of motivation, but this fact does not say anything against the possibility that the performance itself has been modified by the motives. We cannot measure the value of the performance in terms of external outcomes. We risk being under the influence of the "machine-product model", as Zagzebski herself calls it. We cannot think of the outcome as external to the act; if we want to have a valid analogy with the cognitive realm, "the intended outcome is a property of the act itself" (Zagzebski 2003b, p. 151). Now, in what sense do motives change Dr. House's good performance? Truly, in no significant sense with regard to the exercise of the competence of diagnosing and healing patients. This competence remains unaffected by whatever his motives are. And, as case 2 shows, in contrast with the other two cases, the performance is appropriate because of him; it is not just the result of healing people that makes the performance good; he is doing it in the right way, he is not being lucky. It is not true of him that he heals the patients because he desires their health, even if he does. It could even be true that he acquired such a competence of diagnosing and healing because he really cared about sick people; but this is contingent regarding the normative standards that are constitutive of a competence in diagnosing and healing. Again, someone could object that epistemic competencies are different in this respect. Consider, for instance, an analogy with moral virtues. Acts motivated by compassion are better than mere acts that lead to relieving suffering. And truly enough, acts of compassion are good even if they don't actually achieve the aim of relieving suffering. Cognitively virtuous acts are alike; acts motivated by an aversion to falsehood are better than acts that merely aim at avoiding falsehoods for spurious motives. And again, it seems true that acts of virtuous believing motivated by an aversion to falsehood are good even if they don't avoid the false But, in virtue of what? We would say, in virtue of the modal connection between the use of a competence and the avoidance of false beliefs. 
Performances would deserve positive evaluations even in those worlds where the Cartesian evil genius is acting, insofar as they are controlled by a competence that would lead us away from falsity in a normal world. First, the situation would be the same when we are not motivated by the avoidance of falsehoods, insofar as the competence is well constituted as regards the aim of avoiding falsehoods. And second, the same does not seem to be true in respect of compassion. This does not mean that there could be no situations where bad motivations might affect our truth-tracking competences. The motivationalist could reply that, in these cases, motivation is normatively relevant for the assessment of belief. But this is not the question at issue: what matters is whether motivation constitutes good truthtracking in such a way that only if I am rightly motivated by the love of truth my attainment of the truth is virtuous and counts as knowledge. There could be belief-forming processes that, even if they were to lead us to the truth, wouldn't do it in the right way. But is the appropriateness to be explained in terms of motives? Consider a case of wishful thinking. The subject does not seem to care about the truth; she cares just about what she wants to believe. It seems as if she believes because she so wants to believe. Imagine that she is right in her belief because there is evidence for the belief. Insofar as we cannot just believe like that, at will, there must be "some connection between the fact that there is good evidence for the belief and my belief" (Zagzebski 2003b, p. 151). But does she believe because of that good evidence? The force of Zagzebski's argument depends on viewing the first "because" as normative. "Not caring about the truth" regulates the belief in such a way that prevents the evidence from playing its epistemic role. That makes the second "because" a mere psychological one, without normative force. It is as if the roles had been reversed. The epistemic fault resides in not taking the evidence as evidence, that is, as normatively relevant in the formation of the belief. Motivationalists would have to claim that it is just by caring about the truth that we are able to take evidence as evidence. But caring about the truth is a mere psychological disposition that is not constitutive of the normatively relevant fact, which is the exercise of epistemic competencies that answer to evidence as evidence. The question is not whether I can take the evidence as evidence without caring about the truth; we are probably beings unable to do so. The question concerns which feature is playing a normative epistemic role in the regulation of belief. It is possible to conceive a case where someone obtains knowledge because she adequately responds to the evidence, but her motivational conditions regarding the truth of a particular belief are not appropriate. A good motive is the psychological condition that reflects the subject's disposition to conform to certain normative standards. But the motive as such does not constitute either the normativity of the standards or their authority. It must be considered a symptom that could help to decide whether an agent is engaged in a certain task, but it does not determine correctness in performing it. So it is not accidental that in order to distinguish a good motive from a bad one, we first need to have some conception regarding which normative properties are constitutive of the domain in question. 
It is therefore not surprising that we use tautological expressions to describe a good performance by referring the right motives: the good performance of avoiding falsehoods is moved by the good motive of avoiding falsehoods. And the same happens with truth or knowledge as motives. . A Constitutive Model of Epistemic Normativity Our argument so far has established the need to introduce considerations about the nature and place of the epistemic subject in order to account for the normativity that characterizes the epistemic domain. But we have also seen how the contribution of the subject cannot be just theorized in terms of her motivational involvement in cognitive tasks. The normative issue cannot be solved by insisting on agents' motives, mainly because although their psychological force could contribute to explaining why they behave in the way they do, this fact does not tell us anything about the correctness of their epistemic agency. In this section we will sketch a model of epistemic normativity focused on the way a certain domain of activity is constituted as normatively stable. We are interested in identifying those features that become authoritative for the agents within the domain, and in explaining why they are normatively compelling. Our aim is to characterize what we will call a "normative domain" and apply the results to the "epistemic domain". We will first argue that a conception of knowledge as achievement is central to the constitution of the normative domain of epistemology; second, we will argue that an agent contributes to the constitution of the domain insofar as she becomes involved and engaged in certain tasks. The result will be a kind of agent-centred epistemology that accounts for epistemic normativity in terms of how the agent is engaged in the accomplishment of certain tasks. . 1 . 1 . Normative Domains Human activities are built around some acts, facts and, sometimes, artefacts that are proper to them. Driving, for instance, involves human behaviour, conventional facts and obviously some artefacts (roads, vehicles, signposts, etc.) that help organize the task in an appropriate manner. Or take horse races: many facts about horse behaviour or horse characteristics, together with jockeys' equestrian skills and the artefacts that make riding possible, characterize the domain of horse races. Entities within the domain are interestingly evaluated along many dimensions. Consider horses. They are assessed in terms of elegance, strength, or speed. But not all of the assessments are at the same level in the domain of horse racing. A plethora of possible assessments seem to converge towards a fundamental value: speed. This convergence makes it possible to identify a success condition that will characterize the activity. We will call normative domains those fields of human activity that are defined by their own sort of achievement. Regarding the overall field of human agency, only one type of success can be regarded as constitutive of a normative domain, and this is the success due to some accomplishment on the part of the agents when they carry out the tasks proper to the domain. Success due to the work of the agents is usually considered an achievement. Thus normative domains are built around their corresponding achievements. These become goals pursued by the agents and in reference to which the activities of the domain are ordered. 
5 For the agent to regard it as an achievement, the aim must be reached in virtue of the natural or social endowments, the faculties or the agential conditions of those who pursue them. If an achievement has become a goal worthy of being pursued it is because we have shifted our natural dispositions to put the blame or praise for our deeds on the qualities of the agents. This shift means that we honour certain acts that lead to success without the intervention of luck in terms of the contribution made by the appropriate dispositions and engagement of the agents. Within the domain, success is valuable, but it does not constitute the normative domain as such. Lucky success is valuable, but it is far from contributing to explain how a domain of activity can become normatively stable and compelling for the agents. In our view, it is the way the agents are involved in the attainment of success that helps to explain how a domain is constituted as normative. Agents need to contribute to the obtaining of valuable goals through the use of their faculties, skills and competencies. Moreover, many normative domains, once constituted, are subject to sanctions and quality controls. Most of them are even socially instituted in order to set standards as regards the abilities and responsibilities of agents. Activities like driving, teaching, health care, etc., are good examples of socially regulated normative domains. It is important to note that drivers, teachers, or physicians are not assessed for what their motivations are supposed to be, but for their competencies and their willingness to assume their respective professional responsibilities. In other terms, they are evaluated for their capacities to reach the goals proper to their domain of competence; they are assessed for their achievements. . 1 . 2 . The Epistemic Domain Cognitive activity also constitutes a normative domain in virtue of the sort of achievement that is proper to it. There is a domain of epistemological significance wherever the achievement that we identify with knowledge matters, as opposed to mere success (true belief). As a normative property, "knowledge" names a human attainment. It is success due to the work of agents, just as in any other normative domain. At some stage of their cognitive development, human beings start to undertake activities and practices oriented towards the obtaining of knowledge. Here too the agential conditions (faculties, competencies) must contribute to the attainment of the aim. Again, agents will now become the primary objects of assessment, of praise and blame, and their assessment will have to do with the way they are involved in achieving success within the domain. The credit we attribute for believing truly is accounted for in terms of the contribution made by the dispositions and the appropriate engagement of the agent in the attainment of success. Lucky true beliefs do not suffice to constitute a normative domain of epistemic significance. One can imagine a stage in the cultural development of a community where knowledge in our sense was less valuable than, for example, perseverance in a tradition, the acceptance of social inherited testimonies or even oracles. But the important thing is that at some point of a possible cultural trajectory truth becomes valuable because it is obtained due to the contribution of reliable cognitive skills of the agents. 
It can be claimed that an epistemically normative domain is fully constituted when such a stage is reached, that is, once putative agents are assessed in such practices according to their competencies to reach the goals. At some moment during the process, and as a result of the stability and acceptance of such evaluative habits, agents become engaged in normative tasks. They may even become fully aware of their involvement in a normative practice. This realization is at the core of the conception they have of themselves as epistemic agents. As far as they act autonomously, they take a responsible attitude towards their own engagement. Thus epistemology became a "domain of normative criticism" (Sosa 2007, p. 77), or a "critical domain" (Sosa 2007, p. 73) in virtue of two main features: a) human beings prove themselves skilful enough to attain the aim of knowledge; b) the world is benevolent enough to allow for such attainments. Thus knowledge became a normative achievement due to the convergence of certain factors that made it the fundamental value of the domain just when the agents' contribution was in place. In our conception, the main tenets regarding the epistemic normative domain follow the lines of Sosa's epistemology. In his recent book, A Virtue Epistemology. Apt Belief and Reflective Knowledge, he suggests that epistemic virtues and competencies are constitutive of the attainment of fundamental value (Sosa 2007, p. 88). Our idea is that solving the normative question in epistemology involves adding some further considerations regarding the way agents intervene in the acquisition of knowledge. The very constitution of the epistemic domain will depend on the way the subject can be engaged in an epistemic task, so that success is attained in virtue of that engagement. The following three features would summarize the constitutive characteristics of the epistemic domain: 1) Both truth and how truth is attained are valuable within the domain due to the contribution of the agents. In a sense, truth remains the fundamental value in epistemology, but only to the extent that it is qualified as "deserved" truth, that is, truth attained due to a virtuous competence. 2) Epistemic virtues, understood as stable and reliable dispositions, are crucial facts in the constitution of an epistemic normative domain. They are the qualities for which the agent is primarily praised and blamed. 3) To attain the truth due to the subject's competencies (or virtues) is the constitutive fact of the epistemic domain. Beliefs thus acquire the property called aptness. A belief is apt when it is true because competent (Sosa 2007, p. 23). 6 To sum up, the "epistemological game" is constituted as a normative domain when truth is considered a valuable prize for those agents who virtuously win it. There are other ways to attain truth, but their epistemological interest is limited. They do not contribute to creating a space where normative properties constrain the work of agents. . 2 . Epistemically Normative Engagement As we have seen, agent-centred epistemologies in the virtue-theoretic tradition explain the normative status of knowledge in terms of the kind of success that is reached through virtue, that is, through the competent exercise of skills and abilities (Greco 1999 and2010;Sosa 2007). Such competencies must be "seated in the agent" (Sosa 2007, p. 86), but there could be many different models of what it could be for a competence to be seated in the agent. 
In any case, the basic claim is that a true belief would amount to knowledge because it arises out of the exercise of an agent's competence. She should be genuinely involved in getting the knowledge. What is at issue is precisely to what extent the agent is involved in the appropriate exercise of her abilities. True enough, for this involvement to be effective, it would be too demanding to always require an explicit reflective attitude towards the cognitive conditions (including the belief-states, the abilities at her disposal and the circumstances) she is placed in. So, we shall distinguish between the agent's mere involvement, in which she reliably exerts her cognitive abilities in a way that leads to knowledge, and a complete engagement in the epistemic objectives, where such a task demands the ability to calibrate the adequacy of the particular constraints the epistemic scene presents. 7
7 The question about what the epistemic involvement of agents consists in exceeds the aims of this paper. It could be claimed that it reflects at least the default epistemic work of the agent, that is, the exercising of her competencies in normal circumstances such that the competencies make a salient causal contribution to the success. In a sense, the agent obtains a degree of epistemic success that could be called normative without intervening as a fully engaged normative being in the task.
a) Epistemic Agents
Our constitutive account concedes a very significant place to agents in the normative structure of a domain. Thus our model must be completed with some remarks about the constitution of an epistemic agent. Talk about epistemic agents is not without problems. It immediately suggests some commitment to voluntary and freely chosen acts of believing in analogy to intentional action. And it is generally agreed that human beings are not epistemic agents that could choose the beliefs they entertain in the very same sense in which they can intentionally decide to act. Nevertheless, it still makes sense to say that we are active (and not merely passive) in our believings and disbelievings. Our activity in the cognitive realm is exhibited in the way we exercise certain virtues and abilities in the control of our beliefs. We are actively engaged as long as we display, through the use of our faculties and virtues, some sensitivity to the standards that govern belief acquisition processes. If this is so, we then need to identify the conditions under which the agent could be said to be adequately sensitive to such standards. In general, the agent displays this sensitivity insofar as she exerts a control over the outcomes that result from the use of her faculties. 8
8 Pamela Hieronymi has developed interesting ideas about the kind of control we can exert in our doxastic life. Being active regarding our beliefs does not amount to controlling a certain state of affairs as the outcome of our acts. We control some of our thoughts, in general, by thinking them. See Hieronymi 2006.
We need to say something with respect to how the competencies work to control the outcomes. There is a sense in which we could say that each faculty delivers some truths working in isolation from other faculties. But agential control requires something else. A first idea that must be considered is that we cannot conceive an agent as a mere bunch of (reliable) faculties. Virtue epistemology views the virtues as features of a person, meaning that it is the whole subject who will be assessed as the "author" of the belief.
This points to a second idea, one that is central to our consideration of what is needed to become a full epistemic agent. The good working of an epistemically virtuous agent must be viewed in terms of the cognitive integration she is capable of. The agent whose contribution is constitutive of the normative achievement of knowledge must be conceived as a cognitively well-integrated subject. This requisite has two different dimensions: one concerns the identification of the cognitive architecture that is necessary to talk of integration in the system; it is not implausible to think that the integration will depend on the acquisition of capacities of metacognition, that is, the capacities to evaluate one's own cognitive performance. Metacognition is a psychological competence whose function is to monitor and control the cognitive status of the system (Koriat 2000). As a psychological device, metacognition can be considered a necessary component of the architecture of a well integrated agent, although it would scarcely be sufficient since such integration also requires the actual working of the different capacities to satisfy a certain degree of coherence between their products. In this regard, the other dimension makes reference to the harmony with which the cognitive system needs to work in order for beliefs to attain the status of knowledge (Breyer and Greco 2008). There must be some monitoring mechanisms in place that help to identify conflicts in the information delivered by the different sources. This requirement of "cognitive integration" is an essential factor in the agent's engagement in cognitive tasks. The integration requirement argues for a consideration of the level of the agent as the proper level of epistemic assessment, rather than the variable performance of the different competencies. The reason is that such deliverances, although essential in the explanation of why an agent achieves knowledge, are not as such sufficient, because they are mainly considered in a piecemeal way. Compare, for instance, the attribution of health to the overall functioning of the organism rather than to the proper working of a particular organ. Our conception of epistemic agents involves a third idea. A requisite of autonomy derives from certain constraints regarding how the subject is engaged in an epistemic task. Again, talk of autonomy within the cognitive realm can be regarded with suspicion. Is not autonomy a capacity for self-determination and self-legislation? And does it make sense, for instance, to talk about a doxastic legislator governed by an epistemic rational will? So, in what sense does engagement require autonomy? Consider the following. To be a full epistemic agent seems to require the subject to be able to attend to the circumstances under which she would be disposed to accept a true belief and eventually realize that she herself is involved in this kind of circumstances. In addition, the subject must be able to calibrate the adequacy of her faculties and abilities, that is, she must be in a position to assess to what extent her faculties are appropriate for the epistemic task she is engaged in. She will be in control of the epistemic task so long as she regards the abilities she is disposed to exercise as her abilities, and the circumstances as those where her abilities presumably would attain their cognitive goals. 
In this task, a mere exercise of a metacompetence, as Sosa (2011) proposes, is perhaps not sufficient to reach the level of full agency required for attaining a full level of knowledge. A knower performs as a unified subject when she is able to evaluate her own responsibility in the process of knowing. In this situation, the agent adopts a belief such that, when successful, it could be said that the achievement (knowledge) is attributable to her abilities. This leads us to a version of the autonomy condition that is congenial to virtue epistemology. An epistemic agent acts autonomously insofar as she manifests her character in the "acts" of believing or disbelieving certain propositions. In order to do that, the agent must be able to take herself as a knower that weighs her own abilities in each epistemic situation she is engaged in. The agent is thus displaying a certain sensitivity to herself as a knower, that is, she is answering to the demands of the situation as essentially epistemic and normatively compelling. A full epistemic agent is a cognitive being such that she "acts" by the conception she has of her own epistemic condition. That does not answer to a requisite of self-determination as self-legislation, but it is in conformity with the other Kantian ideal of autonomy: a capacity for thinking for oneself. Now an autonomous epistemic subject is one endowed with a set of capacities that allow her to take herself as the owner and as the assessor of these beliefs. She views herself as the source of those beliefs that are epistemically grounded on the adequate exercise of her own competencies. 9
9 In a recent paper, Fischer and Tognazzini (2011) distinguish between two aspects of the claims of responsibility: attributability, that talks "about the connection the agent has with her action", and accountability, that talks "about the potential interaction the agent might have with her moral community" (p. 381). We analogously distinguish between the same aspects regarding epistemic appraisal. The normative stance we are defending here has to do with the first aspect of attributability to the agent. But other concerns can be identified regarding the claims that a community of knowers could make on the epistemic agent. We thank an anonymous referee for an objection in this sense.
Thus, once we have cleared up what we understand by normative domains and by engagement, we are now in a position to introduce a schema that would summarise the view on epistemic normativity that we propose as an alternative to motivationalism. (C) S's acts (believings) acquire an epistemically normative property (and thus they become acts she deserves credit for) if and only if they are successful (true believings) due to the fact that the agent acted (believed) by virtue of her engagement in an appropriately constituted domain. It is not the fact that the virtues are constituted by intrinsically valuable motivations that explains why epistemic achievements are valuable, but the fact that an agent is appropriately engaged in tasks whose aim is to achieve the acquisition of knowledge.
b) Engagement and Epistemic Perspective
There is a sufficient condition for a subject to attain the level of epistemic agency. We propose that such a condition is that of being able to take an epistemic perspective towards oneself. In our view, one takes an epistemic perspective when one considers oneself to be confronted with a particular cognitive problem such that one could be in a position to justifiably claim "I know that p". This is the way we consider that the agent is taking a reflective stance on her own possibilities for knowing in a particular scenario.
Such a reflective turn is equivalent to a situation in which the agent is able to place herself in the particular circumstances of knowing and then to calibrate how favourable such circumstances might be in order to attain the goal of knowledge. A brief sketch of what we mean by the idea that to take an epistemic perspective is sufficient for considering a subject an integrated and autonomous agent could be the following:
1. To take an epistemic perspective is something that involves an unavoidably singular point of view, that is, a singular first-personal point of view in a particular situation.
2. By taking such a perspective, the agent makes herself present as an agent in the epistemic scene with her peculiar first-person authority.
3. Taking a perspective on one's own cognitive possibilities implies that the agent is in fact sensitive to the epistemic conditions in which she finds herself.
4. As a consequence of this sensitivity, she is able to see the situation in the light of the epistemic, normative properties that are in force within the domain. 10
10 Our notion of adopting an epistemic perspective qualifies Sosa's perspectivism. Our view emphasizes above all the active presence of the subject through the normative engagement in an epistemic task.
In order to support our claim that an epistemic perspective is a sufficient condition of epistemic engagement in a normative task, we suggest an analogy between taking a perspective and the attentional process. Consider the following examples: A first analogy links driving and attending. Thus, let us suppose that someone is driving correctly. Very often, she is not required to attend specifically to the road conditions. Nonetheless, at certain moments, the subject says to herself: "Hey! Pay attention to what you are doing!" Putatively, this recognition involves an evaluation of the road conditions, an additional evaluation of her own capacities to cope with the situation, and finally, a conscious involvement in the situation. When these conditions are met, it can be said that this driver is concerned by the normative claims involved in apt driving and that she views the situation under the conditions of a normative engagement in it. A second analogy helps us to highlight this point. Consider for instance the case of a painter: when she takes a perspective on her painting, we can consider that she is mainly attending to the relevant aesthetic properties of her work. The agent then takes part in a normative domain by ensuring that her work is constrained by these properties she attends to. In neither of these examples is the situation over-intellectualized in any significant sense. Firstly, because only in a few cases does the subject need to put into play a complex mechanism of explicit recognition and reflective guidance of her behaviour. Secondly, because this recognitional capacity works, in our model, in analogy with the attentional processes. 11
11 We are not contending that in every case of acquisition of knowledge the subject must reach such a demanding level of epistemic self-consciousness. But it is at least required (this is our proposal) when the agent's responsibility for knowledge is at stake.
Our idea is that both the process of paying attention and taking a perspective share certain relevant features:
1.
Both are transparent in the sense that one is attending to the thing itself and not to an internal representation. Thus, attention focuses on the objective features of a scene under a public description. Analogously, perspective focuses on the objective circumstances of knowing, and not just on the internal states of the subject.
2. Both are essentially perspectival: they depend on the point of view of the "observer".
3. Both have internal as well as external success conditions. Attending, analogously to taking a perspective, makes you aware of these conditions. Attention can be fragile either because of internal working deficiencies or because of environmental circumstances.
Thus, when one adopts an epistemic perspective, as when one attends to something, it can be said that the agent is made present in the cognitive scene as a cognitive agent. One cannot notice what is happening without knowing oneself to be concerned by the normative claims in force within the epistemic domain. In this regard, our conception of epistemic normativity requires that the subject exhibits a certain sensitivity to the correctness of her own epistemic standing when performing a task, and it takes some distance from a conception of normativity in terms of mere performance, as seems to be Sosa's view. When taking an epistemic perspective, the agent is attending to the epistemic claims of the situation (and not to other kinds of claims, for instance aesthetic or ethical ones, even though they could be relevant to the situation). By doing this, the agents are viewing the situation as one in which they need to engage normatively in order to achieve the task successfully. 12
12 We are here supposing that the adoption of a normative stance is something that is made within the overall cognitive working of an epistemically virtuous agent who reliably reaches true beliefs. Otherwise, we would be liable to the obvious objection that the agent could think herself to be in a good epistemic position while she is not. On the contrary, we take the virtuous agent as someone who could also virtuously adopt a normative stance on her epistemic engagement.
And this, and no more, shows an explicit engagement in the particular cognitive situation. An agent engaged in an epistemic task is an agent capable of adopting an epistemically normative stance. When the agent is engaged in the epistemic task, she is identifying, at least in an implicit way, those features of the situation that are normatively compelling. So by becoming an engaged agent, she is in a position to respond to the normative properties that are constitutive of the epistemic domain. To accomplish the task can now be seen as the result of meeting the normative demands of the situation. We have understood engagement in terms of how the epistemic subject is attending to the success conditions of the task. By engaging in a normative task, the agent is viewing it as a task where success is reachable because of her contribution. Success is "in her hands". In a sense, only beings capable of acting by the conception they have of themselves could be normatively engaged in this way. Therefore, we can consider that epistemic agents are just a subset of cognitive beings that act in virtue of their attention to the normative demands that are characteristic of a certain kind of achievement.
Conclusion
An issue that has acquired a certain prevalence within the virtue-theoretic tradition is the possibility of attributing responsibility to epistemic agents.
Motivationalism, for instance, has been defended as a specific form of responsibilism because it seeks to explain how we are responsible for the knowledge we acquire in terms of the right motives of the epistemic subjects. This is a far-reaching issue that we cannot address here. It will suffice to indicate that our model allows for an attribution of responsibility for knowing to an epistemic subject. We explain this notion of responsibility as attributability by referring to the epistemic engagement of the agent. But this does not settle whether we are allowed to take other subjects to be morally liable to our epistemic demands; that is an issue primarily relevant for an ethics of belief and is not part of an explanation of the constitutive nature of knowledge. To sum up: we have shown that the normativity of knowledge rests on the contribution of the epistemic agent to the fulfilment of certain tasks. Such a contribution is epistemically significant when the aptness of beliefs is due to the exercise of the agent's cognitive faculties and the agent is able to take a perspective on the aptness of her own beliefs and the circumstances of knowing. In this way she becomes engaged in an epistemic task. Motivations are not here regarded as constitutive of the epistemic normativity, although they could play some role in the psychological states of the agent. Our constitutive view, therefore, takes into account both the objective process of attaining knowledge (understood as true belief due to the agent's competencies) as well as the subjective engagement of the agent in a normatively constrained task. Obviously, our brief remarks about the conditions necessary to become a full epistemic agent, conditions of cognitive integration and epistemic autonomy, need to be supplemented if we want to provide a full defence of our model on epistemic normativity. 13
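For readers who want the contrast at the heart of the paper at a glance, the two schemas (M) and (C) stated above can be compressed into the following schematic notation; the predicate symbols are our own shorthand for the phrases used in the text, not a formalism proposed by the authors:
\[
\textbf{(M)}\quad \mathrm{Credit}(S,b) \leftrightarrow \mathrm{True}(b)\ \wedge\ \mathrm{Because}\bigl(\mathrm{True}(b),\ \mathrm{EpistemicMotive}(S)\bigr)
\]
\[
\textbf{(C)}\quad \mathrm{Credit}(S,b) \leftrightarrow \mathrm{True}(b)\ \wedge\ \mathrm{Because}\bigl(\mathrm{True}(b),\ \mathrm{Engaged}(S,D)\bigr)
\]
Here b is S's act of believing, "Because" abbreviates the "due to the fact that" clause of the two schemas, and D stands for an appropriately constituted epistemic normative domain; the only difference between the two conditions is which feature of the agent the success is credited to.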
2019-05-03T13:04:57.504Z
2011-12-13T00:00:00.000
{ "year": 2011, "sha1": "147b4492cc757cb9e284683e21e31d9631d6d2ba", "oa_license": "CCBY", "oa_url": "http://critica.filosoficas.unam.mx/index.php/critica/article/download/821/791", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "6b61d5c2f6dc56ddeb93d89b7883559625ba3080", "s2fieldsofstudy": [ "Philosophy" ], "extfieldsofstudy": [ "Psychology" ] }
55885006
pes2o/s2orc
v3-fos-license
ENHANCING CARBON SEQUESTRATION POTENTIAL OF URBAN GREEN SPACES THROUGH TRANSFER OF DEVELOPMENT RIGHTS STRATEGY
Non-urbanized areas (NUAs) play an important role in reducing the effects of climate change by providing both carbon storage and sequestration. Despite their importance, they are endangered by urbanization pressures and often neglected by local spatial planning practices. On the contrary, NUAs should be protected and designed as new public urban green spaces to enhance the amount of vegetation of different land cover types and therefore their potential capacity for carbon sequestration. This study proposes a three-step method for enhancing the carbon sequestration of NUAs through the implementation of new public green spaces, while ensuring the economic feasibility of the related urban development, based on a Transfer of Development Rights program.
Introduction
Cities play a key role in the rise of greenhouse gas emissions, which is considered to be one of the main causes of global warming and climate change. An important role in reducing the effects of climate change can be played by Non-Urbanized Areas (NUAs). They are fragments of woods, shrubs, herbaceous vegetation fields, abandoned farmlands, and other types of urban green spaces with amounts of vegetation that represent the last remnants of nature scattered within urban areas [1]. NUAs produce multiple urban ecosystem services, mainly carbon storage and sequestration [2], regulation of microclimate and mitigation of urban heat islands, and cultural and recreational opportunities. Despite their importance in contributing to adaptation to climate change risks in cities [1], NUAs suffer from surrounding urbanization pressures and are often neglected by local spatial planning practices, especially in Southern European urban contexts [3]. On the contrary, NUAs should be protected and designed as new public urban green spaces so as to enhance their ecosystemic potential by increasing the amount of vegetation cover. Nevertheless, the implementation of new public urban green spaces has to deal with land acquisition of plots of NUAs belonging to private owners. In practice, public acquisition of land is often economically unsustainable for local administrations and faces resistance from private landowners [4]. The issue of economic feasibility for managing public intervention and providing accessible public urban green spaces could be addressed through incentive-based approaches, including the Transfer of Development Rights [5]. A Transfer of Development Rights (TDR) programme defines an area to be protected from development and one where development will be allowed to occur. Landowners can transfer the rights from the area to be protected (sending area) to the area to be developed (receiving area). As a consequence, the parcel from which the development rights are being transferred can no longer be developed, or can be developed only in a limited way. In return, landowners are compensated for regulatory restrictions that reduce their property values. In this direction, the paper proposes a three-step method for enhancing the carbon sequestration of NUAs through the implementation of new public green spaces, while ensuring the economic feasibility of the related urban development, based on a TDR program.
Materials and methods
The presented case study is the metropolitan area of Catania, the largest in Sicily, which experienced during the past five decades an impressive urban growth characterized by high urban density and a severe lack of public green spaces. In the course of forty years, the total population of the 27 municipalities included in the metropolitan area grew by more than 27%. In 2008, approx. 60% of its total population lived outside the main city, indicating progressive population expansion beyond the city center. The methodology is structured as follows:
First step - Selecting and designing the NUAs patches compound
The patches of NUAs included in the urban fabric are selected as appropriate sites for arranging new public green spaces according to the following criteria [6]:
- their current land use is abandoned farmland or uncultivated land characterized by the presence of trees, shrubs or seasonal herbaceous vegetation;
- they are mainly non-built areas with high proximity to other residential areas or public transport nodes;
- the ownership of the land is mainly private;
- they have an appropriate location and shape for defining the city green infrastructure and enhancing the endowment of other key public services.
The selected NUAs patches identify a compound within which 'Development Zones' and an 'Urban green Zone' are identified. Development Zones represent the areas where urban development will occur, while the Urban green Zone represents the areas for new public green spaces (Fig. 1).
Second step - Assessing economic feasibility of urban development
The local TDR program assigns to each compound of NUAs the status of sending area, while Development Zones (as subsets of the compound) represent both sending and receiving areas (Fig. 1). The total amount of development rights is assigned to the private land parcels included within this compound and transferred to the Development Zones designated for new urban development. Within the Development Zones, the transferred development rights will be added to the ones generated by the Development Zones themselves. As a result, new buildings can be built and the majority of NUAs parcels (the Urban green Zone) are transferred to public property. The amount of development rights to be assigned to compound land parcels and the size of the Development Zones are identified according to an Economic feasibility assessment Tool (Tab. 1). In order to ensure the economic feasibility of the urban transformation, the proposed tool quantifies the equitable development rights to be assigned to private land parcels in terms of FAR, intended as volume of buildings (m³) over the land parcel unit (= 1.00 m²). The percentage of Urban green Zone (G%) takes into account the economic rates of land parcels to be left (RLP), total development cost rates (KT), and the final economic rate of the built-up property (RP). According to this tool, the overall urban development can be considered a feasible investment project when the net profit ratio for private developers (P%) is more than 25%, which is considered by the Italian Urban Developers National Association a reasonable percentage of profit. The net profit ratio is expressed in terms of the total amount of profit (P) compared to the final economic value of the development rights (VDR) and represents the final economic earnings for developers compared to the total revenues obtained after selling the new built-up properties.
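Since Tab. 1 itself is not reproduced in the text, the following Python sketch only illustrates the kind of bookkeeping the Economic feasibility assessment Tool performs; the variable names and the relations between FAR, costs and revenues are assumptions made for the example, not the authors' exact formulas.

```python
# Minimal sketch of the feasibility check described above (P% >= 25%).
# The revenue/cost relations below are illustrative assumptions, not the authors' tool.

def feasibility(far, compound_area_m2, green_share,
                sale_rate_per_m3, dev_cost_per_m3, land_rate_per_m2):
    """Return (net profit ratio P%, feasibility flag) for a candidate FAR.

    far               -- development rights granted to the compound (m3 of buildings per m2 of land)
    green_share       -- fraction of the compound left as Urban green Zone (G%)
    sale_rate_per_m3  -- final economic rate of the built-up property (RP)
    dev_cost_per_m3   -- total development cost rate (KT)
    land_rate_per_m2  -- economic rate of the land parcels to be left (RLP)
    """
    volume = far * compound_area_m2                       # buildable volume (m3)
    vdr = volume * sale_rate_per_m3                       # value of development rights (VDR)
    costs = (volume * dev_cost_per_m3                     # construction and soft costs
             + green_share * compound_area_m2 * land_rate_per_m2)  # land ceded to the public
    profit = vdr - costs
    p_ratio = profit / vdr                                # P% = P / VDR
    return p_ratio, p_ratio >= 0.25                       # threshold used in the paper

# Made-up figures for a 130,000 m2 compound with FAR = 0.40 m3/m2.
p, ok = feasibility(0.40, 130_000, green_share=0.70,
                    sale_rate_per_m3=450.0, dev_cost_per_m3=320.0,
                    land_rate_per_m2=8.0)
print(f"P% = {p:.2%}, feasible: {ok}")
```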
Third step -Increasing vegetation cover for carbon sequestration enhancement As a consequence of new urban development within the Development Zones, some vegetation is substituted with impervious land covers.This implies a loss of ecosystem services and particularly loss of carbon sequestration potential produced by the lost vegetation.On the other side, the new public acquired Urban green Zone allows to implement a strategy aimed at increasing the vegetation cover within the NUAs patches.Thus, this step of the method takes into account the potential of the public acquired NUAs patches to both store and sequestrate carbon.For the purpose of this study, a characterization of NUAs by different land covers was required in order to assess the contribution of different land cover types to carbon uptake.This was conducted by a Land Cover Analysis [7] that allowed to visually identify and digitize six land cover types by interpreting available regional high resolution (0.25 m) orthophotos (Fig. 2).The contribution of NUAs in terms of carbon uptake has been evaluated through the application of a carbon sequestration rate (kgCO 2 /m 2 y) for each vegetation cover.Potentials of carbon sequestration and storage values for herbaceous vegetation, shrubs and trees have been collected from literature [8], [9], [10] and respectively applied with an average value to the three selected land cover types (Tab.2).In order to compensate loss of carbon sequestration potential due to urban development within the Development Zones, and enhance the potential of the whole compound, a strategy of new trees plantation is proposed to increase the overall vegetation cover.Moreover, new tree canopies allow to provide supplementary ecosystem services such as microclimate regulation and urban heat islands reduction through their shadow effect.Evaluation of enhancement in terms of carbon sequestration has been conducted before and after the urban development based on TDR program. Results and discussion The method has been tested within a municipality at the heart of the Catania metropolitan area.According to the criteria for selecting an appropriate site for arranging new public green spaces (see First step of the method), a NUAs patches compound has been identified.Economic feasibility assessment Tool allowed to verify the minimum net profit ratio for the project investment (P%=25.52≥ 25%) quantifying the amount of development rights in terms of FAR= 0,40 m 3 /m 2 (Tab. 1) and defining the size of the three Development Zones (Fig. 3 Zone to be acquired for public property (Fig. 3, bold green boundary).Land cover analysis has been conducted within this compound and allowed to identify, through visual interpretation and manual extraction, five different current land cover types (Fig. 3).Results show that Herbaceous vegetation and Shrubs covers are the most prevailing land cover types (respectively 58% and 31%).Trees and Bare Soils cover about 7% and 4% while Build-ings represent less than 0.5 % of the total land parcels area (Tab.3).According to the average values reported in Tab. 2, Herbaceous vegetation covers provide the most relevant carbon sequestration with almost 60,000 kgCO 2 /m 2 y.After the urban development, a total area of 18,237.04m2 is built up (Fig. 3, within the red boundaries) at the expenses of Trees, Herbaceous vegetation and Shrubs (Tab.4) At the same time, the plantation of new trees (Fig. 
3, within the bold green boundary) allows the addition of 52,187.29 m² of tree cover and provides a supplementary carbon sequestration of 53,231.04 kgCO₂/m²y. Summing up this contribution with the existing herbaceous vegetation and shrubs potentials, a final carbon sequestration potential of 125,762.22 kgCO₂/m²y can be provided. According to this new layout, the total carbon sequestration potential enhancement is more than 30% when compared to the current layout (Tab. 4).
Despite the relevance of these results, the proposed method and TDR scenario show some limitations. Firstly, the methodology does not take into account the carbon emission potential of new buildings within the compound. Moreover, the net profit ratio (P%) as checked by the Economic feasibility assessment tool depends mainly on the amount of development rights (FAR) to be granted to private landowners, while the percentage of urban green area (G%) for public acquisition has less influence on the economic feasibility of urban development due to its low economic value. This implies that the economic profit of the project is strictly related to the amount of resulting built-up areas, which will be responsible for new additional carbon emissions. Secondly, the proposed tree plantation strategy represents a scenario aimed at maximizing carbon sequestration and other ecosystem services, but could appear highly theoretical. Indeed, total tree coverage is not realistic, because designing urban green spaces implies identifying different zones for leisure such as lawns, pathways, bike lanes, water bodies, and playgrounds. As an alternative, a more suitable strategy could be a mixed plantation of trees and shrubs in order to compensate the loss of sequestration potential due to leisure zones (to be considered as bare soil and herbaceous vegetation/grass).
Conclusions
This study presents a possible way to implement new public urban green spaces at reduced costs for local administrations, through the allowance to private landowners of a very limited amount of new development and soil consumption. This represents a reasonable trade-off for the free-of-charge transfer to public property of their land parcels designated for green spaces and the opportunity to enhance their carbon sequestration potential. Moreover, applying the Transfer of Development Rights strategy would help to develop new public urban green spaces throughout the city, implement climate change adaptation strategies, create a more liveable and healthy urban environment and obtain economic benefits for landowners and developers.
Figure 1: Urban green Zone and Development Zones within a NUAs patches compound.
Table 1: Economic feasibility assessment Tool.
Table 2: Carbon sequestration rates for Trees, Shrubs and Herbaceous vegetation land cover types.
Table 3: Current land cover layout and its carbon sequestration potentials.
Table 4: Land cover types areas, percentages and carbon sequestration potential.
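As a compact recap of the third step's arithmetic, here is a minimal Python sketch of the carbon-sequestration bookkeeping: multiply each land-cover area by its sequestration rate, rebuild the layout after the TDR-based development, and compare the totals. The per-cover rates (Tab. 2 is not reproduced in the text) and the area breakdown below are placeholders, not the paper's data.

```python
# Sketch of the carbon-sequestration bookkeeping of the third step.
# Rates (kgCO2 per m2 per year) and areas are placeholders, not Tab. 2/3 data.

RATES = {"trees": 1.0, "shrubs": 0.6, "herbaceous": 0.5, "bare_soil": 0.0, "buildings": 0.0}

def total_sequestration(cover_m2):
    """Sum of area * rate over all land-cover types, in kgCO2 per year."""
    return sum(area * RATES[cover] for cover, area in cover_m2.items())

# Illustrative current layout of the compound (areas in m2).
current = {"herbaceous": 118_000, "shrubs": 63_000, "trees": 14_000,
           "bare_soil": 8_000, "buildings": 1_000}

# After the TDR-based development: part of the vegetation is built over,
# new tree canopy is planted in the publicly acquired Urban green Zone.
after = dict(current)
after["herbaceous"] -= 12_000          # assumed share of the built-up footprint
after["shrubs"] -= 5_000
after["trees"] += 52_000 - 1_200       # new plantation minus trees lost to building
after["buildings"] += 18_200

before_kg, after_kg = total_sequestration(current), total_sequestration(after)
print(f"before: {before_kg:,.0f} kgCO2/y  after: {after_kg:,.0f} kgCO2/y  "
      f"enhancement: {(after_kg - before_kg) / before_kg:.0%}")
```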
2018-12-15T00:51:53.278Z
2017-09-29T00:00:00.000
{ "year": 2017, "sha1": "3295c966ae36999e118c11e7bc10939f1bee6041", "oa_license": "CCBY", "oa_url": "https://doi.org/10.18509/agb.2018.02", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "3295c966ae36999e118c11e7bc10939f1bee6041", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geography" ] }
7099622
pes2o/s2orc
v3-fos-license
Living systems are dynamically stable by computing themselves at the quantum level Abstract: The smallest details of living systems are molecular devices that operate between the classical and quantum levels, i.e. between the potential dimension (microscale) and the actual three-dimensional space (macroscale). They realize non-demolition quantum measurements in which time appears as a mesoscale dimension separating contradictory statements in the course of actualization. These smaller devices form larger devices (macromolecular complexes), up to living body. The quantum device possesses its own potential internal quantum state (IQS), which is maintained for prolonged time via error-correction being a reflection over this state. Decoherence-free IQS can exhibit itself by a creative generation of iteration limits in the real world. To avoid a collapse of the quantum information in the process of correcting errors, it is possible to make a partial measurement that extracts only the error-information and leaves the encoded state untouched. In natural quantum computers, which are living systems, the error-correction is internal. It is a result of reflection, given as a sort of a subjective process allotting optimal limits of iteration. The IQS resembles the properties of a quasi-particle, which interacts with the surround, applying decoherence commands to it. In this framework, enzymes are molecular automata of the extremal quantum computer, the set of which maintains stable highly ordered coherent state, and genome represents a concatenation of error-correcting codes into a single reflective set. Biological systems, being autopoietic in physical space, control quantum measurements in the physical universe. The biological evolution is really a functional evolution of measurement constraints in which limits of iteration are established possessing criteria of perfection and having selective values. Quantum Measurement is a Non-Local Actualization Quantum measurement process percolates micro and macroscales.In quantum measurements, all interactions are mediated by a holistic reflective factor (an observer) measuring the interaction.This factor addresses to a superposition of all possible states of microsystem, realizing a choice between the potential states, which occurs via macroscopic measuring device, embedded within the system as its actual part ("body"). As a result of quantum measurement, a new actualized macrostate appears non-locally evolving from the previous macrostate, since its points are not defined before the quantum measurement.This means that quantum measurement includes a reflection to the field non-determined beforehand, i.e. addressing the potential field at the microlevel.Local assembly takes place when the field to which reflection is realized is already defined in the actual three-dimensional space, i.e., when a device is external to the assembling system.It corresponds to a temporal evolution of the system.When the device is embedded into the system that is measured, the positions of all points are just rearranged and singled out in the course of measurement.This is a poiesis, in which the temporal process itself is established.The same point taken before and after measurement becomes split to the image and its reflection.Quantum measurement can be expressed as a process generating a contradiction and representing a logical jump via such a contradiction.It is described as a logical procedure of inducing and addressing a fixed point [1]. 
An emergent system is always relevant to resolving a paradox or a logical jump.Solving it and obtaining a reflective domain is used as a new transition rule.Resolution of the paradox perpetually proceeds along time, through the flow of which micro and macro levels are connected and any solution is relative.This type of model can be illustrated as an iterative algorithm, using a dynamically changing contraction mapping as the interface of a state and a transition rule [2].It describes a nonlocal structural unfolding where contradictory consequent realizations (quantum reductions) are separated within the internal time-space. During actualization, an unaccountably infinite number of assembling states unfolds into regular series of spatial events with basically simple and reproducible structures.The selection of an appropriate solution of wave function reduction should satisfy certain limit conditions.Actualization can be viewed as a limit of the recursive process originating in quantum uncertainty.It generates an iterative process of reflection to this uncertainty via allotting it by a certain value.It corresponds to a non-local assembly, which is realized as a reduction of the uncertainty in quantum measurement.The measuring device is a part of the reflective system included in it as an embedding (body).Reduction from the field of potentialities assumes the existence of alternative realizations that represent different projections into real numbers.Quantum complementarity arises as a set of these different projections that cannot exist simultaneously, where contradictory states generate the appearance of uncertainties in the coordinate/impulse or energy/time observables. Separation of Contradictory Statements Between Scales The structure of the physical space-time follows from the basic reflective structure appearing in the actualization process.It includes a dimension of an infinite extension V (from vacuum), which consists of superpositioned states described by imaginary numbers.It is actualized via the reduction of potentialities forming the classical three-dimensional space (3D), in which contradictory realizations are separated as series in a temporal process (T).On the edge between V and 3D+T, the measuring device R (from reduction), a subset of 3D+T ("body") is operating. Modern models of space-time in the superstring theory explicitly follow this approach.According to Randall and Sundrum [3], an extra dimension of the infinite extent supplements three spatial dimensions we observe.The observed 3D space is actually the evolution in time of a three-brane moving through an ambient space-time of a higher dimension (the infinitely large space-time).In such 'brane-world' scenarios, the particle physics is confined to the brane but the particles can interact with the ambient space-time through gravitational interactions.When all extra spatial dimensions of the ambient space-time are compact (being a subset of the potential dimension of an infinite extension), these interactions can be so weak as to have escaped detection by experiments thus far.Thus, our Universe is viewed as a domain wall in this five-dimensional space. 
The fifth dimension is not a single coordinate.It is the infinite dimension where the 3D and the other (compactized) dimensions are embedded.The compactized dimensions represent a subset within this potential set (vacuum) forming a mesoscale border between the quantum vacuum and the classical 3D.The actual structure of the 3D+T is deduced from its suitability for the presence of observer.With more or less than one time dimension, the partial differential equations of nature would lack the hyperbolicity property that enables observers to make predictions.In a space with more than three dimensions, there can be no traditional atoms and no space structures.A space with less than three dimensions allows no gravitational force and is too simple and barren to contain observers [4]. The relation of the infinite and the finite is always a choice of an alternative out of a set, and parameters of this choice cannot be deduced from initial conditions.Thus, a statement is needed narrowing the number of combinations of the values.This statement is an internal arbitrary signification realized by a device through the process of measurement.It is a reflection, which parameters are determined by a possibility of the reflection itself via a specific possible construction of the measuring device operating as quantum computer, particularly by introducing of its temporal parameter (the time of observation). When contradictory statements appearing during actualization are separated by time intervals, we sink from the mathematical into the physical world and face infinite regression avoiding simultaneous existence of opposite definitions.A separation (selection) of contradictory states occurs via measurement process.The temporal process represents as series of computable events, but it is not our computation (by which we count) but an external natural computation (which can be counted by us, i.e. represented as an objective dependence of the spatial coordinates on the time coordinate, i.e. as the physical law). Maintenance of Hierarchical Space-Time Structure -Computation Any measuring (computing) device uses an extra dimension to organize structures in the 3D space.Classical computers use electromagnetic field, which is a compactized dimension in the framework of Kaluza-Klein approach, to glue together separate points.Quantum computers will use a total extra dimension of the infinite extension (vacuum) for binding separate points.As a result of measurement (computation) the observed space-time is organized, and the 3D corresponds to the optimal variant for embedding of measuring devices ("bodies") that compute (organize) the V-reality into the actual reality (3D).According to modern views, quantum computers in order to operate without errors should maintain decoherence-free subspaces via implication of error-correcting codes.The living time of decoherence free subspace is determined in frames of Heisenberg's energy-time uncertainty ratio [5].It determines, e.g., the turnover rates of enzymes and conformational movements of other biomacromolecules.The continuous measurement holds a decoherence-free state via the quantum Zeno-effect between levels. 
Decoherence-free subspaces themselves (without error-correction) are stable to a perturbation in time to the first order.They fit ideally for the quantum memory applications.For the quantum computation, however, the stability result does not extend beyond the first order.To perform the robust quantum computation in decoherence free subspaces, they must be supplemented with the quantum error correcting codes [6].As a result, the power law appears in the system which is introduced via hyperlinks (Gödel numbers) in the set of real numbers. Quantum computer can be protected against decoherence for an arbitrary length of time, provided a certain threshold error rate can be achieved.Encoding the state of quantum computer for error correction has the effect of making its operating states macroscopically indistinguishable: the more "stable" the code is, the more errors it can correct in each pass.Any two possible operating states would have to be macroscopically indistinguishable (to a degree), from the point of view of the values of averages of macroscopic variables.Quantum computers can be made stable by encoding them. The encoded state would appear just like a state that has no information at all.For the nondegenerate codes the expectation values of macroscopic observables in such encoded states are identical to those that would be obtained for a totally mixed state of a maximum entropy, where again the degree of indistinguishability increases with the number of errors corrected by code.The use of encoded states prevents one from being able to construct coherent superpositions of states that look very different macroscopically, which are the special states characterized by the large decoherence rates seen in.Instead, the more errors the code can correct, the more possible states of the system which one might construct (and superimpose) look alike [7]. The concatenated codes involve re-encoding already encoded bits.This process reduces the effective error rate at each level, with the final accuracy being dependent on how many levels of the hierarchy are used.It is not possible to clone the unknown quantum states.The act of measuring would destroy any quantum coherence of the state.It is possible to exploit the entangled states supported by the additional bits.To avoid a collapse of the quantum information in the process of correcting errors, it is possible to make a partial measurement that extracts only the error information and leaves the encoded state untouched. Quantum error-correcting methods protect information in memory.The concatenation involves the applying this combination of techniques hierarchically [8].Engineering the environment (reservoir), and therefore decoherence, may be a way to avoid complex error-correcting schemes.Decoherence rate scales with the square of a quantity describing the amplitude of the superposition states [9].The best solution of the problem of reservoir is a squeezed reservoir, where all initial states asymptotically relax to a squeezed state of motion [10]. Structure of Computation Device Leibniz [11] defined living systems as the automata exceeding infinitely all artificial automata.The machines of nature, i.e. living bodies, are machines up to their smallest details ('Monadology', § 64).In modern science it is realized that the smallest details of living systems are molecular automata [12] that operate between the classical (3D) and the quantum levels, i.e. between the potential dimension (microscale) and the actual 3D space (macroscale), i.e. 
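The claim above that concatenation reduces the effective error rate at each level can be illustrated with the standard threshold argument for a code that corrects any single error: an encoded block fails only if two or more of its carriers fail. The sketch below uses the classical 3-bit repetition code with majority vote purely as a stand-in for the quantum codes discussed in the text; the recursion and the threshold value are properties of that toy code, not of any specific biological error-correcting scheme.

```python
# Toy illustration of the concatenation argument: a code correcting any single
# error fails only when two or more of its carriers fail, so each level of the
# hierarchy maps the error rate p to roughly c * p**2.  The classical 3-bit
# repetition code with majority vote has per-level failure probability
# 3 p^2 (1 - p) + p^3, with threshold p = 1/2.

def level_error(p):
    """Failure probability of one majority-voted 3-bit block."""
    return 3 * p**2 * (1 - p) + p**3

def effective_error(p, levels):
    """Error rate after `levels` rounds of concatenation."""
    for _ in range(levels):
        p = level_error(p)
    return p

for p0 in (0.01, 0.30, 0.60):                # below, near and above threshold
    trace = [effective_error(p0, k) for k in range(5)]
    print(p0, [f"{r:.2e}" for r in trace])
```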
they are mesoscale quantum devices.These smaller devices form larger devices (macromolecular complexes), up to the living body. The quantum device possesses its own potential internal quantum state (IQS), which is maintained for prolonged time via a reflective error-correction.It is a part representing a superposition of the potential contradictory reality (vacuum), i.e. it belongs to a microscale.The error-correction is a reflection over this state.It is concatenated within the 3D space as a molecular computer (MC).IQS cannot be cloned but it can exhibit itself by a creative generation of limits of iteration in the 3D world.Superpositions can exist only in quantum systems that are free from the external influences.Thus the external influence should be restricted only to error correction without disturbing the IQS.A decay from a superposition to a statistical mixture of states is called decoherence.The rates of decoherence scale exponentially with the size of superposition. In artificial quantum computers, which principal basis is under extensive theoretical consideration at present time, error-correction is allotted by human constructing this device.In natural quantum computers, which are living systems, the error-correction is internal.It is a result of reflection, given as an estimation of the "state of affairs", i.e., as a sort of an internal process allotting optimal limits of iteration.The IQS by its internal "decision" causes decoherence being coherence-free itself.The IQS is a decoherence-free subspace, which can apply decoherence to its envelope (body).This decoherence should be error-corrected.The decoherence-free state is maintained by the error-correcting code from the quantum computer.Error-correcting code is concatenated by the encoding in genome. The internal quantum state resembles the properties of a quasi-particle, which interacts with the surround, applying decoherence commands to it.For this it should possess a certain curvature possibly of the order of Planck's mass value.The IQS is maintained by the program of error-correcting codes.In this framework, enzymes are molecular automata of the extremal quantum computer, the set of which maintains highly ordered coherent state, and genome represents a concatenation of errorcorrecting codes into a single reflective set. The MC operates with molecular words (DNA, RNA) having definite addresses.The MC functions if the operator acts as an enzyme.The set of operators forms the program of calculation, where operators collide by the Brownian movement.A program can be rearranged in the course of computation.The long-term memory of the MC is based on DNA, the short-term memory -on RNA [13]. Signals (transmitted by bosonic fields) of molecular structures can displace a probability distribution in the IQS.Molecular computer is an input and output device of the IQS.A search of address is realized by the directed mechanical transition [14].Thus, the molecular computer maintains the IQS and governs its operation.The entering (input) into the IQS should be realized by the code of a minimal influence on the system (i.e., by the error-correcting code recognizing only a wrong decision).The code should be optimal also on the output [15]. 
The other, complementary to body, projection of the IQS is a constructing of space-time image.It is possible only when IQS reaches very high capacity for decision-making.In Freudian terms, it includes language (superego) and ego (the reflection of IQS on itself by means of superego).How are the different IQSs linked, besides via exhibiting their objectivation and signification?This question, stated already by Pythagoras in connection with the harmony of the observed world, has no explicit answer in the scientific paradigm.The monads have no windows, according to Leibniz, but they are synchronized via a harmonic objectivation based on the uniformity of fundamental constants.This synchronization is achieved at certain values of physical constants, which are substantiated as appearing to be a unique solution within the reflective loop, corresponding to its stable selfmaintenance [16,17].Physical laws operating with fundamental constants represent a basis of the natural computation.They are optimized within a reflective process in such a way that allows the appearance of higher levels of reflection, including such phenomena as free will and consciousness.Biological systems, being autopoietic in physical space [18] control what, when and where measurements are made on the physical universe [19].The biological evolution is really a functional evolution of measurement constraints, from cells to brains [19]. Exhibition of Internal Activity -Iteration and Limits How can we distinguish a subjective internal process from the external non-generic phenomenon?There should be something in the generated structure, which really is a limit of iteration that exhibits an internal process.Any internal (subjective) choice exhibits a structure of the semantic paradox, arising to Epimenides [16].The paradox results from mixing the notion of indicating an element with the act of indicating a set consisting of elements.According to Kitabayashi et al. [20] it is possible to formalize this approach via introducing the notion of fixed point.The fixed point x (the point of coincidence of the image and its reflection) of the operation of determination of A and A -, denoted by F can be expressed as an infinite recursion, x = F(F(F( …F(x)… ))), by mapping x = F(x) onto x = F(x). It can be considered as a point in a two dimensional space.The operation of F is the contraction in a two-dimensional domain, indicating either A or A -. If faithfulness of A is denoted by m, the invariance of faithfulness with respect to contraction is expressed as f(m)*m = constant, where m is the value of faithfulness and f(m) is the probability of m.If distribution of f(m) does not have an off-set peak, m directly means the rank.Then f(m)*m=c represents what is called the Zipf's law, i.e. log (f(m)) = -m+c (for details see [20]). The similar formula was introduced by Mandelbrot [21] for the fractal structure, actually fractal is an iteration arising from the set of complex numbers by squaring them, i.e. by reflecting them to the two-dimensional space.An observer cannot detect the Zipf's law until some tool appears, which is a hyperlink between the other objects.It allows realization of the combinatorial game between the objects connected by the hyperlink.The third dimension is a reflection over this 2D domain.It appears if we estimate the actualization domain (brane) for the error-correction.This is possible only through the introduction of the internal time of observer.As a result, the 3D+T structure appears. 
According to Zipf's law, the probability of occurrence of words or other items starts high and tapers off, so that a few occur very often while many others occur rarely. The distribution of words is often described by an inverse exponential like e^(-an). Power laws can be indicative of self-organized criticality. The linear iteration with the power law leads to the golden ratio limit. The golden ratio appears as a threshold for establishing a connection between local and global periods of the word. The local period at any position in the word is defined as the shortest repetition (a square) 'centered' in that position. The shortest repetition from that position is described by the golden ratio [22]. The power law and the fractal structure appear in systems exhibiting quantum computation as a consequence of the reflective control.
In biological morphogenesis, which we observe in the actual 3D+T space, the preceding motif unit is transferred into the subsequent one by a certain fixed similarity transformation g, i.e., S_{k+1} = g * S_k, where g may be linear, cyclic, Möbius or fractal. If we have the generating transformation g unfolding m times on a motif unit S_k, a component S_{k+m} is obtained, and a group of transformations G will contain the elements g^0, g^1, g^2, …, g^m. Actually, the concrete meaning of g is determined by the internal timing within the reflective loop. The non-Euclidean effects correspond to a time rescaling within the system. Time appears as a tool for the reduction of uncertainty in quantum measurements, i.e. as a computation-generating tool. During long times of observation, coherent quanta coexist in the whole structure of the device, providing its precise operation as a whole entity [23].
The geometry, being a set of invariant rules of transformation, is a finite representation of the measurement result. The domains (growing aperiodically in the general case) are hierarchically embedded into one another and function at every level with different clock time periods. The limit of actualization fits the optimality of the structure being actualized; thus it provides the existence of 'fundamental constants', e.g., the golden ratio, which are the most optimal solutions for design. It was proposed that the 'golden wurf', being the limit of the wurfs of three sequential stretches whose lengths are given by three neighboring numbers of the Fibonacci series, i.e., the constant characteristic for the actualized triadic structure, is an even more general characteristic for non-Euclidean systems than the golden ratio [24]. These coordinate scales can be transformed by simple recursive rules via rescaling [25].
In the internal evolutionary process, which includes the formation of self-referential loops, the evolving state is determined by the two (in the simplest case) contradictory values of the system separated by a time interval, and the value in future time acquired after addressing them. Addressing the fixed point means that the two contradictory statements, taken as sequential values separated by a time interval and equally probable, are composed to get the third statement. Thus the next statement (quantitatively modelled as having a corresponding value) is composed from the two previous statements when they are memorised within the reflective loop: F_{n+2} = F_n + F_{n+1}. This will lead to important evolutionary consequences: in the transformation of a non-local incursive system to a local recursive system, certain recursive limits will appear as fundamental canons of perfection formed as memorisation within reflective loops.
The Fibonacci series represents a recurrent sequence of values (at n = 0, 1, 2, 3, …) where n may correspond to the values at discrete times of generating and addressing the fixed point. In many cases of biological morphogenesis the following configuration is realised as a limit (n → ∞) of the infinite recursion:
F_n / F_{n+1} → 0.618… (the golden section)
Other useful series appear when three neighbouring elements F_n, F_{n+1}, F_{n+2} of the Fibonacci series are taken as the lengths of three sequential segments (as appearing at the sequential past (t-1), present (t) and future (t+1) times). In this case we get the wurf W:
W → 1.309… (the golden wurf)
The value of the golden wurf, as a limit of the recursive process, is the wurf of three sequential segments with the values 1, Φ and Φ², i.e. it follows from the memorization of limits of recursion in the Fibonacci series [24,26]. The golden ratio and the golden wurf constants represent fundamental values of infinite recursion when the next element is formed by the operation on the two previous sequentially appearing elements memorized within the reflective loop. They always occur in morphogenetic patterns appearing as limits of the infinite process of recursive embedding arising from reflective action (internal quantum measurement).
The neighboring members of the Fibonacci series are linked by the relation F_n * F_{n+2} = F²_{n+1} + (-1)^n. According to Petukhov [26], a deviation from this symmetrical relation can be described as the incorporation of a defect ∆:
F_n * F_{n+2} = F²_{n+1} + (-1)^n * ∆
This deviation (dissymmetrisation) can generate a higher-order symmetry at the next step of evolution, corresponding to a sequence of canons in evolution.
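A short numerical check of the two limits just mentioned: consecutive Fibonacci ratios converge to 0.618… and the wurf of three consecutive Fibonacci segments converges to 1.309…. The explicit wurf formula used below, W(a, b, c) = (a + b)(b + c) / (b(a + b + c)), is the form commonly attributed to Petukhov and should be treated as an assumption of this sketch.

```python
# Numeric check of the recursion limits: consecutive Fibonacci ratios tend to
# 0.618... (golden section); the wurf of three consecutive Fibonacci segments
# tends to 1.309... (golden wurf).  The wurf of segments (a, b, c) is taken as
#     W = (a + b) * (b + c) / (b * (a + b + c)),
# a definitional assumption of this sketch.

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def wurf(a, b, c):
    return (a + b) * (b + c) / (b * (a + b + c))

for n in (5, 10, 20):
    a, b, c = fib(n), fib(n + 1), fib(n + 2)
    print(n, round(a / b, 6), round(wurf(a, b, c), 6))

phi = (1 + 5 ** 0.5) / 2
print("limits:", round(1 / phi, 6), round(phi ** 2 / 2, 6))
```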
In internal evolutionary process, which includes formation of self-referential loops, the evolving state is determined by the two (in the simplest case) contradictory values of the system separated by time interval, and the value in time future acquired after addressing them.Addressing the fixed point means that the two contradictory statements taken as sequential values separated by time interval and equally probable are composed to get the third statement.Thus the next statement (quantitatively modelled as having correspondent value) is composed from the two previous statements when they are memorised within the reflective loop: F n+2 = F n + F n+1 .This will lead to important evolutionary consequences: in the transformation of a non-local incursive system to a local recursive system, certain recursive limits will appear as fundamental canons of perfection formed as memorisation within reflective loops. The Fibonacci series represent a recurrent consequence of values (at n = 0, 1, 2, 3, …) where n may correspond to the values at discrete times of generating and addressing the fixed point.In many cases of biological morphogenesis the following configurations are realised as limits (n → ∞) of infinite recursion: = 0.618… (the golden section) Other useful series appear when three neighbouring elements F n , F n+1 , F n+2 of the Fibonacci series are taken as lengths of three sequential segments (as appeared in the sequential past (t-1), present (t) and future (t+1) times).In this case we get the wurf W: 309… (the golden wurf) The value of golden wurf as a limit of the recursive process will have the wurf of three sequential segments with the values 1, Φ and Φ 2 , i.e. it follows from the memorization of limits of recursion in the Fibonacci series [24,26].The golden ratio and the golden wurf constants represent fundamental values of infinite recursion when the next element is formed by the operation on the two previous sequentially appearing elements memorized within the reflective loop.They always occur in morphogenetic patters appearing as limits of infinite process of recursive embedding arising from reflective action (internal quantum measurement). The neighboring members of the Fibonacci series are linked by the relation 1) n According to Petukhov [26], a deviation from the symmetrical relation will be described as the incorporation of the defect ∆: F n *F n+2 = F 2 n+1 + (-1) n *∆ This deviation (dissymmetrisation) can generate a higher-order symmetry at the next step of evolution corresponding to a sequence of canons in evolution. Reflective Structure of Living System The reflective system of living beings (hypercycle) consists of catalysts, substrates and an embedded subset of substrates serving as a matrix for catalysts' reproduction [27].In the simplest case (RNA catalysis -ribozymes), a single molecule can hold all these properties (of catalyst, its substrate, and the matrix).The enzyme catalysis is based on the phenomenon that enzymes provide a precise specific recognition via a prolongation of the relaxation time [5,23], which is relevant to the quantum non-demolition measurement model [28].Enzymes decrease uncertainty in the inorganic catalysis paying by very long relaxation times according to the energy-time uncertainty ratio [5].A specific recognition is characterized by the minimum dissipation of energy during interaction between the measuring device and the measured object. 
Code interacts with the whole reflective system as its embedded digital description, which limits its development to simple recursive rules.It is a computable part of the non-computable system similar to the set of Gödel numbers within the arithmetic system that are necessary for its description [29].The pattern of the genetic code could be formed on the basis of search of the optimal variant of the reflective structure. The 'central dogma' of molecular biology claims the irreversibility of the information transfer.The quantum superpositions in the genome may be reversible, and during the reduction corresponding to the internal measurement they enter into an irreversible process, that corresponds to 'making decision' within the reflective loop at the molecular level.An uncertainty on the genetic level may be provided by base tautomery, transitions of a proton from one place of nucleotide to another, etc. [30].But the main uncertainty, which is reduced in the irreversible process, is connected with combinatorial transformations using molecular addresses at all levels of informational transfer (mobility of genome, splicing, posttranslational processing).During this process, single events corresponding to realization of interacting individual programs form a percolating network, and this can lead to a concrete spatial pattern constructed using an optimal coordinate scale. DNA folding leads to the formation of alternate structures (within general types of right-handed and left-handed helical) differing in curvatures and topologies that could exist in a superposition before their (internal) observation (measurement).It was discovered that DNA possesses a scale-invariant property consisting in the existence of a long-range power law correlation [31], which is expressed mostly in intron-containing genes and in non-transcribed regulatory DNA sequences [32].Combinatorial events drive the system in an out-of-equilibrium steady state characterized by a power law size distribution [33].It was shown that the coding part of the genome seems to have smaller fractal dimension and longer correlations, than non-coding parts [34].Fractal properties of DNA particularly in its non-coding regions may reflect important properties for providing a combinatorial power for the developmental and evolutionary dynamics of the genetic material, particularly for specific recognitions (as in the case of enzymes) during genome rearrangements.They are connected with the existence of quasi-particles and coherent quanta inside the helical structure of DNA molecule that change their orientation during topological reconstructions and rearrangements.This may provide the existence of genome as a permanently changing superposition of potential states that are reduced in the course of interaction with the changing environment. The genomic superposition is reduced via the transformational generative grammar of genetic texts in the sense of Chomsky [35].The principles of generative transformations of genetic texts will form a set of interactions based on molecular addresses.Such a generative grammar will represent a language game (open process) with limits (constraints). 
The reflective control in genome is realized by tools (molecular addresses) organizing combinatorial events.Thus, the molecular addresses establish the set of rules for language game corresponding to such hierarchical organization.According to Head [36] the genetic structure can be viewed as consisting of the two complementary sets.The first set consists of double-stranded DNA molecules, the second set -of recombinant behaviors allowed by specific classes of enzymatic activities.The associated language consists of strings of symbols that represent the primary structures of the DNA molecules under the given enzymatic activities.Thus we can say that the recombinant (splicing) system possesses a generative formalism.Further Paun [37] showed the closure to Chomsky language families under the splicing operations.The generative capacity of splicing grammar systems is provided by its components.Any linear language can be generated by a splicing grammar with two regular components.Any context-free language can be generated by a splicing grammar system with three regular components.Any recursive enumerable language can be generated by a splicing grammar system with four regular components [38]. The computation strategy of genome is an example of self-assembly mode of computing [39].The self-assembly may be realized as a computation by carving [40], which represents a computation strategy to generate a large set of candidate solutions of a problem, then remove the non-solutions such that what remains is the set of solutions.During this strategy the error-correction is realized, and this takes place in the potential field.We can suppose that the whole organism possesses the ability to forecast the splicing result before it is actualized, i.e. it can realize error-correction in the potential field by eliminating wrong potential possibilities, by implicating error-correcting codes.This means that living systems realize computation at the quantum level, the process maintaining their dynamic stability at the macroscopic time level. The reality can be described as a set of self-maintained reflective systems exhibiting themselves externally (on macroscales) and interacting via perpetual process of signification (reducing the microscale), which introduces universal computable laws harmonizing their interaction.The evolutionary growth of information occurs via language game of interacting programs, an open process without frames.
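To make the "recombinant behaviors" of the splicing systems mentioned above concrete, the following sketch implements the basic splicing operation of Head's H-systems on plain strings: a rule (u1, u2; u3, u4) cuts one word between u1 and u2, another between u3 and u4, and recombines the prefixes and suffixes. The rule and the toy "restriction-site" strings are invented for illustration and are not taken from the text.

```python
# Minimal splicing operation of a Head H-system.  A rule (u1, u2, u3, u4)
# cuts x between u1 and u2, cuts y between u3 and u4, and recombines the
# left part of x with the right part of y (and symmetrically).

def splice(x, y, rule):
    u1, u2, u3, u4 = rule
    out = set()
    i = x.find(u1 + u2)
    j = y.find(u3 + u4)
    if i != -1 and j != -1:
        out.add(x[:i + len(u1)] + y[j + len(u3):])   # x-prefix + y-suffix
        out.add(y[:j + len(u3)] + x[i + len(u1):])   # y-prefix + x-suffix
    return out

# A toy restriction-site style rule on DNA-like strings.
rule = ("G", "AATTC", "G", "AATTC")
x = "TTGAATTCAA"
y = "CCGAATTCGG"
print(splice(x, y, rule))   # {'TTGAATTCGG', 'CCGAATTCAA'}
```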
2014-10-01T00:00:00.000Z
2003-06-30T00:00:00.000
{ "year": 2003, "sha1": "ae12df39ad51fee9ede6d15861f6d9e5f89391b3", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1099-4300/5/2/76/pdf?version=1424784625", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "ae12df39ad51fee9ede6d15861f6d9e5f89391b3", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Computer Science" ] }
209439864
pes2o/s2orc
v3-fos-license
Multiplicative constants and maximal measurable cocycles in bounded cohomology Multiplicative constants are a fundamental tool in the study of maximal representations. In this paper we show how to extend such notion, and the associated framework, to measurable cocycles theory. As an application of this approach, we define and study the Cartan invariant for measurable $\textup{PU}(m,1)$-cocycles of complex hyperbolic lattices. Introduction A fruitful approach to the study of geometric structures on a topological space X is to introduce a bounded numerical invariant whose maximum detects those structures on X which have many symmetries. An instance of this situation is the study of the representation space of lattices in (semi)simple Lie groups. More precisely, given two simple Lie groups of non-compact type G, G ′ and a lattice Γ ≤ G, Burger and Iozzi [BI09] described how to associate to every representation ρ : Γ → G ′ a real number. Using the pullback map H • cb (ρ) induced by ρ in continuous bounded cohomology, they defined a numerical invariant λ(ρ), which depends on a chosen class Ψ ′ ∈ H • cb (G ′ ; R), as follows: where comp • Γ denotes the comparison map (Section 2.3), X denotes the Riemannian symmetric space associated to G, [Γ\ X ] is the (relative) fundamental class of the quotient manifold and ·, · is the Kronecker product. We say that λ(ρ) is a multiplicative constant if it appears in an integral formula, called useful formula by Burger and Iozzi [BI09]. When λ is a multiplicative constant, the formula implies that the numerical invariant has bounded absolute value. In several cases [BBI13,Poz15,BBI18], its maximum corresponds precisely to representations induced by representations of the ambient group. 1.1. A multiplicative formula for measurable cocycles. One of the main goal of this paper is to settle the foundational framework to define multiplicative constants for measurable cocycles. We carefully choose a setting where we can coherently extend ordinary numerical invariants for representations. Moreover, we introduce an integral formula in such a way that our definition of multiplicative constants is the natural extension of Burger-Iozzi's one. Our techniques make use of bounded cohomology theory. Let G, G ′ be two locally compact groups and let L, Q ≤ G be two closed subgroups. Assume that Q is amenable and that L is a lattice. Let (X, µ X ) be a standard Borel probability L-space and let Y be a measurable G ′ -space. Following Burger-Iozzi's approach, given a measurable cocycle σ : L × X → G ′ , we define the pullback induced by σ in continuous bounded cohomology using directly continuous cochains on the groups (Definition 3.2). Unfortunately, this approach does not lead to the desired multiplicative formula. For this reason, we need to consider boundary maps. A (generalized) boundary map φ : G/Q×X → Y is a measurable σ-equivariant map and its existence is strictly related to the properties of σ (Remark 2.10). Inspired by the definition of Bader-Furman-Sauer's Euler number [BFS13b], assuming the existence of a boundary map φ, we describe how to construct a new pullback map C • (Φ X ) in terms of φ (Definition 3.10). The notation C • (Φ X ) emphazises the fact that it is not simply the pullback along φ, but we also need to integrate over X (compare with Definitions 3.5 and 3.7). The map induced by C • (Φ X ) in continuous bounded cohomology agrees with the natural pullback along σ (Lemma 3.14). 
Our aim is to coherently extend the study of numerical invariants of representations to the case of measurable cocycles. Recall that given a continuous representation ρ : L → G ′ with boundary map ϕ, there always exists a natural measurable cocycle σ ρ associated to it. Using the previous pullback C • (Φ X ), we then show that the map induced by ρ in continuous bounded cohomology agrees with the one induced by σ ρ (Proposition 3.17). Moreover, the pullback along σ is invariant along the G ′ -cohomology class of the cocycle (Proposition 3.15). The study of pullback maps along measurable cocycles (and their boundary maps) leads to the following multiplicative formula, which extends Burger-Iozzi's useful formula [BI09, Proposition 2.44, Principle 3.1]. Recall that the transfer map is a cohomological left inverse of the restriction from G to L. Although this formula might appear slightly complicated at first sight, it contains all the ingredients for defining the multiplicative constant λ ψ ′ ,ψ (σ) associated to a measurable cocycle σ and two given bounded cochains ψ, ψ ′ (Definition 3.20). When no coboundary terms appear in the previous formula, we provide an explicit upper bound for the multiplicative constant (Proposition 3.24). This leads to the definition of maximal measurable cocycles (Definition 3.25). Finally, under suitable hypothesis, we prove that a maximal cocycle is trivializable (Theorem 3.27), i.e. it is cohomologous to a cocycle induced by a representation L ≤ G → G ′ . This general framework has the great advantage that we can easily deduce several applications (Section 3.5 and Section 7). 1.2. Cartan invariant of measurable cocycles. Let Γ ≤ PU(n, 1) be a torsionfree lattice with n ≥ 2. The study of representations of Γ into PU(m, 1) dates back to the work of Goldman and Millson [GM87], Corlette [Cor88] and Toledo [Tol89]. In order to investigate rigidity properties of maximal representations ρ : Γ → PU(m, 1), Burger and Iozzi [BI07b] defined the Cartan invariant i ρ associated to ρ. Inspired by their work, we make use of our techniques to define the Cartan invariant i(σ) for a measurable cocycle σ : Γ × X → PU(m, 1), where (X, µ X ) is a standard Borel probability Γ-space. If the cocycle admits a boundary map (e.g. if it is non elementary), the Cartan invariant can be realized as the multiplicative constant associated to σ and the Cartan cocycles c n , c m . More precisely, as an application of Proposition 1, we prove the following Proposition 2. Let Γ ≤ PU(n, 1) be a torsion-free lattice and let (X, µ X ) be a standard Borel probability space. Consider a non-elementary measurable cocycle σ : Γ × X → PU(m, 1) with boundary map φ : Here µ is a PU(n, 1)-invariant probability measure on the quotient Γ\ PU(n, 1). First we show that our Cartan invariant extends the one defined for representations (Proposition 4.7). Moreover, using our results about the pullback along boundary maps, we prove that the Cartan invariant is constant along PU(m, 1)cohomology classes and it has absolute value bounded by 1 (Proposition 4.8). Then, a natural problem is to provide a complete characterization of measurable cocycles whose Cartan invariant attains extremal values, i.e. either 0 or 1. Since we are not interested in elementary cocycles, we can assume the existence of a boundary map [MS04,Proposition 3.3]. Following the work by Burger and Iozzi [BI12], we introduce the notion of totally real cocycles. 
A cocycle is totally real if it is cohomologous to a cocycle whose image is contained in a subgroup of PU(m, 1) preserving a totally geodesically embedded copy H k R ⊂ H m R , for some 1 ≤ k ≤ m (Definition 5.1). Totally real cocycles can be easily constructed by taking the composition of a measure equivalence cocycle with a totally real representation. We show that totally real cocycles have trivial Cartan invariant. The converse seems unlikely to hold in general. However, if X is Γ-ergodic, we obtain the following Theorem 3. Let Γ ≤ PU(n, 1) be a torsion-free lattice and and let (X, µ X ) be a standard Borel probability Γ-space. Consider a non-elementary measurable cocycle Then the following hold (1) If σ is totally real, then i(σ) = 0; (2) If X is Γ-ergodic and H 2 (φ)([c m ]) = 0, then σ is totally real. The next step in our investigation is the study of the algebraic hull of a cocycle with non-vanishing pullback. Recall that the algebraic hull is the smallest algebraic group containing the image a cocycle cohomologous to σ (Definition 2.15). Theorem 4. Let Γ ≤ PU(n, 1) be a torsion-free lattice and let (X, µ X ) be an ergodic standard Borel probability Γ-space. Consider a non-elementary measurable cocycle σ : Γ × X → PU(m, 1) with boundary map φ : Let L be the algebraic hull of σ and denote by L = L(R) • the connected component of the identity of the real points. If H 2 (Φ X )([c m ]) = 0, then L is an almost direct product K · M , where K is compact and M is isomorphic to PU(p, 1) for some 1 ≤ p ≤ m. In particular, the symmetric space associated to L is a totally geodesically embedded copy of H p C inside H m C . We conclude with a complete characterization of maximal cocycles. Theorem 5. Consider n ≥ 2. Let Γ ≤ PU(n, 1) be a torsion-free lattice. Let (X, µ X ) be an ergodic standard Borel probability Γ-space. Consider a maximal measurable cocycle σ : Γ × X → PU(m, 1). Let L be the algebraic hull of σ and let L = L(R) • be the connected component of the identity of the real points. Then, we have (1) m ≥ n; (2) L is an almost direct product PU(n, 1) · K, where K is compact; (3) σ is cohomologous to the cocycle σ i associated to the standard lattice embedding i : Γ → PU(m, 1) (possibly modulo the compact subgroup K when m > n). Since recently one of the authors together with Sarti proved a generalization of the previous theorem for cocycles with target PU(p, q) [SS21, Theorem 2], we will mainly refer to their more complete result for the proof. Plan of the paper. In Section 2, we recall some preliminary definitions and results that we need in the paper. We report the definitions of amenable action, measurable cocycle, boundary map and algebraic hull in Section 2.1. We then review Burger and Monod's functorial approach to continuous bounded cohomology (Section 2.2) and we conclude this preliminary section with the definition of transfer maps (Section 2.3). Section 3 is devoted to the description of the general framework in which we study multiplicative constants associated to measurable cocycles. There, we first define the pullback along a measurable cocycle and along its boundary map (Section 3.1). Then, we compare our definition with the usual one given for representations (Section 3.2). In Section 3.3 we state our multiplicative formula (Proposition 1) and we introduce the notion of multiplicative constant associated to a measurable cocycle. We conclude the section studying the notion of maximality (Section 3.4) and showing some applications of the previous results (Section 3.5). 
Section 4 contains the new application of our machinery. There, we introduce and study the Cartan invariant of measurable cocycles (Section 4). We prove that it is a multiplicative constant (Proposition 2) and it extends the same invariant for representations (Proposition 4.7). Moreover, we show that the Cartan invariant has bounded absolute value in Proposition 4.8. In Section 5 we define totally real cocycles and we prove Theorem 3. Then in Section 6 we study maximal measurable cocycles and we prove both Theorem 4 and Theorem 5. We conclude with some remarks about recent applications of our theory in Section 7. Acknowlegdements. We truly thank the anonymous referee for the detailed report which allowed to substantially improve the quality of our paper. Preliminary definitions and results 2.1. Amenability and measurable cocycles. In this section we are going to recall some classic definitions related to both amenability and measurable cocycles. We start fixing the following notation: • Let G be a locally compact second countable group endowed with its Haar measurable structure. • Let (X, µ) be a standard Borel measure G-space, i.e. a standard Borel measure space endowed with a measure-preserving G-action. If µ is a probability measure, we will refer to (X, µ) as a standard Borel probability G-space. Given another measure space (Y, ν), we denote by Meas(X, Y ) the space of measurable functions from X to Y endowed with the topology of convergence in measure. Remark 2.1. In the literature about the ergodic version of simplicial volume [Sau02,Sch05,FFM12,LP16,FLPS16,FFL19,CC21,FLMQ21], it is often convenient to work with essentially free actions. For this reason, one might find reasonable to stick with the same assumptions also here working with the dual notion of bounded cohomology. However, it is easy to check that every probability measure-preserving action can be promoted to an essentially-free action just by taking the product with an essentially free action and considering the diagonal action on that product. We recall now some definitions about amenability. We mainly refer the reader to the books by Zimmer [Zim84, Section 4.3] and by Monod [Mon01, Section 5.3] for further details about this topic. Let L ∞ (G; R) denote the space of essentially bounded real functions over G. Then, G acts on L ∞ (G; R) as follows for all g, g 0 ∈ G and f ∈ L ∞ (G; R). such that m(f ) ≥ 0 whenever f ≥ 0 and m(χ G ) = 1, where χ G denotes the characteristic function on G. We say that a mean is left invariant if for all g ∈ G and f ∈ L ∞ (G; R) we have A group is amenable if it admits a left-invariant mean. In the sequel we will need a more general notion of amenability which is related to group actions. In fact, amenable spaces and amenable actions will play a crucial role in the functorial approach to the computation of continuous bounded cohomology (Section 2.2). Following Monod's convention, we begin by defining regular G-spaces [Mon01, Definition 2.1.1]. Definition 2.4. Let G be a locally compact second countable group and let S be a standard Borel space with a measurable G-action which preserve a measure class. We say that (S, µ) is a regular G-space if the previous measure class contains a probability measure µ such that the isometric action R : G L 1 (S, µ) defined by is continuous. Here dg −1 µ/dµ denotes the Radon-Nikodým derivative. (1) If G is endowed with its Haar measure, then G is a regular G-space. 
(2) If Q is a closed subgroup of G, then G/Q endowed with the natural almost invariant measure is a regular G-space. The notion of regular G-spaces allows us to introduce the definitions of amenable actions and amenable spaces [Mon01, Theorem 5.3.2]. Definition 2.6. Let G be a locally compact second countable group and let (S, µ) be a regular G-space. We say that the action of G on (S, µ) is amenable if there exists a continuous norm-one G-equivariant linear operator with the following two properties: First p(χ G×S ) = χ S , secondly for all f ∈ L ∞ (G× S) and for all measurable sets If the action by G on (S, µ) is amenable, then we say that (S, µ) is an amenable G-space. Remark 2.7. The previous definition extends the notion of amenable groups in the following sense: A group is amenable if and only if every regular G-space is an amenable G-space [Mon01, Theorem 5.3.9]. Amenable actions not only characterize groups but also subgroups. By Example 2.5.2, given a closed subgroup Q ⊂ G, the quotient G/Q is a regular G-space. Additionally, we have that Q is amenable if and only if the G-action on G/Q is amenable [Zim84, Proposition 4.3.2]. Hence, Example 2.3.4 shows that if G is a Lie group and P ⊂ G is any minimal parabolic subgroup, then the G-action on the quotient G G/P is amenable. This applies to the Furstenberg-Poisson boundary of a Lie group (being identified with G/P ). We recall now the notion of measurable cocycles and some of their properties. Definition 2.8. Let G and H be locally compact groups and let (X, µ) be a standard Borel probability G-space. A measurable cocycle (or, simply cocycle) is a measurable map σ : G × X → H satisfying the following formula (1) σ(g 1 g 2 , x) = σ(g 1 , g 2 .x)σ(g 2 , x) , for almost every g 1 , g 2 ∈ G and almost every x ∈ X. Here, g 2 .x denotes the action by g 2 ∈ G on x ∈ X. Associated to measurable cocycles there exists the crucial notion of boundary map. Definition 2.9. Let G and H be two locally compact groups and let Q ≤ G be a closed amenable subgroup. Let (X, µ) be a standard Borel probability G-space and let (Y, ν) be a measure space on which H acts by preserving the measure class of ν. Given a measurable cocycle σ : G × X → H, we say that a measurable map φ : for almost every g ∈ G, η ∈ G/Q and x ∈ X. A (generalized) boundary map associated to σ is a σ-equivariant measurable map. We will make use of generalized boundary maps in Section 3.1, when we will explain how to compute the pullback in continuous bounded cohomology. Remark 2.10. It is quite natural to ask when a (generalized) boundary map actually exists. Let G(n) = Isom(H n K ) be the isometry group of the K-hyperbolic space, where K is either R or C. Given a lattice Γ ≤ G(n), let us consider a standard Borel probability Γ-space (X, µ X ) and a measurable cocycle σ : Γ × X → G(m). In the previous situation, Monod and Shalom [MS04,Proposition 3.3] proved that if the cocycle σ is non elementary then there exists an essentially unique boundary map φ : The notion of non-elementary cocycle relies on the definition of algebraic hull (Definition 2.15) and it will be explained more carefully later in this paper. Also in the case of higher rank lattices there are some relevants results about the existence of boundary maps. Indeed a key step in the proof of Zimmer' Superrigidity Theorem [Zim80,Theorem 4.1] is to prove the existence of generalized boundary maps for Zariski dense measurable cocycles (see Definition 2.15). 
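As a quick illustration of Equation (1), consider the cocycle induced by a continuous representation ρ : G → H, namely σ_ρ(g, x) := ρ(g) (this is the construction formalized in Definition 2.14 below). The following minimal worked check uses only that ρ is a homomorphism and that σ_ρ is independent of the X-variable:
\[
\sigma_\rho(g_1 g_2, x) \;=\; \rho(g_1 g_2) \;=\; \rho(g_1)\,\rho(g_2) \;=\; \sigma_\rho(g_1, g_2.x)\,\sigma_\rho(g_2, x) ,
\]
for every g_1, g_2 ∈ G and almost every x ∈ X, so σ_ρ indeed satisfies the cocycle relation of Definition 2.8.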
Since Equation (1) suggests that σ can be interpreted as a Borel 1-cocycle in Meas(G, Meas(X, H)) [FM77], it is natural to introduce the definition of cohomologous cocycles. Definition 2.11. Let σ : G × X → H be a measurable cocycle between locally compact groups. Let f : X → H be a measurable map. We define the twisted cocycle associated to σ and f as for almost every g ∈ G and almost every x ∈ X. We say that two measurable cocycles σ 1 , σ 2 : G × X → H are cohomologous if there exists a measurable function f : X → H such that Similarly, we say that σ 1 and σ 2 are cohomologous modulo a closed subgroup When a measurable cocycle σ admits a generalized boundary map, then all its cohomologous cocycles share the same property. Definition 2.12. Let σ : G × X → H be a measurable cocycle with generalized boundary map φ : G/Q × X → Y . Given a measurable function f : X → H the twisted boundary map associated to f and φ is defined as for almost every g ∈ G, η ∈ G/Q and x ∈ X. Remark 2.13. Let σ, σ ′ : G × X → H be two cohomologous cocycles and let f : X → H be the measurable map such that σ ′ = f.σ. If σ admits a generalized boundary map φ, then the twisted boundary map associated to f and φ is a generalized boundary map associated to σ ′ . Representations provide special cases of measurable cocycles: Definition 2.14. Let ρ : G → H be a continuous representation and let (X, µ) be a standard Borel probability G-space. The cocycle associated to the representation ρ is defined as , for every g ∈ G and almost every x ∈ X. Given a representation ρ : G → H, one can obtain useful information about ρ by studying the closure of the image ρ(Γ) as subgroup of H. On the other hand, in general the image of a measurable cocycle does not have any nice algebraic structure. Nevertheless, when H is assumed to be an algebraic group, we can give the following We will use the previous definition when we will work with totally real cocycles (Section 5) and when we will investigate the properties of cocycles with nonvanishing pullback (Theorem 4). 2.2. Bounded cohomology and its functorial approach. In this section we are going to recall the definitions and the properties of both continuous and continuous bounded cohomology that we will need in the sequel. We first introduce continuous (bounded) cohomology via the homogeneous resolution and then, following the work by Burger and Monod [Mon01,BM02], we describe it in terms of strong resolutions by relatively injective modules. is a Banach space E endowed with a G-action induced by a representation π : G → Isom(E), or equivalently a G-action via linear isometries: We say that a Banach G-module (E, π) is continuous if the map θ(·, v) is continuous for all v ∈ E. Finally, we denote by E G the submodule of G-invariant vectors in E, i.e. the space of vectors v such that θ(g, v) = v for all g ∈ G. Notation 2.18. In the sequel R will denote the Banach G-module of trivial real coefficients. In other words, it is endowed with the trivial G-action: π(g)v = v for all v ∈ R and g ∈ G. Example 2.19. Let (E, π) be a Banach G-module. Then, the space of continuous The spaces of continuous E-valued functions give raise to a cochain complex (C • c (G; E), δ • ) together with the standard homogeneous coboundary operator Since this complex is exact, we are going to focus our attention to the subcomplex . Remark 2.21. If G is a discrete group, then there is no difference between continuous and ordinary cohomology. 
Hence, in this situation we will usually drop the subscript c from the notation. be the subspace of continuous bounded functions. By linearity the coboundary operator δ • preserves boundedness, so we can restrict δ • to the space of continu- The L ∞ -norm defined on cochains induces a canonical L ∞ -seminorm in cohomology given by We say that an isomorphism between seminormed cohomology groups is isometric if the corresponding seminorms are preserved. Beyond the difference determined by the quotient seminorm, one can study the gap between continuous cohomology and continuous bounded cohomology via the map induced in cohomology by the inclusion i : In the sequel we will need an alternative description of continuous bounded cohomology in terms of strong resolutions via relatively injective modules. Since we will not make an explicit use of these notions, we refer the reader to Monod is isomorphic as a topological vector space to the continuous bounded cohomology H n cb (G; E), for every n ≥ 0. We now describe a strong resolution via relatively injective modules which allows us to compute bounded cohomology isometrically. Let G be a locally compact second countable group. Let (E, π, · E ) be a Banach G-module such that E is the dual of some Banach space. This implies that E can be endowed with the weak- * topology and the associated weak- * Borel structure. Moreover, let (S, µ) be a regular G-space. We have the following Definition 2.25. We define the Banach G-module of bounded weak - * measurable E-valued functions on S to be the Banach space endowed with the following G-action τ : We define the Banach G-module of essentially bounded weak- * measurable Evalued functions on S to be Remark 2.26. For ease of notation we will denote elements in L ∞ w * (S •+1 ; E) simply by one chosen representative f . Definition 2.27. Let us consider the situation above. We say that a(n essentially) bounded weak- * measurable function f : where ε ∈ S •+1 is a permutation whose sign is sign(ε). Since the standard homogeneous operator δ • preserves G-invariant (alternating) essentially bounded weak- * measurable functions up to a shift of the degree, we can consider the complex (L ∞ w * (S •+1 ; E), δ • ). The following theorem shows when the previous complex computes isometrically the continuous bounded cohomology of G with E-coefficients. , for every integer n ≥ 0. The same result still holds if we restrict to the subcomplex of alternating essentially bounded weak- * measurable functions on S. Remark 2.29. In the situation of the previous theorem if L ⊂ G is a closed subgroup, then also the cohomology of the complex (L ∞ w * (S •+1 ; E) L , δ • ) is isometrically isomorphic to H n cb (L; E), for every n ≥ 0 [Mon01, Lemma 4.5.3]. Example 2.30. Let G be a locally compact second countable group and let Q ⊂ G be a closed amenable subgroup. By Remark 2.7 and Example 2.5.2 we know that G/Q is an amenable regular G-space. Thus for every Banach G-module (E, π) the cohomology of the complex (L ∞ w * ((G/Q) •+1 ; E) G , δ • ) isometrically computes the continuous bounded cohomology of G with coefficient in E. An instance of this situation is when Q is a minimal parabolic subgroup of a semisimple Lie group G. As we have just discussed one can compute continuous bounded cohomology by working with equivalence classes of bounded weak- * measurable functions. However, in some cases it might be convenient to work directly with B ∞ (S •+1 ; E). 
Also in this case the homogeneous coboundary operator sends (alternating) bounded weak- * measurable functions to themselves up to shifting the degree. Hence, we can still construct a complex (B ∞ (S •+1 ; E), δ • ). Unfortunately, the associated resolution of E is only strong in general [ , for every n ∈ N. This shows that each bounded weak- * measurable G-invariant function canonically determines a cohomology class in H n cb (G; E). The same result still holds in the situation of alternating functions. In Section 3.1, we will tacitly use this result for showing that the pullback of a bounded weak- * measurable G-invariant function lies in fact in L ∞ w * . 2.3. Transfer maps. In this section we briefly recall the notion of transfer maps [Mon01]. Let G be a locally compact second countable group and let i : L → G be the inclusion of a closed subgroup L into G. By functoriality the inclusion induces a pullback in continuous bounded cohomology Assume that L\G admits a G-invariant probability measure µ (e.g. when L is a lattice of G), then we have Definition 2.32. We define the transfer cochain map as Here g denotes the equivalence class of g in the quotient L\G. The transfer map trans • L is the one induced in cohomology by trans . Remark 2.33. The transfer map is well-defined since we can compute the continuous bounded cohomology of L by looking at the complex (C • cb (G; R) L , δ • ) (Remark 2.21). Moreover, since ψ is L-invariant, it induces a well-defined function on the quotient L\G. With a slight abuse of notation, in the previous formula we still denoted by ψ this induced function. We give now an alternative definition of the transfer map for essentially bounded weak- * measurable functions. Let Q and L be closed subgroups of a locally compact second countable group G. If Q is amenable, then the subcomplex of L-invariant essentially bounded functions on G/Q computes the continuous bounded cohomology H Here the vertical arrows are the canonical isomorphisms obtained by extending the identity R → R to the complex of continuous bounded and essentially bounded functions, respectively. Pullback maps, multiplicative constants and maximal measurable cocycles The main goal of this section is to define pullbacks in continuous bounded cohomology via measurable cocycles and generalized boundary maps. As an application we extend Burger and Iozzi's useful formula for representations [BI09, Proposition 2.44] to the wider setting of measurable cocycles. This allows us to introduce the notion of multiplicative constants and investigate cocycles rigidity. Setup 3.1. Let us consider the following setting: • Let G be a second countable locally compact group. • Let G ′ be a locally compact group which acts measurably on a measure space (Y, ν) by preserving the measure class. • Let Q be a closed amenable subgroup of G. • Let L be a lattice in G. • Let (X, µ X ) be a standard Borel probability L-space. • Let σ : L × X → G ′ be a measurable cocycle with an essentially unique generalized boundary map φ : G/Q × X → Y . 3.1. Pullback along measurable cocycles and generalized boundary maps. In this section we introduce two different pullback maps in continuous bounded cohomology associated to a measurable cocycle. The first pullback will only depend on the measurable cocycle σ. The second one will be defined in terms of the generalized boundary map φ. We will show that under suitable conditions the two definitions agree (Lemma 3.14). 
Despite a priori the first definition might appear more natural, we will mainly exploit the second pullback in the study of the rigidity properties of measurable cocycles. Given a measurable cocycle σ : L × X → G ′ we define a pullback map from C • cb (G ′ ; R) G ′ to C • b (L; R) L as follows (compare with [MS,Remark 14]). Definition 3.2. In the situation of Setup 3.1, the pullback map induced by the measurable cocycle σ is given by is a cochain map. Moreover, it sends bounded cochains to bounded cochains because µ X is a probability measure. It only remains to prove that C , where the second line is equal to the third one because of the definition of measurable cocycle (Equation (1)). Then, the L-invariance of the measure µ X shows that the third line is equal to the fourth one. Finally, the G ′ -invariance of ψ concludes the computation. As anticipated we now explain how to define a different pullback map via generalized boundary maps in the situation of Setup 3.1. This approach takes inspiration from a work by Bader, Furman and Sauer [BFS13b, Proposition 4.2] and has already produced some applications in special settings (Subsection 3.5). We define the pullback along a generalized boundary map as the composition of two different maps defined in continuous bounded cohomology. The Banach space L ∞ (X) := L ∞ (X; R) has a natural structure of Banach L-module given by the following L-action for all γ ∈ L and f ∈ L ∞ (X). This leads to the following Definition 3.5. In the situation of Setup 3.1, the L ∞ (X)-pullback along φ is the following map η 1 , x), . . . , φ(η •+1 , x))) , where ψ ∈ B ∞ (Y •+1 ; R) G ′ , η 1 , . . . , η •+1 ∈ G/Q and x ∈ X. Lemma 3.6. The map C • (φ) is a well-defined norm non-increasing cochain map. Proof. Since C • (φ) is defined as a pullback, it is immediate to check that it is a norm non-increasing cochain map. Let us show now that for every where the latter space is endowed with its natural diagonal L-action. Then, for almost every x ∈ X, γ ∈ L and η 1 , . . . , η •+1 ∈ G/Q, we have Here we first used the definition of diagonal action, then the σ-equivariance of φ and finally the G ′ -invariance of ψ. Lemma 3.8. The integration map I • X is a well-defined norm non-increasing cochain map. Proof. Given a cocycle ψ ∈ L ∞ w * ((G/Q) •+1 ; L ∞ (X)) L , it is easy to show that I • X (ψ) is L-invariant. Indeed, given η 1 , . . . , η •+1 ∈ G/Q and γ ∈ L, we have where we used the L-invariance of both ψ and µ X . Since it is immediate to check that the integration map is also a norm nonincreasing cochain map, we get the thesis. We are now ready to define the pullback map along φ. Definition 3.10. In the situation of Setup 3.1, the pullback map along the (generalized) boundary map φ is the following cochain map . Remark 3.11. The restriction of the pullback along φ to the subcomplexes of alternating cochains (Definition 2.27) is well-defined. The fact that the pullback map induces a well-defined map in cohomology is proved in the following The same result still holds for the subcomplexes of alternating cochains. Proof. As a consequence of both Lemmas 3.6 and 3.8, the pullback C • (Φ X ) is a norm non-increasing cochain map. Indeed, it is the composition of two such maps, namely C • (φ) and I • X . Since Q is an amenable group, then G/Q is an amenable regular G-space (Example 2.5.2 and Remark 2.7). Hence, by Remark 2.29 the complex of L-invariant essentially bounded functions L ∞ ((G/Q) •+1 ; R) L computes the continuous bounded cohomology H • b (L; R). 
The same proof adapts mutatis mutandis to the case of alternating cochains. Remark 3.13. One might define a pullback map in cohomology using any measurable σ-equivariant map φ : S ×X → Y , where S is any amenable L-space. However, since we will not need this formulation in the sequel, we preferred to keep the previous setting. Since we have introduced two different pullback maps in continuous bounded cohomology arising from measurable cocycles, it is natural to ask whether they agree. The following lemma completely describes the situation (compare with [BI02, Corollary 2.7]). Lemma 3.14. In the situation of Setup 3.1, let ψ ∈ B ∞ (Y •+1 ; R) G ′ be a measurable cocycle. Then Proof. It is sufficient to consider the following commutative diagram [BI02, Proposition 1.2] where c • is the map introduced in Equation (3). Finally, we show that the pullback along cohomologous measurable cocyles is the same (compare with [MS, Proposition 13, Proposition 20]). Proposition 3.15. In the situation of Setup 3.1, let f.σ : L × X → G ′ be a cocycle cohomologous to σ with respect to a measurable map f : X → G ′ . Then, for every Here C • (Φ X ) and C • (f.Φ X ) denote the pullback maps along the associated boundary maps φ and f.φ, respectively. Proof. The boundary map f.φ associated to f.σ is given by for almost every η ∈ G/Q and x ∈ X (Remark 2.13). Hence, we have for almost every η 1 , . . . , η •+1 ∈ G/Q. This finishes the proof. Remark 3.16. Sometimes it is natural to consider the G ′ -module R with a twisted action. For instance if G ′ admits a sign homomorphism, we can use it to twist the real coefficients. In that situation the previous equality will be true only up to a sign (see for instance [MS, Proposition 13]). 3.2. Pullback along generalized boundary maps vs. pullback of representations. In the situation of Setup 3.1, let (X, µ X ) be a standard Borel probability L-space and let ρ : L → G ′ be a representation. Then, there exists an associated measurable cocycle σ ρ : L × X → G ′ defined by σ ρ (γ, x) = ρ(γ) for every γ ∈ L and x ∈ X (Definition 2.14). If ρ admits a ρ-equivariant measurable map ϕ : G/Q → Y , the corresponding generalized boundary map of σ ρ is for almost every η ∈ G/Q and x ∈ X. The following result shows that the pullback associated to ρ via ϕ agrees with the one along φ. This property turns out to be fundamental to coherently extend the numerical invariants of representations to the ones of measurable cocycles (see [Savb,Proposition 3.4 ] and [MS, Propositions 12, Proposition 19]). Proposition 3.17. In the situation of Setup 3.1, let ρ : L → G ′ be a representation which admits a ρ-equivariant measurable map ϕ : G/Q → Y . Then, we have Proof. Since the boundary map φ associated to σ ρ does not depend on the second variable, it is immediate to check that the following diagram commutes whence the thesis. Remark 3.18. The existence of a cocycle of the form σ : L × X → G ′ required in Setup 3.1 is irrelevant in the previous result. A particularly nice situation for the study of rigidity phenomena is when in Equation (5) there are no coboundary terms. For this reason we are going to introduce the following notation. Proof. By Remark 3.23 we know that Since by Proposition 3.12 trans • G/Q and C • (Φ X ) are norm non-increasing maps, the left-hand side admits the following estimate Using the previous upper bound, we introduce the following Definition 3.25. In the situation of Setup 3.1 assume that condition (NCT) is satisfied. 
We say that a measurable cocycle σ : L × X → G ′ is maximal if its multiplicative constant λ ψ ′ ,ψ (σ) attains the maximum value: For every representation π : G → G ′ , we denote the restriction of π to L as π| L : L → G ′ . We prove now that under suitable assumptions maximal cocycles can be trivialized, i.e. they are cohomologous to a suitable representation π| L : L → G ′ . Theorem 3.27. In the situation of Setup 3.26 let π| L : L → G ′ be the restriction of the representation π : G → G ′ to L. If the measurable cocycle σ : L × X → G ′ is maximal, then σ is cohomologous to π| L . Remark 3.28. More precisely, the theorem shows the existence of a measurable map f : X → G ′ such that: For all γ ∈ L and almost every x ∈ X, we have Proof. Since the cocycle σ is maximal, we know that Under condition (NCT), if we substitute the value of λ ψ ′ ,ψ (σ) in Equation (5) we get Moreover, by assumption ψ attains its essential supremum, whence there exist η 1 , . . . ,η •+1 ∈ G/Q such that By Remark 3.19 we can evaluate Equation (6) atη 1 , . . . ,η •+1 ∈ G/Q. Hence, by Equation (7), we have This shows that ψ ′ (φ(g.η 1 , x), . . . , φ(g.η •+1 , x)) = ψ ′ ∞ , for almost every g ∈ L\G and almost every x ∈ X. Additionally, the σ-equivariance of φ implies that in fact .η 1 , x), . . . , φ(g.η •+1 , x)) = ψ ′ ∞ holds for almost every g ∈ G and almost every x ∈ X. We can define for almost every x ∈ X a map which is measurable [FMW04, Lemma 2.6] and maximal by Equation (9). Hence, by the assumptions of Setup 3.26, for almost every x ∈ X there must exist an element for almost every η ∈ G/Q. This shows that φ x lies in the G ′ -orbit of Π. In this way we get a map which is measurable [FMW04, Lemma 2.6]. By Setup 3.26 the stabilizer of Π is trivial and hence the orbit G ′ .Π is naturally homeomorphic to G ′ through a map  : G ′ .Π → G ′ . Composing the identification  with the map φ we get a map which is defined almost everywhere and it is measurable being the composition of measurable maps (notice that the composition above gives back the element g x ). We can now conclude the proof (compare with [BFS13b, Proposition 3.2]). Given γ ∈ L, on the one hand we have and on the other In the second equality we used the π-equivariance of the map Π. The fact that Stab G ′ (Π) is trivial implies that Example 3.29. Let n ≥ 3. Let L ≤ G = PO • (n, 1) be a torsion-free non-uniform lattice and (X, µ X ) be a standard Borel probability L-space. Following the notation of Setup 3.1, we set G ′ = PO • (n, 1) and Y = G/Q = ∂H n R ∼ = S n−1 , where Q is a (minimal) parabolic subgroup of G. Using bounded cohomology theory [Gro82,FM], one can define the volume Vol(σ) of a measurable cocycle σ : L×X → PO • (n, 1) [MS, Section 4.1]. As proved by the authors [MS,Proposition 2], in this setting the multiplicative constant is given by Since condition (NCT) is satisfied for twisted real coefficients [BBI13, Lemma 2.2], Proposition 3.24 shows that the following Milnor-Wood inequality holds [MS, Proposition 15] | Vol(σ)| ≤ Vol(L\H n ) . Similarly, one can apply an analogous strategy for studying the case of closed surfaces. The main difference is that we have to to fix a hyperbolization. Then, maximal cocycles will be cohomologous to the given hyperbolization [MS,Theorem 5] Example 3.30. Fix a torsion-free lattice L ≤ G = PSL(2, C) together with a standard Borel probability L-space (X, µ X ). Following the notation of Setup 3.1, we set G ′ = PSL(n, C), Y = F (n, C) is the space of full flags, and G/Q = P 1 (C). 
Here Q is a (minimal) parabolic subgroup of G. The second author defined the Borel invariant β n (σ) of a measurable cocycle σ : L × X → PSL(n, C) [Savb, Section 4]. Then, the multiplicative constant is given by [Savb, Proposition 1.2] . Finally, one can apply Theorem 3.27 to show that if σ is maximal, then σ is cohomologous to the cocycle associated to the standard lattice embedding L → G composed with the irreducible representation π n : PSL(2, C) → PSL(n, C). In fact also the converse holds true [Savb, Theorem 1.1]. Cartan invariant of measurable cocycles of complex hyperbolic lattices Let Γ ≤ PU(n, 1) be a torsion-free lattice with n ≥ 2 and let (X, µ X ) be a standard Borel probability Γ-space. In this section we are going to define the Cartan invariant i(σ) associated to a measurable cocycle σ : Γ × X → PU(m, 1). Then, when σ is non elementary, we will express the Cartan invariant as a multiplicative constant (Proposition 2). This interpretation allows us to deduce many properties of the Cartan invariant for non-elementary measurable cocycles. We recall here just few notions of complex hyperbolic geometry that we will need in the sequel. We refer the reader to Goldman's book [Gol99] for a complete discussion about this topic. Let H n C be the complex hyperbolic space. For every k ∈ {0, . . . , n} a k-plane is a totally geodesic copy of H k C holomorphically embedded in H n C . When k = 1, a 1-plane is simply a complex geodesic. Similarly, a k-chain is the boundary of a k-plane in ∂ ∞ H n C , i.e. it is an embedded copy of ∂ ∞ H k C . When k = 1, we will just call them chains. Since a chain is completely determined by any two of its points, two distinct chains are either disjoint or they meet exactly in one point. If we denote by (∂ ∞ H n C ) (3) the set of triples of distinct points in the boundary at infinity, we can defined the following function Here ξ i = [z i ] and we choose the branch of the argument function such that arg(z) ∈ [−π/2, π/2]. Then, we can extend c n to a PU(n, 1)-invariant alternating Borel cocycle on the whole (∂ ∞ H n C ) 3 . Moreover, |c n (ξ 1 , ξ 2 , ξ 3 )| = 1 if and only if ξ 1 , ξ 2 , ξ 3 ∈ ∂ ∞ H n C are distinct and they lie on the same chain [BIW09, Section 3]. Let ω n ∈ Ω 2 (H n C ) be the Kähler form, which is a PU(n, 1)-invariant 2-form. By the Van Est isomorphism [Gui80, Corollary 7.2] the space Ω 2 (H n C ) PU(n,1) is isomorphic to H 2 c (PU(n, 1); R). We call Kähler class the element κ n ∈ H 2 c (PU(n, 1); R) corresponding to ω n via the previous isomorphism. Since the Kähler class is bounded, κ n lies in the image of the comparison map comp 2 : H 2 cb (PU(n, 1); R) → H 2 c (PU(n, 1); R) . Hence, there exists a class κ b n ∈ H 2 cb (PU(n, 1); R) which is sent to κ under comp 2 . Since the group H 2 cb (PU(n, 1); R) is one dimensional, we can assume that κ b n is its generator as real vector space. The relation between the Cartan cocycle and the bounded Kähler class is the following (Remark 4.2) [c n ] = κ b n π ∈ H 2 cb (PU(n, 1); R) . Remark 4.3. The previous equality shows that the cocycle πc n is a representative of the bounded Kähler class. In the previous situation σ induces a map in bounded cohomology (Lemma 3.4) . Moreover, since Γ is a lattice, there exists a transfer map (Definition 2.32) , 1); R) . Composing the two maps above we can give the following Definition 4.5. In the situation of Setup 4.4, the Cartan invariant associated to the cocycle σ is the real number i(σ) appearing in the following equation Remark 4.6. 
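In formulas, with the two maps just recalled (the pullback of Lemma 3.4 induced by σ and the transfer map of Definition 2.32 associated to the lattice Γ), the defining relation of i(σ) can be sketched as follows; the normalization is the one suggested by the bounded Kähler classes, and the notation trans²_Γ for the degree-two transfer map is ours:
\[
\operatorname{trans}^2_{\Gamma} \circ \mathrm{H}^2_b(\sigma)\bigl(\kappa^b_m\bigr) \;=\; i(\sigma)\,\kappa^b_n \;\in\; \mathrm{H}^2_{cb}(\mathrm{PU}(n,1);\mathbb{R}) .
\]
Equivalently, since [c_n] = κ^b_n/π, the classes κ^b_m and κ^b_n may be replaced by π[c_m] and π[c_n], which is the form used at the level of cochains in the proof of Proposition 2 below.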
The previous formula is well-defined since H 2 cb (PU(n, 1); R) ∼ = Rκ b n . We explain now how to compute the Cartan invariant in terms of a boundary map associated to σ. This will show that the Cartan invariant is a multiplicative constant in the sense of Definition 3.20. Here essentially unique means that two boundary maps coincide on a full measure set. As noticed by Monod and Shalom, the non-elementary condition means that the group of the real points of the algebraic hull of σ (Definition 2.15) is a nonelementary subgroup of PU(n, 1). By Lemma 3.14 the existence of a boundary map implies that the pullback map H 2 b (σ) coincides with the following composition H 2 b (σ) = H 2 (Φ X ) • c 2 , where c 2 and H 2 (Φ X ) are the maps introduced in Equation (3) and Definition 3.5, respectively. Thus, Equation (10) is equivalent to n . This shows that the Cartan invariant is a multiplicative constant in the sense of Definition 3.20. We are going to prove that Equation 11 actually holds at the levels of cochains. Proposition 2. In the situation of Setup 4.4, let σ be a non-elementary measurable cocycle with boundary map φ : Then, for every triple of pairwise distinct points ξ 1 , ξ 2 , ξ 3 ∈ ∂ ∞ H n C , we have Here µ is a PU(n, 1)-invariant probability measure on the quotient Γ\ PU(n, 1). Proof. We already know that πc n is a representative of κ b n (Remark 4.3). Moreover, since Γ acts doubly ergodically on ∂ ∞ H n C , there are no essentially bounded Γ-invariant alternating functions on (∂ ∞ H n C ) 2 . Hence, if we rewrite Equation (11) in terms of cochains, we obtain a formula trans 2 ∂∞ H n C • C 2 (Φ X )(πc m ) = i(σ)(πc n ) , without coboundaries. Here trans 2 ∂∞ H n C is the map introduced at the end of Section 2.3. Since the constant π appears on both sides, we get for almost every ξ 1 , ξ 2 , ξ 3 ∈ ∂ ∞ H n C . The fact that the equation is in fact true for every triple of pairwise distinct points can be proved by following verbatim Pozzetti's proof [Poz15,Lemma 2.11]. This finishes the proof. The interpretation of the Cartan invariant as a multiplicative constant has many consequences. For instance, the Cartan invariant of measurable cocyles extends the one for representations introduced by Burger and Iozzi [BI07b]. Proof. Since ρ is non elementary, ρ admits an essentially unique boundary map ϕ : Hence, one can define a boundary map for the cocycle σ ρ as , for almost every ξ ∈ ∂ ∞ H n C and almost every x ∈ X. This readily implies that The first equality comes from Proposition 2, the second one is due to Proposition 3.17 and finally the last equality is proved by Burger Hence, we get Ad. 2 Since the Cartan invariant is a multiplicative constant and condition (NCT) is satisfied, Proposition 3.24 implies |i(σ)| ≤ 1 . Here we used the fact that c n ∞ = c m ∞ = 1. The second item of the previous proposition leads to the following definition (compare with Definition 3.25). Definition 4.9. In the situation of Setup 4.4, a non-elementary cocycle σ is maximal if i(σ) = 1. Totally real cocycles In this section we introduce the notion of totally real cocycles. Our definition extends the one by Burger and Iozzi [BI12] for representations. We aim to investigate the relation between the vanishing of the Cartan invariant and the condition of being totally real. We will show that totally real cocycles have trivial Cartan invariant. On the other hand, it is natural to ask whether the converse is also true. 
We partially answer to this question by showing that ergodic cocycles inducing the trivial map in bounded cohomology are totally real. Definition 5.1. In the situation of Setup 4.4 we denote by L the algebraic hull of σ. Let L := L(R) • be the connected component of the identity of the group of real points of L. A measurable cocycle σ is totally real if for some 1 ≤ k ≤ m we have where H k R ⊂ H m C is a totally geodesic copy of the real hyperbolic k-space. Here N PU(m,1) (H k R ) denotes the subgroup of PU(m, 1) preserving the fixed copy of H k R , i.e. g(H k R ) ⊂ H k R for every g ∈ N PU(m,1) (H k R ). Remark 5.2. By the definition of algebraic hull (Definition 2.15) every totally real cocycle σ is cohomologous to a cocycle σ whose image is contained in L. Additionally, N PU(m,1) (H k R ) is an almost direct product of a compact subgroup K ≤ PU(m, 1) with an embedded copy of PO(k, 1) inside PU(m, 1). Hence, the cocycle σ preserves the totally geodesic copy H k R ⊂ H m C stabilized by L. In the sequel we need the following Lemma 5.3. Let Γ ≤ PU(n, 1) be a torsion-free lattice, with n ≥ 2, and let (X, µ X ) be a standard Borel probability space. Then Here, the letter Z denotes the set of cocycles and the subscript alt denotes the restrictions to alternating essentially bounded weak- * measurable functions. Proof. For every k ∈ N we have the following This shows that there are no coboundaries in dimension two, whence the thesis. The notion of totally real cocycles is strictly related to the vanishing of the Cartan invariant. This correspondence is described by the following result which is a suitable adaptation of a result by Burger and Iozzi [BI12, Theorem 1.1] to the case of measurable cocycles. Proof. Ad 1. Let L be the algebraic hull of σ and let L = L(R) • be the connected component of the real points of L containing the identity. By Remark 5.2, there exists a cocycle σ cohomologous to σ such that for some 1 ≤ k ≤ m. Since σ is cohomologous to σ, it admits a boundary map φ (Remark 2.13). Hence, since the Cartan invariant is constant along the PU(m, 1)cohomology class of σ, it is sufficient to show that i( σ) = 0 (Proposition 4.8). By Remark 5.2 the cocycle σ also preserves the totally geodesic copy of H k R stabilized by L, whence it preserves the boundary at infinity ∂ ∞ H k R . We identify ∂ ∞ H k R with a (k − 1)-dimensional sphere S ⊂ H m C as explained in Section 4. Hence, the boundary map φ takes values in S, that is φ : ∂ ∞ H n C ×X → S . For almost every x ∈ X, we then define The map φ x is measurable for almost every x ∈ X [FMW04, Lemma 2.6]. By Proposition 2 we have for almost every ξ 1 , ξ 2 , ξ 3 ∈ ∂ ∞ H n C . Here µ is the PU(n, 1)-invariant probability measure on Γ\ PU(n, 1). Since φ x takes values into the sphere S for almost every x ∈ X, we have that for almost every x ∈ X and almost every g ∈ Γ\ PU(n, 1) [BI12, Corollary 3.1]. Thus, i( σ) vanishes. Since i(σ) = i( σ) we get the thesis. Ad 2. Since C 2 (φ) is a cochain map (Lemma 3.6), it induces a map H 2 (φ) in cohomology. If H 2 (φ)([c m ]) = 0, then we have Since by Lemma 5.3 there are no L ∞ (X)-coboundaries in degree 2, we have that for almost every x ∈ X and almost every ξ 1 , ξ 2 , ξ 3 ∈ ∂ ∞ H n C . For almost every point x ∈ X, let us denote by E x the essential image of φ x , i.e. the support of the push-forward measure (φ x ) * ν, where ν is the standard round measure on ∂ ∞ H n C . We have just proved in Equation (13) that for almost every x ∈ X the Cartan cocycle c m vanishes over E x . 
Hence, as proved by Burger and Iozzi [BI12, Corollary 3.1], for almost every x ∈ X, there exists an integer 1 ≤ k(x) ≤ m and a real ( Moreover, we can choose S x to be minimal with respect to the inclusion. We claim now that for every γ ∈ Γ and almost every x ∈ X. First, the definition of E x and the σ-equivariance of φ imply that for every γ ∈ Γ and almost every x ∈ X. Hence, we have Thus, the minimality assumption shows that By interchanging the role of E γ,x and E x , we get the claim. As a consequence k(γ.x) = k(x) for every γ ∈ Γ and almost every x ∈ X. The ergodicity assumption on the space (X, µ X ) then implies that almost all the spheres have the same dimension, i.e. k(x) = k ∈ N for almost every x ∈ X. Let us now denote by Sph k−1 (∂ ∞ H m C ) the space of (k−1)-spheres embedded in the boundary at infinity ∂ ∞ H m C . Since the action of PU(m, 1) on (k − 1)-spheres is transitive, Sph k−1 (∂ ∞ H m C ) is a PU(m, 1)-homogeneous space. Let G 0 = N PU(m,1) (S 0 ) be the subgroup of PU(m, 1) preserving a fixed (k − 1)-sphere S 0 . Then we can define a map S : Let f : X → PU(m, 1) be the composition s • S. Since f is a composition of measurable maps, it is measurable. Moreover, by construction we have for almost every x ∈ X. Let us consider now the f -twisted cocycle σ 0 = f.σ associated to σ (Definition 2.11). On the one hand we have that S γ.x = f (γ.x) S 0 , on the other S γ.x = σ(γ, x)f (x) S 0 . Hence σ 0 preserves S 0 . This implies that σ 0 (Γ×X) ⊂ G 0 . If L denotes the algebraic hull of σ (which is the same for σ 0 ) and L = L(R) • , we get L ⊆ G 0 , whence the thesis. Remark 5.4. Unfortunately, we are not able to show that Ad. 2 actually provides a complete converse to Ad.1. Indeed, it is not unlikely that the vanishing of the pullback H 2 (φ)([c m ]) is in fact a stronger condition than the vanishing of the Cartan invariant associated to the cocycle σ. A priori the condition i(σ) = 0 does not necessarily imply that the pullback induced by φ vanishes on c m . However, at the moment we are not able to construct an explicit example of such situation. On the other hand, our formulation of Theorem 4.2 suitably extends Burger-Iozzi's result [BI12, Theorem 1.1] in the setting of measurable cocycles. Indeed when σ is actually cohomologous to a non-elementary representation ρ, the pullback along φ boils down to the pullback along ρ (Proposition 3.17). Thus, we completely recover [BI12, Theorem 1.1] in this particular situation. Rigidity of the Cartan invariant In this section we discuss some rigidy results which can be deduced using the Cartan invariant of measurable cocycles. We first study the algebraic hull (Definition 2.15) of cocycles whose pullback does not vanish. Then, we characterize maximal measurable cocycles (Definition 4.9). We begin with the following result which is a suitable extension of Burger-Iozzi's result for representation [ If H 2 (Φ X )([c m ]) = 0, then L is an almost direct product K · M , where K is compact and M is isomorphic to PU(p, 1) for some 1 ≤ p ≤ m. In particular, the symmetric space associated to L is a totally geodesically embedded copy of H p C inside H m C . Proof. Since H 2 (Φ X )([c m ]) does not vanish, the restriction of the bounded Kähler class κ b m to H 2 b (L; R) does not vanish (Remark 4.3). Thus, L cannot be amenable. Since PU(m, 1) has real rank one, L is reductive with semisimple part M of rank one. We denote by K the compact term in the decomposition of L. 
Since K compact, whence amenable, we have that κ b m |K = 0 (see Remark 2.31 for the notation). Thus, M is a group of Hermitian type. Then, since M has real rank one, we have M ∼ = PU(p, 1) , for some 1 ≤ p ≤ m. Take an isomorphism π : PU(p, 1) → M such that H 2 cb (π)(κ b m ) = λκ b p for some λ > 0. If m ≥ 2, the map π : PU(p, 1) → PU(m, 1) corresponds to a totally geodesic embedding H p C inside H m C (which is holomorphic by the positivity of λ). When m = 1, the group π(PU(1, 1)) cannot correspond to a totally real embedding, otherwise λ = 0 by Theorem 3. Hence it must correspond to a complex geodesic and the statement is proved. Among the cocycles with non-trivial pullback, maximal ones can be completely characterized. Maximal cocycles always admit a(n essentially unique) boundary map. Indeed they are non-elementary, since the latters have trivial Cartan invariant. Theorem 5. In the situation of Setup 4.4, let (X, µ) be ergodic and let σ be a maximal cocycle. Let L be the algebraic hull of σ and let L = L(R) • be the connected component of the identity of the real points. Then, the following hold (1) m ≥ n; (2) L is an almost direct product PU(n, 1) · K, where K is compact; (3) σ is cohomologous to the cocycle σ i associated to the standard lattice embedding i : Γ → PU(m, 1) (possibly modulo the compact subgroup K when m > n). Proof. One can restrict the image of σ to its algebraic hull, which is completely characterized by Theorem 4. In this way we obtain a Zariski dense cocycle. The thesis now follows verbatim as in the proof of [SS21, Theorem 2]. Concluding remarks We conclude the paper with a short list of comments that relate the notion of Cartan invariant with more recent results in this field. These results have been obtained by combining the theory developed in this paper with new insights. Recently one of the authors has proved a statement analogous to Theorem 5 but with completely different techniques [Savd, Theorem 1.2]. The main new ingredient was the existence of natural maps associated to a measurable cocycles. Natural maps exist for ergodic Zariski dense cocycles, e.g. cocycles arising from ergodic couplings [Savc,Lemma 3.6]. The existence of natural maps also played an important role in the recent proof of the 1-tautness conjecture for PU(n, 1), with n ≥ 2 [Savc, Theorem 1]. This result provides a nice classification of discrete groups that are L 1 -measure equivalent to a lattice Γ ≤ PU(n, 1). In that situation the key point is to show that measurable cocycles arising from ergodic self-couplings associated to a uniform lattice Γ ≤ PU(n, 1) are maximal. This then implies that they are cohomologous to the standard lattice embedding by using the results of this paper. The notion of maximality introduced in [Savc] agrees with the one in Definition 4.9. This provides a wide family of cocycles which do not come from representations but they are cohomologous to them. Unfortunately the authors were not able to prove the 1-tautness conjecture directly with the use of the Cartan invariant. The main obstruction to this approach concerns the study of cup products of bounded cohomology classes, which is a highly non trivial subject [Heu,BM18,AB]. The study of lattices in PU(1, 1) was separated from this project because it contained some additional difficulties. 
Recently, one of the authors used some ideas of this paper to provide a complete characterization of the algebraic hull for maximal cocycles of surface groups [Sava], by extending Burger, Iozzi and Wienhard's notion of tightness to the wider setting of measurable cocycles.
DuctiLoc: Energy-Efficient Location Sampling With Configurable Accuracy Mobile device tracking technologies based on various positioning systems have made location data collection ubiquitous. The frequency at which location samples are recorded varies across applications, yet it is usually pre-defined and fixed, resulting in redundant information, and draining the battery of mobile devices. In this paper, we first answer the question “at what frequency should individual human movements be sampled so that they can be reconstructed with minimum loss of information?”. Our analysis unveils a novel linear scaling law of the localization error with respect to the sampling interval. We then present DUCTI LOC, a location sampling mechanism that utilises the law above to profile users and adapt the position tracking frequency to their mobility. DUCTI LOC is energy efficient, as it does not rely on power-hungry sensors or expensive computations; moreover, it provides a handy knob to control energy usage, by configuring the target positioning accuracy. Controlling the trade-off between accuracy and sampling rate of human movement is useful in a number of contexts, including mobile computing and cellular networks. Real-world experiments with an Android implementation show that DUCTI LOC can effectively adjust the sampling frequency to individual mobility habits and target accuracy level, reducing the energy consumption by 60% to 98% with respect to a baseline periodic sampling. I. INTRODUCTION Over the past decade, the pervasive usage of smart devices and location-tracking systems has made it possible to study and understand human mobility at unprecedented scales. A number of studies based on measurements from millions of data subjects have repeatedly demonstrated that human mobility is highly regular [5] and predictable [30], as people tend to follow the same patterns over and over, and they do so The associate editor coordinating the review of this manuscript and approving it for publication was Giovanni Pau . in ways that are clearly periodic. Regularity is easily found in the both spatial and temporal dimensions of the movements of most individuals. As an example, let us consider Fig. 1, which shows heatmaps of the locations visited by three random users in our reference dataset (presented later in Section III-A). Although these plots summarize three weeks of data, a small set of frequently visited places emerges for all users, along with systematic paths connecting them. Likewise, Fig. 2 illustrates the temporal regularity of the mobility of the same users, as a clear periodicity emerges from the time series of their visited locations. In this paper, we study whether the spatiotemporal regularity of human mobility entails the possibility of sampling individual movements at reduced frequencies in an adaptive way while allowing for the reconstruction of trajectories that retain a vast portion of their original level of detail. Intuitively, periodic visits to a limited set of important places through repeated routes may be captured with a smaller sampling effort, as opposed to completely random mobility that would need incessant and continuous sampling to be tracked effectively. The problem of identifying suitably reduced frequencies for human mobility sampling is in fact equivalent to posing the question ''at what frequency should one sample individual human movements so that they can be reconstructed from the collected samples with minimum loss of information?''. 
In this context, we investigate the effect that increasingly reduced fixed sampling frequencies have on the quality of tracked movement, via a signal processing approach: we consider mobility patterns as signals over time, and carry out a spectral analysis of human mobility. We found that the spectra of the movements of 119 individuals have very similar, flat shapes; this suggests the absence of convenient sampling frequency thresholds -even specific to single users -beyond which the error in the reconstructed trajectories drops significantly. Stimulated by this finding, we carried out a quantitative analysis of the user localization error in movements reconstructed from regular sampling at different periodicities. Our results unveil a linear scaling law of the error with respect to the span of the constant sampling interval. This law corroborates the outcome of the spectral analysis, and has significant practical implications, as it allows controlling the trade-off between accuracy and cost of measurements of human mobility. Examples of practical applications in a number of fields include, but are not limited to: (i) mobile computing, where overly frequent GPS localization unnecessarily reduces the battery life of mobile devices; (ii) location-based service design, where unwarranted users' position data collection raises significant privacy concerns; (iii) cellular networks, where active probing of subscribers' positions is a costly task whose rate must be duly optimized; (iv) trajectory data compression, where information loss must be minimized. It is therefore crucial to precisely control the trade-off between sampling rate and accuracy of human mobility. Inspired by these results, we focus on the first usage above (i.e., mobile computing and energy saving of tracking mobile devices) and develop DuctiLoc, a ductile localization technique that takes advantage of the newly unveiled linear scaling law to adjust the sampling frequency according to the user's mobility habits. We implement DuctiLoc as a mobile phone app and run experiments with real-world mobile users in different countries. Our experimental results highlight that: • DuctiLoc reduces energy consumption by 60%-98% with respect to a high-frequency periodic sampling, without compromising the tracking quality; • DuctiLoc successfully operates without relying on mobile device sensors such as accelerometer or gyroscope, which significantly reduces the usage of computational resources in the device; • DuctiLoc enables a unique explicit configuration of the desired location accuracy, and -by correspondingly adapting the sampling frequency-an indirect control over the battery drain of the tracking process, which is not possible with previous approaches. DuctiLoc is an effective, lightweight adaptive location sampling mechanism based on an original concept, which can operate in isolation, or complement more traditional techniques based on auxiliary sensors or/and alternative positioning systems. As such, DuctiLoc can support downstream applications that do not necessarily require high accuracy hence continuous sampling of the user position (such as navigation), and rather focus on different aspects like long stay periods at specific locations. Indeed, it is in these situations that smart sampling becomes an appropriate solution, as we better expound in the conclusions of the manuscript. The paper is organized as follows. We first discuss related studies in Section II. 
In Section III, we present the setup and insights of our quantitative analysis of human mobility as a signal. Building on those results, we present the design of DuctiLoc in Section IV, while its implementation and experimental performance evaluation are in Section V. Finally, conclusive remarks are drawn in Section VI. II. RELATED WORK Previous studies have considered techniques for a simplified representation of human mobility, as well as practical solutions for the adaptive utilization of localization functions in mobile devices. Our work is related to both topics, and we discuss the relevant literature next. We also remark that this paper extends an earlier version of the study, which was limited to the analysis of the effect of the sampling frequency on the quality of tracked movements [3]. The contributions of this previous study are presented and discussed in Section III, and serve as a basis for introducing our novel DuctiLoc mechanism. A. STREAMLINING HUMAN MOBILITY The problem of identifying the minimum sampling rate of individual movements is related to -but should not be confounded with-multiple well-researched topics in human mobility analysis. A large literature has addressed the problem of spatial data trajectory compression. There, the objective is to maintain the shape of a spatial trajectory while simplifying its representation; representative examples include, e.g. [19], [20] and the references therein. Consider the toy example in Fig. 3, where a user leaves home, trains at the gym before work, and later goes to a take-away restaurant. A trajectory FIGURE 1. Heatmaps of locations visited by three distinct individuals located in different countries during three consecutive weeks: people tend to move among a limited set of specific locations, following repeated patterns. Figures best seen in color. FIGURE 2. Location time series for three distinct users during three weeks: humans tend to revisit locations in a periodic fashion. The visited locations are mapped to sequential identifiers, upon discretization on a regular grid of 50 meters step. compression technique could approximate the shape of the spatial mobility as the sequence of home, junctions B and C, and take-away locations: then, map-matching based on these cardinal points would provide a fair description of the movement. However, trajectory compression neglects the temporal dimension of the mobility, portrayed in Fig. 3 as circles proportional to the time spent at each location. Our purpose is instead to recreate the complete mobility of the user, including these temporal features. The problem we address is also different from sampling to detect important locations [21], [22], or from simplifying GPS trajectories to preserve location semantics [23], [24]. In the example of Fig. 3, important location detection is solved by sampling the trajectory so as to model the original distribution of time spent at home, work, gym, and take-away. However, that factually ignores the time ordering of visits, and does not capture transitions between frequent locations. We aim at identifying a sampling process that captures all these characteristics. Finally, we are not interested in identifying and retaining locations that determine major changes in the heading of a trajectory [25]; nor we address the similar problem of calculating the current position of a target based on its traveled distance and direction of movement, known as dead reckoning [26], [27]. 
Indeed, we are not interested in simplifying a pre-recorded GPS trajectory, but in finding convenient sampling frequencies for human trajectory data. B. ADAPTIVE LOCALIZATION Our proposed solution dynamically changes the sampling frequency of GPS localization, which is an approach that has been widely investigated in the past. However, the vast majority of previous works utilize auxiliary sensors, or alternative positioning systems to trigger GPS recording. The accelerometer embedded in mobile devices has been the primary auxiliary system used in adaptive location-sensing mechanisms; for instance, accelerometer information allows minimizing the probability of exceeding a given positional error while maximizing the energy-saving when tracking users indoors [14]. Other sensors can complement or replace the accelerometer; as an example, the Bluetooth interface and the microphone can be used to implement an adaptive sampling of movements and locations associated to interesting events [16]. Such sensors can be further combined with a dedicated geographical zoning to limit GPS activation to important spatial movements [13]. Velocity information, e.g. from historical data, has also been used to trigger GPS localization only when required [17]. In other cases, alternative and less energy-hungry localization mechanisms are used as a partial replacement for GPS, or positioning data demanded by other apps running on the mobile device is reused to reduce GPS activation [18]. Our proposed solution, DuctiLoc, brings three major elements of novelty with respect to models in the literature. First, it develops a practical location sampling technique on top of novel insights on the representation of human mobility that were not capitalized upon before. Second, unlike previous approaches that rely on external sensors or systems, DuctiLoc does not need auxiliary elements beyond the main localization system (e.g. GPS); as proven by our experimental performance evaluation, this spares computational resources in the mobile device, and entails further savings on its battery consumption. Third, DuctiLoc VOLUME 11, 2023 is ductile, i.e. it allows configuring the desired level of localization accuracy, which can be also leveraged as a knob to adjust the energy usage of the localization process, and is an unprecedented feature in this type of tools. III. SAMPLING INDIVIDUAL MOBILITY FOR TRAJECTORY RECONSTRUCTION We aim at answering the question ''at what frequency should one periodically sample individual human movements so that they can be reconstructed from the collected samples with minimum loss of information?''. We approach the question in a systematic way, using a reference dataset collected by volunteers in several countries through diverse initiatives as presented in Section III-A. We first perform a spectral analysis of such data, in Section III-B: by considering human movements as a signal in time and studying its spectrum in frequency, we observe the absence of low sampling frequencies that can capture a large portion of the human mobility. Next, in Section III-C, we refine the result via an extensive quantitative investigation of the exact tradeoff between the sampling frequency and the quality of the recorded movement. A. REFERENCE DATASET In this study, we employ a dataset of real-world individual mobility data collected from three different initiatives. • The MACACO data was collected between July 2014 and December 2016 as part of the European collaborative project MACACO. 
The project collected GPS positioning information of volunteers located in Europe and South America with a regular sampling interval of one to five minutes, via a dedicated smartphone application. 1 • The OpenStreetMap (OSM) data was collected by volunteers who recorded and uploaded their trajectories as part of contributions to the OSM database. 2 The OSM project is a global crowdsourcing initiative aiming to map the whole world surface thanks to a vast community of supporters. The GPS traces uploaded by OSM participants typically feature 1-Hz frequency, and are freely available at the official OSM project website. • The Geolife data was collected in Beijing, China, by researchers of Microsoft Research Asia, between April 2007 and August 2012 [4]. It consists of GPS trajectories recorded through different GPS loggers and smartphone applications. Although sample rates vary significantly across users and time periods, the vast majority of Geolife positioning data is recorded at intervals from one to five seconds. Geolife traces are publicly available at the official project website. 3 In order to build a consistent reference dataset, we first homogenize the GPS trajectory data from the three sources above. Specifically, we segment the mobility traces of all users into one-week trajectories and analyze them separately, under the rationale that human activities have been shown to have a weekly periodicity [28], [29] hence weekly logs let us capture most of the regularity of human mobility. These weekly trajectories have heterogeneous quality in terms of completeness of the mobility information, and many feature relatively long periods with missing or erroneous data: we thus filter the trajectories, retaining only those that contain complete GPS records in at least six out of seven distinct weekdays. As a result, our reference dataset is composed of 1,052 weeks of the mobility of 119 different individuals, which cover sensibly different geographical spans and can encompass a single city or multiple continents. Tab. 1 provides a break down of the number of users and trajectories on a persource basis. Fig. 4(a)-(c) give further detail, portraying the distribution of the number of weekly trajectories associated to a particular user, separately for each data collection initiative. The plots show that the vast majority of users contribute one to ten weeks of movement data, hence ensuring that the data is representative of a fairly diverse base of individuals. Also, our dataset contains 4 weeks of data on average for each individual and up to 48 weeks for a single user: these observation periods are long enough that the data of a single individual often captures irregular patterns due to nonperiodic endeavours of the user. It is to be noted that the sampling periodicity in the retained weeks is not uniform. Despite the filtering on completeness, the different techniques employed to collect the GPS positioning information lead to uneven recording intervals across, and even within, the original data sources. In addition to this, weekly trajectories may have minor temporal gaps due to offline GPS receivers, or interruptions in the data collection service. Fig. 4d shows the Cumulative Distribution Function (CDF) of the sampling intervals observed in all one-week trajectories of our reference dataset. 
In 95% of cases, consecutive positioning samples are collected within 10 minutes of each other; in the case of the OSM and GeoLife sources, 90% to 95% of points are less than 10 seconds apart, as highlighted in Fig. 4e. In all cases, the sampling intervals above appear sufficient to capture human movements well on a weekly basis, and allow for a reliable approximation of the time-continuous mobility via a simple linear space-time interpolation on the available location samples. We then re-sample all interpolated trajectories with the same frequency, i.e., 5 minutes, and use the resulting set of homogeneously granular trajectories as the ground truth for the remainder of the study.

B. SPECTRAL ANALYSIS OF INDIVIDUAL MOBILITY
As we are interested in determining a proper sampling periodicity for human movements, a signal processing approach appears especially well suited. To this end, we explore mobility through the lens of Fourier transforms: we translate the trajectory data into the frequency domain, and carry out a thorough spectral analysis of their frequency components.

1) TIME SERIES REPRESENTATION OF MOBILITY
As a preliminary step to the spectral analysis, we need to transform individual GPS trajectories into unidimensional time series; this poses a challenge since, even when ignoring altitude information, points in geographical trajectories are obviously bidimensional. We carry out an extensive evaluation of approaches to reduce bidimensional movements to unidimensional signals, using approximated measures such as the movement velocity or the relative displacement from the center of mass, as well as applying transformations such as the enumeration of discretized locations along a Hilbert space-filling curve. However, all these techniques introduce an excessive amount of noise in the time series, which results in artificial or unrealistic patterns in the user movements. The difficulty of identifying a reliable univariate representation of mobility leads us to opt for a parallel study of the two dimensions of the geographical space, by considering them in isolation. Instead of using the absolute values of latitude and longitude as unidimensional time series, we replace them with the signed latitude and longitude displacements from the corresponding center of mass of the one-week trajectory. Formally, the displacements of the n-th positioning sample in a trajectory are denoted as φ̃[n] and λ̃[n] for latitude and longitude respectively, and computed as

φ̃[n] = φ[n] − (1/N) Σ_{k=1}^{N} φ[k],   (1)
λ̃[n] = λ[n] − (1/N) Σ_{k=1}^{N} λ[k],   (2)

where φ[n], λ[n] are the latitude and longitude coordinates of the n-th GPS point, and N is the number of samples in the weekly trajectory. Other than making time series more easily comparable across users and weeks, the transformations in (1)-(2) have the desirable property of generating zero-mean signals whose spectra have no DC component. Illustrations of our unidimensional description of individual movements are in Fig. 5, for two one-week trajectories; a short code sketch of this transformation is given below. By considering the transformation above on the two geographical dimensions in isolation, we do not introduce errors; yet, we may lose properties that only emerge when the two dimensions are considered jointly. To verify whether such a problem exists, we analyze the correlation between the isolated latitude or longitude displacements and the actual traveled distance in the bidimensional space. Fig. 6 shows the per-source correlation coefficients, as well as the linear fitting on trajectories from the MACACO data.
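To make the transformation in (1)-(2) concrete, the following is a minimal sketch (not the paper's own code) that turns a resampled weekly trajectory into the two zero-mean displacement signals; the function name and the assumption that latitude and longitude come as plain NumPy arrays are ours.

```python
import numpy as np

def displacement_signals(lat, lon):
    """Signed displacements from the weekly center of mass, as in (1)-(2)."""
    lat = np.asarray(lat, dtype=float)
    lon = np.asarray(lon, dtype=float)
    # Center of mass of the one-week trajectory: per-coordinate mean over the N samples.
    lat_disp = lat - lat.mean()
    lon_disp = lon - lon.mean()
    return lat_disp, lon_disp
```

Both outputs are zero-mean by construction, so their spectra have no DC component, as noted above.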
We observe consistently good correlations in all cases, and conclude that both dimensions, when taken separately, still provide fair approximations of the overall mobility. Interestingly, the VOLUME 11, 2023 correlation is always stronger for longitude than for latitude, indicating that participants to all data sources tend to move along an East-West axis rather than along a South-North one; we hypothesize that this may be due to the topology of the cities where data collections took place. 2) FREQUENCY SPECTRA OF HUMAN MOBILITY We apply a Fast Fourier Transform (FFT) to the latitude and longitude displacement signals inferred from each one-week trajectory, so as to compute their spectral representation. The frequency spectrum of a signal yields information about the sampling frequency needed to reconstruct the original time series with a small error. For an ideal signal, whose spectrum drops to zero after some frequency threshold f s (i.e. the bandwidth of the signal), the Nyquist-Shannon sampling theorem guarantees that a sampling rate 2f s is enough to achieve a lossless reconstruction of the original signal from its samples. For practical signals, the spectrum is not strictly limited, but it features limited amounts of noise. In those cases, the spectrum is mostly concentrated within a finite support, and shows a negligible amount of power beyond the frequency threshold; again, sampling at a rate twice the threshold allows reconstructing the original signal with minimum error. Fig. 7 shows the spectra of the latitude (top) and longitude (bottom) displacement signals of a representative selection of one-week trajectories. The plots in first two columns refer to the signals in Fig. 5. The original spectra are in light blue, while a moving average curve that better displays the overall trends is in dark blue. The time granularity of the trajectory data, i.e. 5 minutes, sets the spectrum boundaries at ±0.003 Hz frequencies, while the vertical orange lines outline the frequencies that correspond to sampling intervals of 10 minutes (farthest from the central frequency), 1 hour, and 12 hours (closest to the central frequency). While we only present visualization for a subset of the data in Fig. 7, we found the overwhelming majority of spectra to be very much alike those in the reported plots. Based on the spectra, we make two important remarks: (i) the hundreds of very diverse trajectories in our dataset all yield spectra with very similar shapes; (ii) the spectral shapes do not show evidence of a bandwidth threshold beyond which the signal power becomes clearly negligible, i.e. they do not identify a clear operational point for effective sampling. We can explain both these phenomena by considering that the unidimensional movement signals typically feature constant values linked by very steep transitions and deep spikes, as exemplified in Fig. 5: capturing such a behavior requires a near-infinite bandwidth, which results in spectra with a slow decay for high frequencies. In other words, although it exhibits a clear periodicity [30], human mobility is, in fact, a succession of long periods where individuals are almost static, with fast transitions between important locations [33]. While positions during stationary time intervals contribute to low-frequency spectral components and are hence easily captured by a sparse sampling, traveling causes discontinuities in the mobility signal and is much harder to sample. 
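The spectra discussed above can be reproduced with a short sketch along the following lines; this is an illustration under our own assumptions (5-minute sampling period, magnitudes expressed in dB), not the authors' analysis code.

```python
import numpy as np

def displacement_spectrum(signal, sample_period_s=300.0):
    """One-sided magnitude spectrum (dB) of a zero-mean displacement signal
    sampled every 5 minutes."""
    x = np.asarray(signal, dtype=float)
    freqs = np.fft.rfftfreq(x.size, d=sample_period_s)           # frequencies in Hz
    power_db = 20.0 * np.log10(np.abs(np.fft.rfft(x)) + 1e-12)   # small offset avoids log(0)
    return freqs, power_db

# Frequencies matching the sampling intervals marked in Fig. 7: 10 minutes, 1 hour, 12 hours.
markers_hz = {"10 min": 1 / 600.0, "1 h": 1 / 3600.0, "12 h": 1 / 43200.0}
```

On displacement signals such as those of Fig. 5, the resulting spectra show the slow high-frequency decay discussed above.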
As a result, the spectra do not reveal interesting operational points in the trade-off between the sampling frequency (and its associated energy cost) and the quality of the reconstructed signal. It is also worth asking whether the full spectra in Figure 7 provide too high-level a view of the transform and hide important details. In particular, sporadic and uncommon movement patterns of the users may generate important high-frequency components that may be difficult to spot in the complete spectrum plots. We investigate this aspect by looking at zoomed-in versions of the full spectra, which focus on the highest frequencies only, as in the examples in Figure 8. While we only provide results for two representative trajectories for the sake of brevity, all spectra yield the same behavior: the amplitude of high-frequency components (highlighted by the solid green line in the plots) is in all cases orders of magnitude lower (considering that the ordinate is in dB) than that of low-frequency components (outside the abscissa of the plots, but reported as the dashed red line). In other words, regular mobility patterns largely dominate over spurious ones, which confirms previous findings in the literature [5], [30].

C. QUANTITATIVE ANALYSIS OF TRAJECTORY SAMPLING
Although disappointing in a sense, the outcome of our spectral analysis calls for further investigation to confirm, better understand, and possibly model the apparently steady relationship between the quality and cost of individual mobility sampling. To this end, we perform an extensive quantitative analysis, and investigate the impact of different constant sampling frequencies on the quality of the mobility reconstructed from the collected samples.

1) MEASURING ERROR IN RECONSTRUCTED TRAJECTORIES
We first create downsampled versions of the one-week trajectories in our reference dataset, using a wide range of sampling intervals, from 10 minutes to 12 hours. We then reconstruct complete trajectories from the retained samples by linearly interpolating them. Finally, we assess how such time-continuous downsampled trajectories compare to the original ones: specifically, we measure the error in retrieving the individual trajectory from sampled data via the average Haversine distance. Given two points on the Earth's surface, p_a = (φ_a, λ_a) and p_b = (φ_b, λ_b), their Haversine distance is computed as

2R · atan2(√θ, √(1 − θ)),   (3)

where atan2 is the well-known function that returns an unambiguous phase value in Cartesian-to-polar coordinate conversion, θ = sin²(Δφ/2) + cos(φ_a) · cos(φ_b) · sin²(Δλ/2), with Δφ = φ_b − φ_a and Δλ = λ_b − λ_a. Also, R = 6,371 km is the Earth radius. The average Haversine distance of two (original and downsampled) trajectories is the mean of all Haversine distances between each original sample and its counterpart, i.e. the sample associated with the same timestamp, in the downsampled trajectory.

2) LINEAR RELATIONSHIP OF SAMPLING FREQUENCY AND ERROR
In the following, we report results aggregated on a per-user basis, since we find all trajectories of the same individual to yield very similar properties. Also, while we show in detail the results of a few representative users, all individuals in our reference dataset exhibit similar behaviors. Fig. 9 shows the evolution of the average Haversine error against the sampling interval for a selection of eight individuals; a code sketch of the error computation of (3) is given below.
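Before turning to the per-user plots of Fig. 9, here is a minimal sketch of the error computation: the Haversine distance of (3) and the average error between a ground-truth trajectory and its downsampled, linearly interpolated reconstruction. Function names, the uniform 5-minute ground-truth spacing, and the use of plain linear interpolation on latitude and longitude as NumPy arrays are our assumptions.

```python
import numpy as np

R_EARTH_KM = 6371.0

def haversine_km(lat_a, lon_a, lat_b, lon_b):
    """Haversine distance of (3); coordinates in degrees, result in km."""
    phi_a, phi_b = np.radians(lat_a), np.radians(lat_b)
    dphi = phi_b - phi_a
    dlmb = np.radians(lon_b) - np.radians(lon_a)
    theta = np.sin(dphi / 2) ** 2 + np.cos(phi_a) * np.cos(phi_b) * np.sin(dlmb / 2) ** 2
    return 2 * R_EARTH_KM * np.arctan2(np.sqrt(theta), np.sqrt(1 - theta))

def mean_reconstruction_error_km(t_s, lat, lon, sampling_interval_s):
    """Average Haversine distance between the original trajectory and the one
    rebuilt from samples retained every sampling_interval_s seconds."""
    step = max(1, int(round(sampling_interval_s / (t_s[1] - t_s[0]))))
    kept = slice(None, None, step)                   # indices of the retained samples
    lat_r = np.interp(t_s, t_s[kept], lat[kept])     # linear reconstruction at every
    lon_r = np.interp(t_s, t_s[kept], lon[kept])     # original timestamp
    return float(np.mean(haversine_km(lat, lon, lat_r, lon_r)))
```

Repeating this for intervals from 10 minutes to 12 hours yields the error curves of Fig. 9, discussed next.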
Coming back to Fig. 9, each plot presents results for all of the one-week trajectories of one person: as multiple weekly trajectories are aggregated in every plot, we outline the mean (dots), 25-75% quantiles (dark blue region), and 10-90% quantiles (light blue region) of the error, i.e. the average Haversine distance, measured over all trajectories of the user. Remarkably, a neat linear relationship characterizes all plots. Indeed, a simple linear model fits the mean error very well, as shown by the solid lines in Fig. 9:

H̄ = α · T = α / f.   (4)

In (4), H̄ is the average Haversine distance between the original and downsampled trajectories, T is the sampling interval, and f is its reciprocal, i.e. the sampling frequency. The result holds for all individuals in our dataset, as proven in Fig. 10a. The plot portrays the CDF, computed over all users, of the Root Mean Square Error (RMSE) between the linear fitting (i.e. the solid line in Fig. 9) and the mean Haversine distance at different sampling intervals (i.e. the dots in Fig. 9); in other words, the RMSE quantifies the accuracy of the linear approximation in representing how the distance varies with the sampling frequency. In 95% of the cases, the RMSE is below 250 meters, and it drops to 100 meters in 50% of the cases; these are reasonable values considering that our data subjects travel tens of km per day. The average RMSE value in Fig. 10a is computed over all sampling intervals, and the distribution of the error is not uniform across them: Fig. 9 shows how the error is in fact very small for short sampling intervals, and only increases when the location is sampled every 2 hours or more. In other words, the linear scaling is very accurate for sampling intervals below one hour; errors in the order of km, which may seem prominent in Figure 8, are incurred for infrequent sampling occurring at intervals of multiple hours. This proves that irregular patterns in the mobility of the same user do affect the variance of the Haversine distance for the same sampling frequency. Yet, the effect is directly proportional to the sampling interval, and becomes apparent in the presence of infrequent (i.e. hourly or above) sampling. As a result, irregular patterns degrade the quality of the linear model only in settings where the expected accuracy of the trajectory reconstruction is already low (i.e. in the order of kilometers). An important remark is that the only parameter of the fitting curve, i.e. the slope α, has an important physical meaning: it characterizes the ratio between the average Haversine distance and the sampling interval or, equivalently, it quantifies the mean additional error of the reconstructed trajectory when increasing the time that elapses between samples. Hence, it can be measured in meters per minute (m/min). From this perspective, our analysis indicates that adding one minute to the sampling interval used to track one individual leads to an additional positioning error of α meters in her recorded trajectory, irrespective of the absolute span of the sampling interval. When looking at the value of α, we remark that it is not identical across users: the plots in Fig. 9 also report the equation of the linear fit, and we can note some diversity in the values. The heterogeneity of α in our complete dataset is illustrated in Fig. 10b, which displays the CDF of the error-sampling ratio associated with our whole user base. Over 98% of users have slopes uniformly distributed between 1 and 6 m/min (a short sketch of the fit is given below).
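The per-user slope of (4) and the RMSE of the linear approximation reported in Fig. 10a can be obtained with a one-parameter least-squares fit through the origin; the following sketch is ours and assumes the mean errors have already been computed for a set of sampling intervals.

```python
import numpy as np

def fit_alpha(intervals_min, mean_errors_m):
    """Fit the zero-intercept model (4), mean error = alpha * interval.
    Returns alpha in m/min and the RMSE of the fit in meters."""
    T = np.asarray(intervals_min, dtype=float)
    H = np.asarray(mean_errors_m, dtype=float)
    alpha = float(T @ H / (T @ T))                       # closed-form slope through the origin
    rmse = float(np.sqrt(np.mean((alpha * T - H) ** 2)))
    return alpha, rmse
```

As reported above, the fitted slopes lie between 1 and 6 m/min for over 98% of our users.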
Hence, for the vast majority of individuals, the inaccuracy of their recorded trajectory grows by 1 to 6 meters for each minute added to their movement sampling interval. All the results presented above are derived based on our reference dataset of over 1,000 weeks of mobility of 119 different users. While the linear relationship of the sampling interval and accuracy of the reconstructed trajectory yields in all of the considered cases, we cannot generalize our conclusions beyond the dataset we have access to. Yet, as mentioned in Section I, the wide consensus in the scientific community about the widespread and high regularity and predictability of human trajectories lets us argue that our analysis may hold for a vast majority of the users. IV. DUCTILE LOCATION SAMPLING IN PRACTICE We leverage the insight on the existence of a single parameter α regulating the relationship between localization accuracy and sampling frequency to design a practical technique for ductile localization, which we name DuctiLoc. The rationale for our solution stems from the empirical observation that frequent sampling of GPS data tends to quickly drain the battery of a mobile device [1], [2]. A natural solution is then to sample the device position at a reduced frequency, which is dynamically adapted to the movements of the user. However, deciding which frequency should be employed at each time instant is not trivial, and the linear scaling law we identified in the previous section can be used to control the trade-off between energy consumption and localization accuracy. More formally, the concept underpinning DuctiLoc is that the knowledge of the α value that characterizes the mobility of a given individual is sufficient to estimate the sampling frequency needed to achieve a given localization error. Indeed, a sampling frequency f * =α/H * shall grant a target average errorH * , according to (4). Several important remarks are in order, as follows. • Given the high heterogeneity of α values observed in Fig. 10b, the relationship above allows adapting the sampling rate to the mobility habits of each individual in a significant way: for instance, the same localization accuracy can be achieved by sampling the position of volunteers in our dataset at a frequency that can vary sixfold, with obvious implications on the possible energy and resource savings for users whose mobility is characterized by lower α values. • The target average errorH * is a configurable parameter, which allows de-facto controlling the level of localization accuracy, and adapting the sampling frequency f * accordingly; indirectly, this also entails the possibility of controlling the energy consumption of the positioning tracking process, by varyingH * . • The linear model in (4) captures the mean positioning accuracy, hence f * can be intended as the minimum sampling frequency that guarantees an error H * averaged over all times; variance shall thus be expected in the instantaneous localization performance, and higher sampling rates can be used to combat that when appropriate, such as during periods of significant mobility. We build on the considerations above to design DuctiLoc, as detailed next. A. DuctiLoc DESIGN The operation of our location sampling technique is outlined in Alg. 1. DuctiLoc receives as input the target localization errorH * , which is the system parameter controlling the desired accuracy of the sampled trajectory; it also needs the value of the user-specific error-sampling ratio α, as defined in Section III-C2. 
Algorithm 1 DuctiLoc Pseudocode The algorithm first computes T max (line 1), i.e. the maximum time interval between any two sampling events. According to our previous discussion, we set T max = 1/f * , i.e. T max =H * /α, since this guarantees an average localization errorH * . The actual execution loop is then entered (lines 2-9). At each iteration, DuctiLoc samples the current position, collecting information on the latitude φ, longitude λ, and speed v (line 3); the latter information is usually provided by the positioning system itself, and can be computed from past location samples otherwise. The latitude and longitude information are used to update the location information, effectively sampling the user's trajectory (line 4). The velocity information is passed through an Exponentially Weighted Moving Average (EWMA) filter, so as to compute a robust estimatev of the user's speed over time (line 5). The speed estimatev is employed to increase the sampling frequency above f * = 1/T max when necessary. Specifically, if the user is found to be moving at a velocity that exceeds α, sampling at every T max generates errors to the right of the meanH * , or, in other words, reduces the localization accuracy below the target. While this is expected asH * is an average value obtained over the complete mobility of an individual, it makes sense to take advantage of the available information on the instantaneous velocity to improve DuctiLoc performance. Therefore, we usev to compute an alternative speed-based sampling interval T next , and select the minimum of T max and T next as the time to the next sample collection (lines 6-8). We remark that this design factually turnsH * from an expectation into an upperbound to the localization error, since high-mobility situations that would generate values aboveH * are countered with an increased frequency of sampling; clearly, the scheme does not provide a guaranteed bound, as its performance depends on the precision and responsiveness of the velocity estimation process. B. ESTIMATING THE α PARAMETER The DuctiLoc algorithm in Alg. 1 requires knowledge of the parameter α, which must be adjusted to the mobility of each individual as presented in Section III-C2 for users in our reference dataset. We propose two approaches to estimating α, as presented next. 1) COLD START The baseline solution to determine α is running DuctiLoc in a cold start mode when first launched. In this mode, our scheme collects positioning information at a high frequency for a sufficient amount of time. This allows collecting training data about the mobility of the current user, and running the same procedure presented in Section III-C to compute the value of α: (i) downsampling the recorded data for intervals up to 12 hours, (ii) computing the average Haversine distance between each downsampled trajectory and the original one, and (iii) computing the slope of a linear fitting between the distance and the downsampling interval. We remark that this is equivalent to employing a traditional fixed sampling for the training period, while benefiting from DuctiLoc dynamic sampling afterwards; or, it can be achieved by leveraging historical movement data of the user, if available. In our experiments, we set the cold start sampling interval at one minute, and the collection period to two weeks. These settings allow for an accurate estimate of α that attains a good trade-off of localization accuracy and energy consumption reduction, as shown later in Section V. 
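For illustration, a minimal sketch of the sampling loop of Alg. 1 follows. The EWMA weight, the get_fix/store placeholders standing in for the positioning system and the trajectory database, and the exact form of the speed-based interval T_next (here chosen so that the error accrued at the current speed stays close to the target) are all our assumptions; the text only specifies that the interval shrinks whenever the estimated speed exceeds α.

```python
import time

def ductiloc_loop(target_error_m, alpha_m_per_min, get_fix, store, ewma_weight=0.3):
    """Hypothetical sketch of DuctiLoc's adaptive sampling loop (Alg. 1)."""
    t_max_min = target_error_m / alpha_m_per_min   # T_max = H*/alpha gives the target mean error
    speed_est = 0.0                                # EWMA estimate of the user's speed (m/min)
    while True:
        lat, lon, speed_m_per_min = get_fix()      # sample position and instantaneous speed
        store(lat, lon)                            # update the recorded trajectory
        speed_est = ewma_weight * speed_m_per_min + (1 - ewma_weight) * speed_est
        if speed_est > alpha_m_per_min:
            t_next_min = target_error_m / speed_est   # assumed form of the speed-based interval
        else:
            t_next_min = t_max_min
        time.sleep(60 * min(t_max_min, t_next_min))   # wait until the next sampling event
```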
2) WARM START The cost of the cold start mode can be avoided if minimal information about the mobility of the user is available when DuctiLoc is first run. The warm start mode takes advantage of the fact that α can be inferred from high-level historical statistics on the movement patterns of an individual. Tab. 2, computed on all users in our reference dataset, shows that the value of α correlates well with a range of features that have been largely used in the literature to characterize individual VOLUME 11, 2023 FIGURE 11. Overview of the implementation of DuctiLoc. The (i ) location sampling app leverages DuctiLoc, coded as (ii ) an Android library that relies on (iii ) the basic functionalities of the OS. The app uploads the measurement data to (iv ) an external project server. TABLE 2. Pearson's correlation of α with mobility features. For the p−values, we adopt the common notation: * p < 0.1, ** p < 0.05, and *** p < 0.001. movement patterns. The Pearson's correlation coefficients are especially high for features that capture the breadth of movements of a person, i.e. the radius of gyration, number of visited locations, or area of the convex hull of visited locations: this implies that individuals who have more varied mobility incur an average localization error that grows faster under a sparser sampling. More generally, the observed substantial correlations point at the possibility of predicting the value of α from user-specific mobility statistics. Indeed, a simple Multiple Linear Regression (MLR) model using a combination of five features with low collinearity (i.e. radius of gyration, total travelled distance, displacement, fraction of time the user is mobile, and regularity) achieves a coefficient of determination R 2 of 0.88, i.e. allows explaining 88% of the variance of α, with p-values lower than 0.05. Therefore, if similar statistics were available to the user based on previous mobility patterns, DuctiLoc could leverage them to derive a very good approximation of the value of α, without a need for the high-frequency sampling a cold start of the method relies upon. We remark that such information about the movement of the user can be represented with minimal data size, and is fully privacy preserving, which ease its permanent storage with respect to, e.g. complete trajectory data. C. DuctiLoc IMPLEMENTATION We implement DuctiLoc as a self-contained library for the Android operating system (OS), which can then be embedded in any Android app. In order to experiment with our technique, we also develop a dedicated Android location sampling app that relies on the DuctiLoc library, and transfers the collected data to a centralized server for further processing and analysis of the system performance. Fig. 11 illustrates the organization of our DuctiLoc implementation. The diagram is separated into four components: (i) the location sampling app, (ii) the DuctiLoc library, (iii) the underlying Android OS, and (iv) the external server used for data collection. The location sampling app, parametrized with the values ofH * and α, requests location updates using the DuctiLoc library. The library internally relies on the Google Fused Location Provider to obtain location samples from the Android OS. Once DuctiLoc obtains a new location, it saves both the location and the current instant speed (calculated in our implementation using the previously collected sample) in an internal database. 
Next, DuctiLoc updates the EWMAbased speedv and calculates the time T to the next sample collection as per Alg. 1. The location sampling app receives the location samples from the DuctiLoc library, and saves them in an internal database. At the same time, the app is also responsible for collecting statistics about its own utilization of device battery and CPU, via the Android dumpsys tool. Dumpsys is a tool built into the Android OS, which allows obtaining precise information about the status of running services. Inside the location sampling app, a synchronization service compresses all collected records and stores the compact files in the internal storage of the mobile device; the data is finally sent to the external server when a Wi-Fi connection is available. V. EXPERIMENTAL EVALUATION We evaluate the performance of DuctiLoc via real-world experiments involving a small set of six volunteers located in countries in Europe and South America. The testers were carefully selected so as to compose a representative group of individuals with markedly different mobility habits, from local dwelling to regular long-distance commuting, as proven by the results in Section V-B. Next, we first present the measurement campaign setup, and then report and discuss its results. A. CAMPAIGN SETUP All volunteers installed our experimental suite in their personal Android smartphones, upon signing a consent form that detailed the purpose of the study and of the collected information. Specifically, each data subject run for the duration of the evaluation campaign three different apps in parallel, denoted as follows: • DuctiLoc: the location sampling app that relies on the DuctiLoc library to perform the sampling process, as presented in Section IV-C. As historical information about the mobility of the volunteers was not available at the start of the experiments, the cold start mode of DuctiLoc was employed, by collecting positioning data at 1-minute resolution over two consecutive weeks as previously mentioned. • Fixed-sampling: an app performing location sampling at a fixed periodicity of one minute, which is used as (i) a way to collect ground-truth information about the trajectory of the user, and (ii) a worst-case performance baseline for energy consumption and CPU usage in the mobile device. • Sensor-based: an app using an accelerometer-based location sampling process, by exploiting the fact that low-power accelerometer sensors embedded in modern smartphones can identify user movements so as to to avoid unnecessary location sampling in stationary conditions [36], [37]. More precisely, at every three minutes the app collects a short burst of measurements from the in-built accelerometer of the mobile phone, during two seconds. If the variance of values in the measured burst is greater than a threshold, a movement is detected, and a location sample is collected. For the choice of threshold, we selected the value 0.5 (m/s 2 ) 2 . For the selection of this value, we got inspired by the paradigm proposed in [38]. We firstly applied a variety of values on our data collected by our volunteers as thresholds for detecting movement. We found 0.5 (m/s 2 ) 2 to give the best results. Then, we employed the data used in [39] to separately study the acceleration variance for humans under active, driving, walking and inactive state. We found driving to be the mobile status with the lower acceleration variance, with a median value of 0.72 (m/s 2 ) 2 . 
Consequently, we chose 0.5 (m/s 2 ) 2 as a conservative threshold for our comparison experiments. All three apps collect and send to the same external server the following data: (i) device-related information, including a unique random identifier for data pseudonymization, the Android version, and the manufacturer, model, and release of the device; (ii) the collected location samples, including the latitude, longitude, horizontal accuracy, and timestamp of the location; (iii) the energy consumption information, including the estimated power utilization of the app, in mAh; (iv) CPUrelated information, including the total CPU time used by the app, in ms. The collected data allows comparing the three sampling methods in terms of energy consumption and CPU usage. Additionally, it lets us evaluate the localization error incurred by the trajectories reconstructed from DuctiLoc and the reference accelerometer-based solution with respect to the ground truth provided by the fixed-sampling baseline. Experiments were carried out for a continued period of seven weeks. During this period, we had the volunteers vary the target error parameter of DuctiLoc, so as to investigate the capability of the mechanism to adapt to different user's settings; to this end, the data subjects updated the setting ofH * ∈ {50, 100, 250, 500, 1000} meters at regular time intervals of two weeks. B. RESULTS As a preliminary result to our performance evaluation, we show the linear fittings to the average Haversine distance as a function of the sampling interval, observed during the cold start phase of DuctiLoc. Fig. 12 summarizes the outcome for all volunteers. The different slopes and their associated coefficients (reported in the legend) illustrate that the volunteers involved in our experiments yield a variety of α values. Specifically, we observe values of α ranging from 1.23 m/min to 7.0 m/min, which are even more diverse values than those found in Section III-C2 for a user base of 119 individuals. Ultimately, this demonstrates that our choice of volunteers, although limited in size, encompasses a substantial variety of mobility patterns. 1) LOCALIZATION ACCURACY The main performance figures are in Fig. 13, which portray (i) the sampling rate, in the top plot, and (ii) the localization accuracy, in the bottom plot, for both DuctiLoc and the sensor-based solution that relies on accelerometer information. Results are shown as a function of the target errorH * used in DuctiLoc. We observe in the top plot that DuctiLoc can effectively adopt the sampling frequency to the expected accuracy, which results in a variable interval between subsequent samples. The average value of the interval varies from ten minutes, whenH * is set to 50 m, to more than four hours, whenH * is 1000 m. The sensor-based solution is obviously insensitive to the target error, and yields a fairly constant sampling interval with a mean value around 1.5 hours. The fixed-sampling approach clearly results in an identical rate of one sample per minute for anyH * , which is not shown in the plot as it would be barely visible. The difference above reflects on the mean localization error, computed using the average Haversine distance as explained in Section III-C1. Note that the positioning data returned by the fixed-sampling baseline is used in this case as the ground truth for that calculation: in other words, the result in the bottom plot of Fig. 
13 can be interpreted as the error incurred by DuctiLoc and the sensor-based location sampling with respect to the trajectory information obtained with the fixed-sampling approach. The localization error of DuctiLoc grows withH * as expected; in fact, it is remarkably close to the target, which demonstrates the capability of our method to adapt to the user requirements in terms of expected accuracy of position tracking. Instead, the accuracy of the sensor-based solution fluctuates between 500 m and 800 m. This difference is to be ascribed to the fact that experiments with diverse values ofH * were carried out in separate weeks, during which the volunteers may have modified their mobility patterns due to, e.g. special events, vacations, or work and personal businesses. Here, the fact that the average sampling interval recorded by DuctiLoc is very consistent with the selectedH * further proves the robustness of our method to fluctuations in the movement habits of the user. It is worth remarking that it is possible to fine-tune the sensor-based location sampling so as to reduce the error; for instance, this can be achieved by tailoring the sensitivity threshold for the variance of the accelerometer values. However, no inherent relationship exists between that threshold and the localization accuracy; also, the correct threshold will likely vary on an individual basis. Therefore, finding the correct parametrization of the sensor-based sampling that ensures a given localization error is reduced to a cumbersome trial-and-error task. Instead, the direct relation unveiled in Section III-C2 and used by DuctiLoc allows our solution to elegantly overcome this kind of parametrization problem. Also, one may argue that the absolute accuracy performance of the sensor-based approach can be improved, e.g. by reducing the 3-minute interval between accelerometer data burst collections. While this is true, increasing the usage of the sensor would also grow the consumption of batteries and computational resources in the mobile device; and, those are already penalizing the sensor-based solution with respect to DuctiLoc under the current settings of accelerometer querying frequency. Such a behavior is discussed next, as part of our observations on the device resource utilization under different sampling schemes. 2) ENERGY EFFICIENCY AND CPU USAGE Reducing the location sampling frequency has a direct positive impact on the utilization of resources in the mobile device, as the OS positioning system (which possibly involved activating or probing the GPS receiver) is less solicited. Therefore, by adapting the sampling rate to the minimum value required to achieve the target localization accuracy, DuctiLoc is expected to yield savings on both battery and CPU. This is demonstrated in Fig. 14, which shows the fractional gains in power consumption, in the top plot, and processing time, in the bottom plot, attained by DuctiLoc with respect to the fixed-sampling baseline. The plots illustrate the performance as a function of the target errorH * : depending on that parameter, the reduction of battery usage ranges between 60% and 98%, and that of CPU is from 40% to 94%. An important remark is that the parameterH * allows controlling the level of energy efficiency of the localization process. Indeed, Fig. 14 depicts a clear negative correlation between the value ofH * and the resulting power and CPU consumption. 
Therefore, by acting on the desired accuracy input to DuctiLoc, the user has an effective knob to tune the demand for system resources entailed by the location sampling. This is not the case with the sensor-based approach, whose performance is also portrayed in Fig. 14: the power and CPU time consumption are fairly constant across experiments with different values ofH * , and the benchmark solution does not offer direct ways to control the system resource utilization. Also, we note that the average reduction in energy usage of the sensor-based app is around 90%, whereas the CPU time required for localization is reduced by 25% to 49%. Thus, DuctiLoc largely outperforms the sensor-based approach in terms of CPU utilization cut, as it can save two to three times more computational resources. In terms of battery consumption, the trade-off between localization accuracy and power drain can be appreciated by comparing the bottom plot of Fig. 13 and the top plot of Fig. 14. Again, the outcome is clearly in favor of DuctiLoc: for instance, whenH * is set to 250 m, our solution yields a slightly lower energy usage while providing a 75% lower positioning error. VI. DISCUSSION AND CONCLUSION We found that the average error incurred by trajectories reconstructed from periodic samples scales linearly with the constant sampling interval, as shown in [3]. The result was first identified through measurement data collected by 119 users during more than 1,000 weeks, and then confirmed in a small-scale experimental evaluation with 6 volunteers; the consistence of the linear behavior across our heterogeneous user base lets us hypothesize that such a scaling law could be a universal property of human mobility. The linearity of the relationship between error and sampling interval explains the absence of an operational point for the effective sampling of human movements, which was also corroborated by the outcome of a spectral analysis of individual movement patterns. Therefore, the trade-off between localization accuracy and sampling frequency can be fully modeled via a single parameter, i.e. the slope α of the linear scaling law. The slope quantifies the added error induced by a unit increase in the sampling interval, and we proved that it can vary by almost one order of magnitude depending on the target person. Building on these insights, we designed DuctiLoc, a ductile location sampler that is lightweight and can adjust the sampling frequency to the preferred accuracy level of each individual. Experiments with real-world users demonstrated that DuctiLoc offers control on the positioning error, which can be also used as a means to effectively modulate the usage of energy and CPU resources by the localization process in the mobile device. In addition to the practical application above, the seemingly general linear scaling law of the positioning error with respect to the sampling interval may also be a very useful tool in a number of other contexts. Thus, the importance of controlling the trade-off between sampling rate and accuracy of human mobility is further evidenced. Examples include applications in the following domains. • In Location-Based Services (LBS) the excessively frequent collection of user locations is expensive from both energy and communication perspectives, and raises privacy concerns. Here, our results may help more informed decisions on the minimum frequency of position querying that can support each service. 
For instance, in location-based social network services where users share their attraction routes, favourite hotspots or traffic jam events, the important information lies in the visited locations and the amount of time spent there [40], [41], [42]; in these scenarios, it is important to be able to fetch locations only during specific (and long) stays, and our solution can help achieving that efficiently, avoiding unnecessary tracking in between those stays. • Precise knowledge of subscribers' locations is valuable information for mobile operators, for both network management and value-added service development [32]. Yet, operators make today very limited use of active probing to update the location of inactive user equipment, as it is an expensive procedure, and favor less controllable passive measurements [31]. In typical 4G/5G deployments, user positions are probed in a deterministic way, with a periodicity of a few hours that is identical for all subscribers. Operators could instead VOLUME 11, 2023 use our findings to develop active probing solutions that are not uniform across the whole user population, but are instead tailored to the mobility of each subscriber, ensuring lower cost and higher accuracy. • Mobility data compression is an open field of research, which seeks solutions aiming at simplifying individual trajectories so as to allow storing them in very large amounts [19], [20]. As discussed in Section II-A, our problem is a superset of the trajectory compression one. Therefore, our findings can also be leveraged to help the task of trajectory compression and make it more viable in practice at the data collection stage, by sampling the user movements with a loss of information that is controllable. To conclude our discussion, we would like to clarify the limitations of our work. First, our analysis in Section III is based on a dataset of trajectories of 119 individuals, whereas our experimental evaluation of DuctiLoc in Section V relies six testers; while the acknowledged regularity of human mobility yields promises for the wider applicability of our results, at this time we cannot generalize them beyond such data subjects and additional tests would be needed to that end. As a second point, the question we posed in Section I and the subsequent analyses aim at being as general as possible. However, specific applications that leverage some form of human mobility sampling may have unique requirements that do not fit our approach. For instance, different services may rely on forms of accuracy that are not well represented by our average Haversine error; they may have requirements in terms of maximum variance of the accuracy that is not captured by our mean analysis; or they may accommodate sampling approaches that are not constant but adaptive to, e.g. the social or environmental context of the user or device. It is thus important to understand that our study does not aim at providing direct support to the design of any specific practical service relying on mobile device localization: instead, our investigations are a sensible starting point for the tailored design of applications built on top of trajectory sampling. PANAGIOTA KATSIKOULI received the Diploma and M.S. degrees in computer engineering and informatics from the Polytechnic University of Patras, Greece, in 2011 and 2013, respectively, and the Ph.D. degree in informatics from the University of Edinburgh, U.K., in 2017. 
Since 2017, she has been a Researcher in various institutes, such as Inria, University College of Dublin, and Technical University of Denmark. She is currently a Researcher with the University of Copenhagen. Her research interests include distributed algorithms, blockchain technology, human mobility, data analytics, applications of machine learning, and distributed algorithms for mobility data. MARCO FIORE (Senior Member, IEEE) received the dual M.Sc. degree from the University of Illinois Chicago, IL, USA, and the Politecnico di Torino, Italy, the Ph.D. degree from the Politecnico di Torino, and the Habilitation a Diriger des Recherches (HDR) degree from the Université de Lyon, France. He held tenured positions as an Associate Professor at the Institut National des Sciences Appliquées, Lyon, France, and a Researcher at Consiglio Nazionale delle Ricerche, Italy. He has been a Visiting Researcher at Rice University, TX, USA; the Universitat Politecnica de Catalunya, Spain; and University College London (UCL), U.K. He is currently a Research Associate Professor with the IMDEA Networks Institute and a CTO at Net AI Tech Ltd. He leads the Networks Data Science Group at IMDEA Networks Institute, which focuses on research at the interface of computer networks, data analysis, and machine learning. He is a member of ACM. He was a recipient of a European Union Marie Curie Fellowship and a Royal Society International Exchange Fellowship. VOLUME 11, 2023
Automatic linear measurements of the fetal brain on MRI with deep neural networks Timely, accurate and reliable assessment of fetal brain development is essential to reduce short and long-term risks to fetus and mother. Fetal MRI is increasingly used for fetal brain assessment. Three key biometric linear measurements important for fetal brain evaluation are Cerebral Biparietal Diameter (CBD), Bone Biparietal Diameter (BBD), and Trans-Cerebellum Diameter (TCD), obtained manually by expert radiologists on reference slices, which is time consuming and prone to human error. The aim of this study was to develop a fully automatic method computing the CBD, BBD and TCD measurements from fetal brain MRI. The input is fetal brain MRI volumes which may include the fetal body and the mother's abdomen. The outputs are the measurement values and reference slices on which the measurements were computed. The method, which follows the manual measurements principle, consists of five stages: 1) computation of a Region Of Interest that includes the fetal brain with an anisotropic 3D U-Net classifier; 2) reference slice selection with a Convolutional Neural Network; 3) slice-wise fetal brain structures segmentation with a multiclass U-Net classifier; 4) computation of the fetal brain midsagittal line and fetal brain orientation, and; 5) computation of the measurements. Experimental results on 214 volumes for CBD, BBD and TCD measurements yielded a mean $L_1$ difference of 1.55mm, 1.45mm and 1.23mm respectively, and a Bland-Altman 95% confidence interval ($CI_{95}$) of 3.92mm, 3.98mm and 2.25mm respectively. These results are similar to the manual inter-observer variability. The proposed automatic method for computing biometric linear measurements of the fetal brain from MR imaging achieves human level performance. It has the potential of being a useful method for the assessment of fetal brain biometry in normal and pathological cases, and of improving routine clinical practice. Introduction Human fetal brain development is a complex process that involves significant changes in volume, structure, and maturation in a unique spatio-temporal pattern. Abnormal fetal brain development can have significant short and long-term consequences on the newborn. Consequently, accurate quantitative assessment of fetal brain growth is essential for early diagnosis of developmental disorders. Ultrasound (US) is currently the primary imaging modality to monitor fetal development. Magnetic Resonance Imaging (MRI) is increasingly used for fetal brain assessment in cases of inconclusive US findings, to confirm or reject suspected abnormalities, and to detect other developmental abnormalities. MRI-based routine clinical assessment of fetal brain development is mainly subjective, with a few biometric linear measurements. Similar to US-based evaluation, these measurements are compared to MRI reference of growth centiles of normal developing fetuses [1,2]. Three key biometric linear measurements used in routine clinical assessment on fetal brain MRI are Cerebral Biparietal Diameter (CBD), Bone Biparietal Diameter (BBD), and Trans Cerebellum Diameter (TCD). 
These measures are performed manually on individual MRI reference slices by clinicians following established guidelines [3,4], which differ from the guidelines for US-based measurements, specify how to establish the scanning imaging plane, how to select the reference slice in this volume for each measurement, and how to identify the two anatomical landmarks for the linear measurement. The CBD and BBD measurements are performed on the same slice, and are drawn perpendicular to the mid-sagittal line (MSL). The TCD is measured on a different reference slice by selecting the two antipodal landmark points on the fetal brain cerebellum contour, giving the diameter of the cerebellum. Manual measurements require clinician training, are time consuming, and suffer from intra-and inter observer variabilities [5]. Since fetal brain measurements are small, i.e., 30-100mm, especially at early gestational age, small measurement errors may cause a significant shift in the corresponding fetal growth centile, leading to misdiagnosis and misguided pregnancy management [6]. Developing automatic methods for computing biometric fetal brain measurements presents numerous technical challenges. First, the method should follow the guidelines and steps explicitly and implicitly performed by the clinician, i.e., localization of the fetal brain in the MRI volume, selection of the reference slice, identification of the fetal brain, skull and cerebellum contours and mid-sagittal line, and selection of anatomical landmarks for each linear measurement. Each of these stages presents unique and significant computational challenges. Additional challenges include the variability of the MRI scanning planes, resolutions, contrasts and protocols, pathological fetal brain conditions, and fetal motion artifacts, all of which may affect image quality, and yield inaccurate measurements and significant observer variability. In this paper, we present the first fully automatic method for computing three key biometric linear fetal brain measurements in MRI, i.e., CBD, BBD, and TCD. Related work To the best of our knowledge, there are no published reports of automatic biometric linear measurement methods of the fetal brain MRI. However, a variety of methods have been reported that are relevant to the five stages of our method. We review them next. Fetal brain ROI detection and segmentation: Torrents-Barrena et al. [7] presented a comprehensive review of methods for segmentation and classification of fetal structures on US and MRI. Dudovitch et al. [8] presented a method for fetal brain ROI detection and segmentation based on two 3D U-Nets: one for ROI localization and one for ROI voxel classification. We used the fetal brain ROI localization 3D U-Net as the first stage of our method. Reference slice selection: Baumgartner et al. [9] described a real-time CNN-based fetal US slice selection method, focusing on the temporal aspect of the reference slice selection, which differ from our problem as there is more than one reference slice solution. Pallenberg et al. [10] described a template-based spatial slice selection for CT. However, this method cannot handle the significant variability in fetal brain morphology with gestational age. Various methods have been described for the selection of planes and computation of fetal linear measurements in 3D US scans. Li et al. 
[11] describe a method to compute the trans ventricular (TV) and the trans cerebellum (TC) planes, which are similar to the CBD/BBD and the TCD slice planes in MRI, respectively. Ryou et al [12] describe a method for selecting planes and computing the crown-rump-length (CRL), head circumference (HC) and abdomen circumference (AC). However, these problems differ from ours in that 3D US provides dense, isotropic spatial information while the fetal MRI is sparser and has different noise and sampling characteristics. In addition, the fetal MRI scan may have spatial motion artifacts that hamper the reconstruction of a spatial volume from planes. Fetal brain component segmentation: Despotović et al. [13] presented a survey of model-based methods for segmentation of adult brain components in MRI. Most methods, e.g. FreeSurfer [14], use a brain atlas and require registration. They are not directly applicable to fetal brain scans since the fetal brain size, shape, and structure changes rapidly during gestation. Others have developed atlases for the various gestational ages for segmenting brain structures [15]. These methods require accurate 3D non-rigid registration, which is time-consuming and may be inaccurate [16]. More recently, deep learning methods have been developed for the segmentation of fetal brain structures [17] in fetal MRI scans. However, this method is not applicable to the problem at hand since it does not differentiate between the left and right hemispheres. Mid-sagittal line (MSL) computation: to the best of our knowledge, there are no papers in the literature that describe methods for fetal brain mid-sagittal line computation. However, two types of methods for the computation of the adult brain MSL in MRI scans have been developed [18]: shape-based methods [19] and content-based methods in which the MSL is computed from the line that maximizes the brain's bilateral symmetry [20]. These methods are designed for T1-weighted adult brain MRI scans and rely on skull stripping prior to segmentation. This task is more challenging on T2-weighted fetal brain scans, as the fetal skull contrast is different and its boundaries are fuzzy. To summarize, while biometric linear measurements of the fetal brain are an essential part of fetal development assessment, they are currently performed manually. While automatic methods for the computation of US-based biometric linear measurements are available, e.g., biparietal diameter [21,22], fetal head circumference [23] and femur length [21], no such methods are available for fetal MRI. Method We present a fully automatic method to compute three key fetal biometric measurements, CBD, BBD and TCD, from fetal brain MRI. The input is a fetal MRI volume. The outputs are the measurements and reference slices in which the measurements were computed. The method follows the clinical guidelines for manual fetal MRI measurements [4]. The pipeline consists of five stages ( Fig. 1): 1) computation of a Region of Interest (ROI) of the fetal brain with an anisotropic 3D U-Net classifier; 2) reference slice selection with a convolutional neural network (CNN); 3) slice-wise fetal brain structure segmentation with a multiclass U-Net classifier; 4) computation of the fetal brain MSL and fetal brain orientation, and; 5) computation of CBD, BBD and TCD measurements. The method performs self-assessment of reliability and alerts clinicians when the measurements may be unreliable. 
Our method relies on supervised deep learning techniques for the first three stages, which requires offline training and online inference (Fig. 2). In the offline training phase, the networks of the first three stages are trained individually on annotated training and validation datasets. In the online inference phase, these networks are used for inference. For the reference slice selection (stage 2), the CNN is trained twice, one to select the reference slice for CBD/BBD measurements and one for TCD measurement. Fig. 2: Offline training (left) and online inference (right) phases. The offline training phase consists of the first three stages (rectangles). It inputs labeled data for each of the stages (ovals) and outputs four trained networks (parallelograms). The online inference phase uses these networks for classification followed by the last two stages. Fetal brain ROI detection The first stage computes the fetal brain ROI in the fetal MRI volume. The ROI is a 3D axis-aligned bounding box that contains the fetal brain. It is computed using a custom anisotropic 3D U-Net with a Dice loss function described in [8]. Briefly, the network inputs a ×4 downscaled version of the fetal MRI volume and outputs a coarse fetal brain voxel classification from which a tight bounding box is computed. This network achieves 100% ROI detection rate on a network trained with very few (~10) manually labeled volumes. Reference slice selection The next stage is the selection of the two reference slices on which the measurements are performed: one for the CBD/BBD, and another one for the TCD. The slices were selected with a CNN classifier trained for each type of reference slice using transfer learning in three steps: 1) fetal brain ROI pre-processing; 2) slice probability prediction of each slice in the fetal brain ROI, and; 3) reference slice selection based on the computed probabilities. First, the fetal brain ROI, which is a grey-value matrix of size ℎ × × where ℎ and are the fetal brain bounding box height and width and is the number of slices, is resized to × × where = 1.5 × max(ℎ, ) so that the slices are square. The volume is cropped by the resized ROI and resized to the size required by the ResNet50 network (224 × 224). Note that only the ROI was changed, and not the volume itself, thus preserving its original aspect ratio and scale. The grey values are computed with bilinear interpolation and normalized to the [0,1] range. Next the probability of a slice to be the reference slice is computed with a modified version of the ResNet50 [24] CNN pre-trained on the ImageNet dataset. The two modifications are: 1) the last multi-class classification layer is replaced with a two-class softmax classification layer, and; 2) the same grey voxel values are input to each of the three RGB channels. Finally, the reference slice is selected as the slice with the highest probability. The two networks are used for reference slice selection are trained and used for the inference in the same way. In the offline training phase, slice-wise image augmentations, e.g., random cropping, rotations, horizontal and vertical flip are applied on-the-fly to the fetal brain ROI slices in each epoch. To compensate for class imbalance, i.e. only one slice out of ~25 in each volume is a reference slice, a small subset < − 1 of non-reference slices are randomly selected as negative examples (in practice = 2 yielded the best results). The network is trained with the Binary Cross Entropy loss function for 30 epochs. 
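As an illustration of the slice-selection classifier described above, the following is a hedged PyTorch sketch, not the authors' implementation: a pre-trained ResNet50 with its final layer replaced by a two-class head, fed grey-level ROI slices replicated on the three RGB channels, with the highest-probability slice retained. The torchvision weights enum and the helper names are our assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

def build_slice_classifier():
    """ImageNet-pre-trained ResNet50 with a two-class (reference / non-reference) head."""
    net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, 2)
    return net

@torch.no_grad()
def select_reference_slice(net, roi_slices):
    """roi_slices: float tensor of shape (S, 224, 224) with grey values in [0, 1].
    Returns the index of the slice with the highest reference probability."""
    net.eval()
    x = roi_slices.unsqueeze(1).repeat(1, 3, 1, 1)   # same grey values on all 3 channels
    probs = torch.softmax(net(x), dim=1)[:, 1]       # probability of being the reference slice
    return int(torch.argmax(probs)), probs
```

The two-stage fine-tuning schedule used for this classifier is given next.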
In the first 10 epochs, all layers are trained; in the next 20 epochs, only the last classification layer is trained. Fetal brain structure segmentation This stage performs multiclass semantic segmentation of all fetal brain MRI slices computed in the previous stage into four fetal brain components: cerebellum, left and right cerebrum hemispheres, and background. The segmentation is performed with a 2D U-Net with a ResNet34 encoder pre-trained on the ImageNet dataset. In the offline training phase, slice-wise image augmentations, e.g., random horizontal and vertical flips and brightness and contrast adjustments, are applied on-the-fly to the fetal brain slices within the ROI in each epoch. Brightness and contrast adjustments have been shown to improve unseen domain generalization in both MRI and ultrasound [25]. The network is trained with the Lovász loss function [26] for 24 epochs. In the first 12 epochs, only the decoder layers are trained; in the next 12 epochs, both the encoder and decoder layers are trained. In the online inference phase, post-processing is applied to each slice output segmentation by nearest-neighbor interpolation followed by zero-padding (background class) to obtain the original slice size (h × w). Mid-sagittal line and brain orientation computation The MSL and the brain orientation are computed from the ROI and the fetal brain structure segmentation. The MSL is computed as the maximum-margin line that separates the left and right fetal cerebral hemispheres with a Support Vector Machine (SVM) classifier with a linear kernel, i.e., by minimizing ½‖w‖² + C Σ_i max(0, 1 − y_i(w·x_i + b)), where x is the pixel coordinates vector, w = (w_0, w_1) is the linear kernel weights vector, y is the cerebral hemisphere index (−1 left, +1 right), C is a predefined regularization parameter, and b is the bias. Fig. 3: Illustration of the mid-sagittal line (MSL) and brain orientation computation from the fetal brain structures: fetal cerebellum (green), left (blue) and right (red) fetal brain hemispheres: (a) the MSL (yellow) and its normal line (blue) passing through its middle point, computed from the two intersection points of the MSL and the fetal brain ROI; also shown are the intersection between the MSL normal and the ROI boundary and an arbitrary point inside the cerebellum; (b) two fetal MRI slices with and without the cerebellum; the closest inferior (I) and superior (S) points in the slice without the cerebellum are marked. The brain orientation, i.e., inferior/superior, is directly computed from the anatomical location of the cerebellum, which is inferior to the cerebral hemispheres (Fig. 3a). The MSL intersects the fetal brain ROI at two points, from which the midpoint C is computed. The line normal to the MSL that passes through C intersects the fetal brain ROI boundary at a further point. Next, an arbitrary point inside the cerebellum is sampled and classified with respect to the sign of the cross-product of the vector from C to the intersection point with the vector from C to the cerebellum point. Since the cerebellum is inferior to the brain hemispheres, all points whose cross-product sign is positive/negative are in the inferior/superior part of the brain. This computation is performed on all the slices that contain the cerebellum and then applied to the slices without the cerebellum by computing the Euclidean nearest-neighbor distance in the slice plane (Fig. 3b). This yields a mid-sagittal line for each slice in the fetal MRI volume. Linear CBD, BBD and TCD measurements computation The final stage computes the CBD, BBD, and TCD measurements with a geometric method akin to that used by expert radiologists.
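A minimal sketch of the mid-sagittal line fit and the orientation test described above is shown below, using scikit-learn's LinearSVC as a stand-in for the paper's SVM and purely synthetic hemisphere masks. The regularization constant, the mask geometry, and the sign convention of the cross-product test are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.svm import LinearSVC


def fit_msl(left_mask: np.ndarray, right_mask: np.ndarray):
    """Fit a separating line w0*x + w1*y + b = 0 between left (-1) and right (+1) pixels."""
    xy_left = np.argwhere(left_mask)[:, ::-1].astype(float)    # (x, y) pixel coordinates
    xy_right = np.argwhere(right_mask)[:, ::-1].astype(float)
    X = np.vstack([xy_left, xy_right])
    y = np.hstack([-np.ones(len(xy_left)), np.ones(len(xy_right))])
    svm = LinearSVC(C=1.0).fit(X, y)            # C=1.0 is a placeholder value
    (w0, w1), b = svm.coef_[0], svm.intercept_[0]
    return w0, w1, b


def is_on_positive_side(line, cerebellum_xy, roi_shape) -> bool:
    """Side test: sign of the 2D cross product about the MSL midpoint C."""
    w0, w1, b = line
    h, _ = roi_shape
    # Intersections of the (near-vertical) MSL with the top and bottom ROI borders;
    # assumes w0 != 0, which holds when the hemispheres are separated left/right.
    p0 = np.array([-b / w0, 0.0])
    p1 = np.array([-(b + w1 * (h - 1.0)) / w0, h - 1.0])
    c = (p0 + p1) / 2.0                          # midpoint C of the MSL inside the ROI
    n = np.array([w0, w1])                       # direction normal to the MSL, through C
    v = np.asarray(cerebellum_xy, float) - c     # vector from C to the cerebellum point
    return (n[0] * v[1] - n[1] * v[0]) > 0       # sign convention is arbitrary here


if __name__ == "__main__":
    left = np.zeros((64, 64), bool)
    right = np.zeros((64, 64), bool)
    left[10:50, 5:30] = True
    right[10:50, 34:60] = True
    msl = fit_msl(left, right)
    print("cerebellum point on the positive side:", is_on_positive_side(msl, (32, 55), (64, 64)))
```

In the paper the sign is anchored anatomically (the cerebellum defines the inferior half); here the boolean only distinguishes the two halves of the slice separated by the MSL normal.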
The CBD measurement is computed in the reference slice from the MSL, the brain orientation, and the brain structure segmentation (Fig. 4a,b,c). First, the cerebrum width profile perpendicular to the MSL is computed from the cerebral brain segmentation boundary. Next, the Sylvian Fissure location is computed by finding the local minimum of the width profile that is closest and superior to the brain mass center along the MSL. The CBD is the maximal width of the cerebral hemispheres superior to the Sylvian Fissure and perpendicular to the MSL. The BBD measurement is computed by extending the CBD line to the skull contour on the same reference slice (Fig. 4c,d). First, the intensity derivative along the extended line is computed. Second, the local maxima of the derivative are detected. Next, the inner skull contour pixels are identified by selecting the point with the maximum value from the two local extrema closest to the segmented cerebral brain boundary above a predefined threshold. The threshold value is used to filter out MR scanning imaging artifacts on the CSF, which appear as dark lines or spots and therefore may cause noise when analyzing the intensity extrema. The TCD measurement is defined as the maximal diameter of the convex hull of the cerebellum contour of the fetal brain segmentation on the reference slice (Fig. 5). Computation reliability estimation Each step in the pipeline includes an automatic evaluation of its reliability. When no warnings are issued, the CBD, BBD and TCD measurement values are deemed accurate and trustworthy. When a warning is issued in one or more stages, the radiologist can inspect the result, make manual corrections as appropriate, or disregard the results. This reliability estimation may facilitate the use of the proposed method in a clinical environment. Computation reliability warnings are issued for: 1) unreliable reference slice selection (stage 2), when the probability of the selected slice is below a predefined threshold (the preset value is 0.5, determined empirically). This heuristic is based on the observation that the slice selection network tends to be underconfident; 2) unreliable fetal brain structure segmentation (stage 3) and/or fetal brain orientation (stage 4), when the brain orientations computed for five random points sampled on the cerebellum differ (our experimental results show that five points are sufficient). This heuristic is based on the assumption that the cerebellum is inferior to the brain hemispheres; therefore, an inconsistency in brain orientation may suggest that the cerebellum segmentation is incorrect. Note that this heuristic is targeted at identifying the cases in which the fetal brain segmentation causes the failure of the mid-sagittal line or fetal brain orientation computation, and is not designed to detect all possible segmentation errors; 3) unreliable mid-sagittal line (stage 4), when the mid-sagittal line angles of adjacent slices differ; 4) unreliable BBD measurement (stage 5), when the measurements on the original and CLAHE-enhanced [27] reference slices differ. The latter is determined by computing the measurement values on the reference slice with contrast limited adaptive histogram equalization (CLAHE) with a tile size of 20×20 and a clipping limit of 0.01.
The rationale for this approach is that the fetal cerebrospinal fluid (CSF) might yield intensity inhomogeneity and imaging artifacts, so enhancing and equalizing the contrast may enhance the borders between the CSF and the brain parenchyma; 5) unreliable TCD measurement (stage 5), when the line angles between two methods of measurement differ by more than 10° (pre-set value determined empirically): (a) the cerebellum convex hull diameter; and (b) the cerebellum bounding box long axis. Fig. 5b shows an example of unreliable TCD measurement detection. This heuristic is based on the characteristic butterfly-like shape of the fetal cerebellum in the coronal slice. When the cerebellum segmentation is incorrect, the symmetry may be affected, causing the line misalignment. Note that the reliability of the computation of the fetal brain ROI (stage 1) is described in our previous paper [8] and has already been validated there. The fetal brain ROI computation produced correct results on all the datasets of our study. Experimental results To evaluate our method, we collected fetal brain MRI volumes, annotated them, and conducted six experimental studies. Dataset annotation Manual reference slice selections and CBD, BBD and TCD measurements were obtained for all volumes in the full dataset with ITK-SNAP [28] by the senior pediatric neuro-radiologist co-author (LBS). The mean time required for reference slice selection and manual measurement for all three measurements per volume was 110 secs (range 60-150 secs). Validated slice-based fetal brain structure segmentations were obtained on a subset of slices from the full dataset. The initial fetal brain segmentation was obtained with the method in [8] for 63 volumes (1,389 slices) from the training dataset. The resulting segmentations were post-processed by removing small connected components, selecting the 2-3 largest connected components (left and right fetal brain hemispheres and the cerebellum when present in the slice) and performing spectral clustering with discretization [29]. Of the resulting segmentations, 1,108 slices were reviewed and approved by a knowledgeable co-author (OBZ, a graduate student who had learned from the expert radiologist how to perform this task). Evaluation metrics We used the following metrics to quantify and compare the accuracy and variability of the annotations. Linear measurement differences were defined as |m1 − m2|, where m1 and m2 are two linear measurement values. Slice selection differences were defined as |s1 − s2|, where s1 and s2 are two selected slice indices. The slice selection accuracy was defined as 1 − |s1 − s2|/N, where N is the number of volume slices. The MSL angle difference was defined as |angle(l1) − angle(l2)|, where l1 and l2 are two MSLs measured on the same volume. We used the Bland-Altman method [30] to estimate the agreement between two sets of measurements. Agreement was defined by the 95% confidence interval, CI95. For two sets of measurements M1 and M2, each with n measurements, CI95 = 1.96 × std(M1 − M2), where std is the standard deviation of the paired differences; the bias is the mean of the paired differences. Interobserver variability The interobserver variability for the three manual linear measurements was established for a subset from the training set (n=45) by computing the bias, difference and agreement metrics of the CBD, BBD and TCD measurements between two expert radiologists (co-authors EM and LBS). Table 1 shows the results.
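Under the definitions above, the evaluation metrics reduce to a few lines of NumPy. The snippet below is a plain transcription (bias as the mean of the paired differences, CI95 as 1.96 times their sample standard deviation) with made-up numbers; it is only meant to make the formulas explicit.

```python
import numpy as np


def measurement_diff(m1: float, m2: float) -> float:
    return abs(m1 - m2)                       # |m1 - m2|, in mm for linear measurements


def slice_selection_accuracy(s1: int, s2: int, n_slices: int) -> float:
    return 1.0 - abs(s1 - s2) / n_slices      # 1 - |s1 - s2| / N


def msl_angle_diff(angle1_deg: float, angle2_deg: float) -> float:
    return abs(angle1_deg - angle2_deg)


def bland_altman(set1, set2) -> tuple:
    """Return (bias, CI95) for two paired sets of measurements."""
    d = np.asarray(set1, float) - np.asarray(set2, float)
    bias = d.mean()                           # mean paired difference
    ci95 = 1.96 * d.std(ddof=1)               # 95% agreement interval half-width
    return bias, ci95


if __name__ == "__main__":
    manual = np.array([42.1, 40.3, 44.8, 41.0])   # toy CBD values in mm
    auto = np.array([41.5, 40.9, 45.2, 40.2])
    print(bland_altman(manual, auto))
```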
Table 1: Observer variability for the CBD, BBD and TCD measurements between two expert radiologists: Bias, 95% confidence interval, and difference. Studies and results The first study evaluated the overall performance of the method and of the self-assessment of reliability. The next four studies evaluated the performance of the various steps of the method and justified the algorithmic choices. The last study evaluated the performance of the method with gestational age and fetal brain abnormalities. The entire pipeline was tested on the test dataset, which is disjoint from the training set. The results for the entire pipeline for the full dataset are provided because our method includes both model-based and machine-learning methods. The training dataset was divided into various subsets that were used for both training and validation of the five stages of the fetal biometric measurements pipeline. The size of the training and validation datasets for each stage was determined according to the specific characteristics of the step. For the fetal brain ROI detection, the pretrained network in [8] was trained with 6 volumes and validated with 29 volumes. For the reference slice selection, 49 volumes were used for training and 43 for validation for both the CBD/BBD and the TCD slice selection networks. For the fetal brain structure segmentation, 53 volumes were used for training and 10 for validation. For the linear measurements computation, which includes the fetal brain orientation computation, and the computation reliability estimation, 45 volumes were used for the parameters value selection. The training, validation, and test datasets are disjoint for all experiments. Biometric measurement evaluation. This study evaluated the accuracy of the computed CBD, BBD and TCD measurements and of the self-assessment of reliability. All results are below the inter-observer variability range. The reliability self-assessment on the full dataset identified uncertainty in 28 volumes out of 214 (13%) for the TCD measurement, 8 volumes out of 214 (3%) for BBD/CBD slice selection, 3 volumes out of 214 (1.5%) for TCD slice selection, 3 volumes out of 214 (1.5%) for the BBD measurement, and no unreliable volumes for the brain orientation and the mid-sagittal line angle computation. The reliability self-assessment method also improved the variability measures on all datasets. For the test dataset, the CI 95 decreased from 3.94mm to 3.09mm (27%), from 3.26mm to 2.84mm (15%) and from 3.27mm to 2.17mm (50%) for the CBD, BBD and the TCD measurements respectively. Reference slice selection: network selection. This study compared the accuracy of five different networks that were used for reference slice selection: ResNet18, ResNet34, ResNet50, DenseNet121 [31] and VGG16 [32] pretrained on ImageNet. For each, we trained the network with its recommended hyperparameters on 1,130 slices from 49 volumes and validated them with different training and validation subsets of the training dataset on the TCD reference slice selection (Table 3). ResNet50 has the highest accuracy on the validation split with no significant difference in accuracy between all ResNets and DenseNet. Table 3: Accuracy of five reference slice selection networks for TCD reference slice selection task for the training and validation splits. The selected network results are shown in bold. We then trained the ResNet50 network for CBD/BBD and TCD reference slice selection on different training and validation subsets of the training dataset. 
We evaluated the reference slice selection accuracy with respect to the manual reference slices selected on the test dataset (Table 4). All networks achieved a mean difference of < 1 slice. Computation reliability estimation validation. This study evaluated the reliability estimation mechanism on the test dataset. An experienced radiologist visually inspected all the 50 volumes of the test dataset and graded each of the three measurements, CBD, BBD and TCD, as valid or unacceptable. The selection results were then compared to the computation reliability warnings that were automatically issued by the method. Fig. 6: Confusion matrices of the reliability estimation computation: manual expert radiologist evaluation (vertical axis) vs. automatic reliability estimation: (a) CBD; (b) BBD; (c) TCD measurement. Fig. 6 shows the confusion matrices for each measurement. The accuracy of the reliability estimation computation is 80%, 78% and 82% for the CBD, BBD, and TCD measurements, respectively. Note that the expert accepted 78%, 82% and 74% of the CBD, BBD and TCD measurements, respectively. For the cases that the expert did not accept but the method did, the variability of the three measurements was 2.95mm, 1.65mm and 1.92mm, respectively, which is lower than the inter-observer variability. This means that the computed measurement is clinically acceptable. Upon inspection, the main reason for which the method did not rule out these cases is the measurement angle, which was not covered by our heuristics. Performance with gestational age and abnormalities. This study examines the impact of the fetal gestational age (GA) and fetal brain abnormalities on the entire pipeline on the full dataset, after elimination of the cases flagged by the reliability estimation. First, we compute the mean deviation ratio from the absolute value and its standard deviation for each measure for each GA. Fig. 7 shows the results. The mean error as a function of GA is consistently the same across all measures, except for the lower GAs in the BBD measurements (week 24, with a 4% deviation vs. 2% for all other GAs) and the TCD measurements (week 22, with a 6% deviation vs. 3% for all other GAs). However, this discrepancy may be caused by the fact that we have a single sample for this GA (week 22). Second, we analyze the impact of fetal abnormalities by computing the joint distribution of the error and fetal GA vs. case diagnosis (normal vs. abnormal). Fig. 8 shows the results; the marginal distributions on the top and on the right show the gestational age and the measurement mean deviation distributions of the normal (blue) and abnormal (orange) cases, respectively. Note that the marginal distributions of errors for normal and abnormal cases over all measurements are similar. This means that the subject condition does not affect the system performance. Further investigation shows that the specific anomalous cases that have a large deviation in the CBD/BBD measurements are from the Malformation of Cortical Development (MCD) diagnosis. The MCD group shows high variability of brain disorders, which can affect the measurements. However, cases with dolichocephaly and extra-axial fluid diagnoses, which are the ones that should be detected with the CBD/BBD measurements, performed well, with a measurement deviation of < 2mm, even though there is no representation for these groups in the training dataset. Conclusions To the best of our knowledge, this is the first fully automatic method that computes biometric linear measurements of the fetal brain from MRI according to accepted clinical guidelines.
It is based on a hybrid approach combining deep learning methods and geometrical algorithms for fetal brain ROI detection, fetal brain component segmentation, MSL and brain orientation estimation, and the three main biometric linear measurements. Two unique features of our algorithm are a new reference slice selection method using CNNs, and a new method for self-assessment of reliability to alert clinicians when the computed measurements may be unreliable. We believe that the methodology and experimental results presented here can be useful for developing methods for automatically computing biometric linear measurements in volumetric scans. The pipeline is generic in its first three stages and in its approach to reliability self-evaluation. The deep learning methods used rely on a few dozen annotated datasets, which makes the approach practical. The proposed method achieves human-level performance while handling the high input variability encountered in clinical use: a variety of gestational ages, pathological fetal brain conditions, and diverse MRI scanning parameters. It therefore may be useful in the assessment of fetal brain biometry and in improving routine clinical practice.
Altered Pattern of Macrophage Polarization as a Biomarker for Severity of Childhood Asthma Purpose Asthma causes a substantial morbidity and mortality burden in children and the pathogenesis of childhood asthma is not completely understood. Macrophages are heterogeneous with divergent M1/M2 polarization phenotypes in response to various stimulations during the inflammatory process. We aimed to investigate the pattern of macrophage polarization and its association with severity and exacerbation in asthmatic children. Patients and Methods Normal and asthmatic children aged 4–18 years were enrolled for 12 months. Children with asthma were further subgrouped according to their severity and the requirement for hospitalization during exacerbations. Clinical data were obtained from medical records. Peripheral blood samples were collected to analyze macrophage polarization, including M1, M2, and subsets, by flow cytometry. Results Fifty-one asthmatic cases and 27 normal controls were included in this study. The level of PM-2K+CD14+ but not PM-2K+CD14− was decreased in asthmatic children. The levels of M2a (CCR7−CXCR1+), M2b (CCR7−CD86+), and M2c (CCR7−CCR2+) subsets, but not M1 (CCR7+CD86+), were increased in asthmatic children. The levels of M1 were decreased, but the levels of M2c were increased, in children with moderate asthma compared to those with mild asthma. The levels of PM-2K+CD14+ cells and M1 subsets were decreased, but the M2c subset cells were increased in asthmatic children requiring hospitalization during exacerbations. Conclusion Macrophage polarization may be involved in the pathogenesis of childhood asthma and is a potential biomarker of childhood asthma disease severity. Introduction Asthma and allergic diseases are the most common chronic inflammatory diseases in children, causing a substantial morbidity and mortality burden. In severe cases, the symptoms are frequently not controlled, even with intensive guideline-based therapy. 1 There is still a lack of complete understanding of the pathogenesis of asthma. Therefore, biomarkers for the evaluation and intervention of severe asthma in children are needed. CD 14 is a glycoprotein strongly expressed on the cell surface of monocytes and macrophages and it plays a critical role in the activation of innate immune activity and the Toll-like receptor (TLR) signaling pathway. 2 Previous research revealed that human monocytes were identified as classical monocytes (high CD14 expression) and nonclassical monocytes (low CD14 expression). Classical monocytes promote innate immune responses such as phagocytosis, cell adhesion, and migration, whereas nonclassical monocytes mediate primarily complement and Fc gamma-mediated phagocytosis with adhesion. 3 Monocyte-derived macrophages are believed to be important in the asthmatic airway, although their mechanisms of action remain to be defined. 4 Macrophages dynamically take part in the initial stage of inflammation and the late stage of resolution. 5,6 To identify mature tissue macrophages and distinguish macrophages from other monocyte-derived cell populations, PM-2K was recently developed as a marker for the culture of human macrophages. 7 The validity of PM-2K as a marker of macrophages has been proven in alveolar macrophages isolated from human lung biopsy samples. 7 I In response to pathogen and allergen stimulation, alveolar macrophages are polarized into classically activated macrophages (M1 cells) and alternatively activated macrophages (M2 cells). 
M1 cells express proinflammatory cytokines such as TNF-α and IL-1β to induce lung inflammation and tissue damage. M2 cells can be further divided into M2a, M2b, and M2c subsets in different microenvironments. M2a cells produce the allergy-related cytokines IL-4 and IL-13, whereas M2c cells produce the anti-inflammatory cytokine IL-10. 8,9 M2a and M2c cells are involved in the initiation, inflammation resolution, and tissue remodeling in the various stages of asthma. 10 Our previous study showed that the patterns of circulating macrophage polarization in peripheral blood are associated with the severity, lung function, and control status of adult asthma patients. 11 Currently, it is known that the pathogenesis, phenotypes, and factors associated with severity are quite different between childhood and adult asthma. 12 However, the roles of circulating macrophages and their subsets, M1 and M2 macrophages, in the pathogenesis of childhood asthma remain unclear. In the present study, we revealed, for the first time in the literature, that circulating macrophages and their subsets M1, M2a, M2b, and M2c are associated with severity as well as exacerbations that require hospitalization of children with asthma. Study Population and Isolation of Monocytes The study population enrolled patients with childhood asthma from the outpatient departments of 3 hospitals belonged to Kaohsiung Medical University including one medical center (Kaohsiung Medical University Hospital) and two community hospitals (Kaohsiung Municipal Ta-Tung Hospital; Kaohsiung Municipal Hsiao-Kung Hospital) in southern Taiwan. Patients who met the following inclusion criteria were eligible for enrollment: 1. under the age of 18 years of age, and 2. physician-diagnosed asthma. Physicians' diagnosis of asthma and its severity were made according to an operational description suggested by the Global Initiative for Asthma (GINA) guidelines. Patients with other serious systemic diseases, such as congenital heart diseases, neuromuscular disorders, or autoimmune diseases, were excluded. Normal subjects were enrolled from among children who visited the outpatient department for routine health examinations or vaccination. The severity of asthma and asthma exacerbations were defined according to step therapy in the Global Initiative for Asthma (GINA) guidelines, as in our previous studies. Patients receiving step 1 and 2 therapy were defined as having mild asthma, whereas patients receiving step 3 therapy were defined as having moderate asthma. The criteria for hospitalization for patients with asthma exacerbation at the emergency department (ED) or outpatient department (OPD) were based on the suggestions from the guidelines for inpatient treatment during asthma exacerbations. The flowchart of the patient enrollment criteria is summarized in Figure 1. The protocol was approved by the Institutional Review Board of Kaohsiung Medical University Hospital (KMUHIRB 20130037). All participants provided their written informed consent signed by their parents or legal guardians. This study was conducted in accordance with the Declaration of Helsinki. After informed consent was obtained, peripheral blood samples were obtained from the healthy individuals and asthmatic patients. Nested casecontrol comparisons were implemented, depending on the availability of the respective samples at the time of analysis. 
Flow Cytometry A multicolor flow cytometric method was established to identify and distinguish circulating macrophages, which were defined by the expression of PM-2K in the Ficoll-isolated peripheral blood mononuclear cells (PBMCs) of study patients; appropriate isotype controls were used. PBMCs were sequentially stained with human Fc receptor binding inhibitor (eBioscience), purified anti-macrophage antibody (PM-2K, Serotec), and anti-mouse IgG-FITC. After washing, the cells were stained with purified anti-CXCR1 antibody (eBioscience) followed by anti-mouse IgG-Qdot 585 (Thermo Fisher Scientific, Waltham, USA). Statistical Analysis Statistical analysis was performed using GraphPad Prism (Version 5, GraphPad Prism Software, Los Angeles, CA, USA). The Mann-Whitney U-test was used to determine the difference between normal subjects and patients. The Kruskal-Wallis test with post hoc Dunn's multiple comparison test was used to determine the differences between subgroups of patients. Results In the case-control study population, 51 asthmatic children and 27 normal controls were enrolled. There was no significant difference in age or sex distribution between the cases and controls (Table 1). The Levels of the M1 Macrophage Subset Were Similar in the Peripheral Blood of Asthmatic and Normal Healthy Children We next investigated whether the pattern of macrophage polarization was different in asthmatic and normal healthy children. Macrophages expressing CCR7+CD86+ were defined as the M1 subset, according to our previous study. The levels of the M2a, M2b, and M2c subsets (Figure 4A-C) of PM-2K+CD14+ macrophages were significantly higher in asthmatic children. However, the levels of the M2a, M2b, and M2c subsets of PM-2K+CD14− macrophages were not significantly different between asthmatic and normal healthy children (Figure 4D-F). Macrophage Subsets Discriminated the Severity of Asthmatic Children Next, we investigated whether the levels of macrophage subsets are different in asthmatic children with varying severity. The levels of PM-2K+CD14+ (Figure 5A) and PM-2K+CD14− (Figure 5B) macrophages were not significantly different between mild and moderate asthmatic children. Interestingly, the levels of the M1 subsets of both PM-2K+CD14+ (Figure 5C) and PM-2K+CD14− (Figure 5D) macrophages were significantly decreased in children with moderate asthma in comparison to those in children with mild asthma. Regarding the M2 subsets, the levels of M2c (Figure 5G), but not M2a (Figure 5E) or M2b (Figure 5F), in PM-2K+CD14+ macrophages were significantly increased in children with moderate asthma in comparison to those in children with mild asthma. Similarly, regarding the M2 subsets in PM-2K+CD14− macrophages, only the levels of M2c (Figure 5J) were significantly increased in children with moderate asthma in comparison to those in children with mild asthma. In contrast, the levels of M2a (Figure 5H) were borderline decreased in children with moderate asthma in comparison to those in children with mild asthma. Macrophage Subsets Discriminated Asthmatic Children Requiring Hospitalization During Exacerbation Based on the findings that the levels of macrophage subsets were associated with severity in asthmatic children, we investigated whether the levels of macrophage subsets in asthmatic children were associated with the requirement for hospitalization during asthma exacerbations.
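For reference, the two nonparametric tests named in the Statistical Analysis section above are available in SciPy; the sketch below applies them to made-up macrophage-subset percentages with the study's group sizes (Dunn's post hoc comparison is not part of SciPy and would need a dedicated package, so it is omitted here).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
controls = rng.normal(10, 2, 27)       # toy M2c percentages, healthy controls (n=27)
asthmatics = rng.normal(14, 3, 51)     # toy M2c percentages, asthmatic children (n=51)

# Two-group comparison (normal vs. asthmatic): Mann-Whitney U-test.
u, p_mw = stats.mannwhitneyu(controls, asthmatics, alternative="two-sided")

# Multi-group comparison (illustrative subgrouping only): Kruskal-Wallis test.
mild, moderate, hospitalized = asthmatics[:20], asthmatics[20:40], asthmatics[40:]
h, p_kw = stats.kruskal(mild, moderate, hospitalized)

print(f"Mann-Whitney U={u:.1f}, p={p_mw:.3f}; Kruskal-Wallis H={h:.2f}, p={p_kw:.3f}")
```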
Interestingly, we found that the level of PM-2K + CD14 + ( Figure 6A) was decreased in asthmatic children requiring hospitalization during exacerbations. Moreover, the levels of the M1 subsets of both PM-2K + CD14 + ( Figure 6C) and PM-2K + CD14 − ( Figure 6D) macrophages were significantly decreased in asthmatic children requiring hospitalization. Regarding M2 subsets, the levels of M2c ( Figure 6G), but not M2a ( Figure 6E) or M2b ( Figure 6F), of PM-2K + CD14 + macrophages were significantly increased in asthmatic children requiring hospitalization during exacerbations. Similarly, only the levels of M2c ( Figure 6J) but not M2a ( Figure 6H) or M2b ( Figure 6I) in PM-2K + CD14 − macrophages were significantly increased in asthmatic children requiring hospitalization during exacerbations. Macrophages Were Increased in Patients with Higher IgE Levels IgE is critically involved in allergic asthma in children. We next investigated the association between IgE and macrophage subsets in children with asthma. As shown in Figure 7, we found that the levels of M2c in PM-2K + CD14 + ( Figure 7G) and PM-2K + CD14 − macrophages ( Figure 7J) but not other subsets were increased in patients with higher IgE levels. Discussion Childhood asthma is considered to be a specific phenotype with a predominant T-helper (Th) type 2 immune reaction and inflammation. 12 In the present study for the first time in the literature, we found that the pattern of macrophage polarization was altered in children with asthma, and the polarization subsets M1/M2 distinguished the severity of asthma. Furthermore, the specific subsets of macrophage polarization were associated with the requirement for hospitalization during asthma exacerbations. These findings suggest the key role of macrophage polarization in the pathogenesis of childhood asthma and the possible clinical applications of these findings as useful biomarkers in the management of childhood asthma. Lung macrophages link both innate and adaptive immune responses in allergic airway inflammatory responses and have been suggested to play critical roles in the pathogenesis of asthma. Immunohistochemically, PM-2K recognizes most tissue macrophages in lymphoreticular organs such as the thymus, spleen, lymph node, and tonsil. 13 Since macrophages are involved in the pathogenesis of asthma, our study provided a rational approach to investigate the pathogenesis of asthma by studying PM-2K-positive circulating macrophages in children with asthma. In the present study, we found that the percentage of PM-2K + CD14 + cells but not PM-2K + CD14 − cells was decreased in children with asthma. In addition, M2 subsets of PM-2K + CD14 + cells were increased in children with asthma. Moreover, the levels of M1 and M2c distinguished the severity of asthma and predicted the need for hospitalization during an asthma exacerbation. These findings suggested dynamic changes between circulating and tissue macrophages in contributing to and regulating local inflammation in the airways of children with asthma. In previous animal asthma models, lipoproteinassociated phospholipase A2 (Lp-PLA2) gene knockout mice revealed that inflammatory responses induced by airway allergen sensitization were significantly decreased, which may contribute to increased IL-10 production and M2c macrophage polarization. 
14,15 In contrast, in surfactant protein A (SP-A) gene knockout mice, the inflammatory response after exposure to allergens was significantly increased due to high IL-13 expression with M2a macrophage polarization. 16 Hence, the level of M2c macrophages could be utilized as a marker in chronic inflammatory lung disorders such as asthma, both for disease severity and in guiding the treatment strategy. 17 In contrast to M2a cells, M2c cells have an anti-inflammatory ability due to greater IL-10 expression involved in the resolution of lung inflammation and initiating tissue repair by releasing IL-10. Therefore, M2c cells can be considered a significant macrophage subset to participate in initiating inflammation resolution, and these processes may occur by upregulating CD163 and CD206. 17 In the present study, we revealed that increased circulating M2c subsets were closely associated with not only the severity but also the requirement for hospitalization during asthma exacerbations. For the first time in the literature, this study proved the role of M2c subsets in human subjects. Exposure to pathogens or allergens alters the microenvironment in the tissue and leads to the polarization of macrophages, depending on a variety of cytokines produced by lung epithelial cells and innate immune cells. 18 After polarization, these macrophage subsets express various cell surface markers and cytokines/ chemokines. For example, under LPS, IFN-r, and GM-CSF stimulation, macrophages are polarized to the M1 subset, producing proinflammatory Th1 cytokines and chemokines with a predominant role in pathogen clearance and tissue damage. 9 With the stimulation of IL-4/IL-13, macrophages are polarized to the M2a subset, activating Th2 cells and recruiting eosinophils into the lungs and they are involved in allergic inflammation. 19 The M2b subset can be elicited upon the stimulation of IL-1R ligands, immune complexes, and LPS, while M2c can be induced upon the stimulation of IL-10, TGF-β1, and glucocorticoids. M2c exerts an anti-inflammatory function during recovery, and both M2b and M2c predominantly participate in tissue remodeling and fibrosis. 20 In our present study, we found that all M2 subsets including M2a, M2b, and M2c subsets were increased in asthmatic children in comparison to those in normal children. Moreover, the levels of increased M2a and M2c subsets were positively related to the severity, and the level of increased M2c subsets predicted the need for hospitalization during asthma exacerbations. While the M1 subset was not changed in asthmatic children, the M1 subset was significantly decreased in asthmatic children with moderate severity compared to asthmatic children with mild severity, and the degree of decrease of the M1 subset also predicted the need for hospitalization. These data suggested that the allergic immune response plays a critical role in the pathogenesis of childhood asthma, and impaired balance and regulation between not only local but also circulating M1 and M2 subsets may exist in asthmatic children. It is known that asthma is a heterogeneous disease that varies considerably across the life course and shows differences between childhood-onset and adult-onset courses. 12 For example, childhood asthma is known for its male predominance before puberty, high remission rate, and rare mortality. Adult asthma is known for its female predominance, low remission rate, and relatively higher mortality. Childhood asthma and adult asthma share different factors associated with severity. 
Interestingly, in comparison to our previous work exploring circulating macrophages in asthmatic adults, 11 PM-2K + CD14 − and PM-2K + CD14 + cells were predominantly decreased in asthmatic adults, while only PM-2K + CD14 + cells were decreased in asthmatic children, as also shown in the present study. The imbalance between M1 and M2 subsets is also quite different. In the present study, an increased M2 subset was found in asthmatic children, and the levels of increased M2/decreased M1 subset were related to the severity and the need for hospitalization. Very interestingly, these findings were in contrast to our previous work in asthmatic adults, where the M1 subset was increased and the M2 subset was decreased, and the levels of increased M1/decreased M2 subsets were positively correlated with severity. In addition, previous research has revealed that elevated IgE levels were found in patients with atopic status, as it provides a linkage between antigen recognition and the effector function of acute-phase inflammatory cells such as mast cells and basophils. IgE plays an important role in the allergic inflammatory pathogenesis of bronchial asthma; therefore, it can be considered a cause of allergic asthma. 21,22 In the present study, elevated IgE levels were positively associated with the M2c subset. Currently, little evidence suggests that IgE antibodies may activate and recruit macrophages toward tumors, but little is known about the effects of IgE IgE-Fcε-receptor interactions on the downstream effects of macrophage polarization. 23 The observation of elevated M2c levels in patients with elevated IgE indicates a need for further studies of the interaction between IgE and macrophage polarization. Conclusion The findings of the present study suggest that the use of multicolor flow cytometry with a small amount of peripheral blood to obtain the levels of circulating macrophages and their subsets can discriminate the severity of asthma and the requirement for hospitalization during exacerbations in children with asthma. These findings offer a new approach for further understanding the pathogenic mechanisms of childhood asthma and may be useful to study other childhood immune diseases associated with macrophages. M2c subsets, as a potential biomarker for clinical applications, requires further investigation in an expanded study population to test the clinical utility of these findings.
Cutaneous microbial biofilm formation as an underlying cause of red scrotum syndrome Background Red scrotum syndrome is typically described as well-demarcated erythema of the anterior scrotum accompanied by persistent itching and burning. It is chronic and difficult to treat and contributes to significant psychological distress and reduction in quality of life. The medical literature surrounding the condition is sparse, with the prevalence likely under-recognized and the pathophysiology remaining poorly understood. Formation of a cutaneous microbial biofilm has not been proposed as an underlying etiology. Microbial biofilms can form whenever microorganisms are suspended in fluid on a surface for a prolonged time and are becoming increasingly recognized as important contributors to medical disease (e.g., chronic wounds). Case presentation A 26-year-old man abruptly developed well-demarcated erythema of the bilateral scrotum after vaginal secretions were left covering the scrotum overnight. For 14 months, the patient experienced daily scrotal itching and burning while seeking care from multiple physicians and attempting numerous failed therapies. He eventually obtained complete symptomatic relief with the twice daily application of 0.8% menthol powder. Findings in support of a cutaneous microbial biofilm as the underlying etiology include: (1) the condition began following a typical scenario that would facilitate biofilm formation; (2) the demarcation of erythema precisely follows the scrotal hairline, suggesting that hair follicles acted as scaffolding during biofilm formation; (3) despite resolution of symptoms, the scrotal erythema has persisted, unchanged in boundary 15 years after the condition began; and (4) the erythematous skin demonstrates prolonged retention of gentian violet dye in comparison with adjacent unaffected skin, suggesting the presence of dye-avid material on the skin surface. Conclusion The probability that microorganisms, under proper conditions, can form biofilm on intact skin is poorly recognized. This case presents a compelling argument for a cutaneous microbial biofilm as the underlying cause of red scrotum syndrome in one patient, and a review of similarities with other reported cases suggests the same etiology is likely responsible for a significant portion of the total disease burden. This etiology may also be a significant contributor to the disease burden of vulvodynia, a condition with many similarities to red scrotum syndrome. are commonly age > 50, though young patients are also affected. The condition is typically chronic and difficult to treat, with symptoms often lasting for years and contributing to significant psychological distress and reduction in quality of life [4]. The medical literature surrounding RSS is sparse, and the prevalence of the condition is likely under-recognized. Primary neuropathy, microvascular dysregulation, and overuse of topical corticosteroids have been suggested as potential causes, but the underlying pathophysiology remains poorly understood [5,6]. It is possible the condition represents a constellation of similar symptoms and physical findings that result from a number of different etiologies. This report describes one patient's 15-year experience with the condition, and provides evidence implicating a cutaneous microbial biofilm as the likely etiology. In addition, a review of the current literature will implicate the same etiology as likely responsible for a significant portion of the total RSS disease burden. 
In nature, microorganisms (e.g., bacteria, fungi) exist in 2 states: planktonic (free in the environment) and within biofilm (surface-attached). The biofilm state can occur whenever microorganisms are suspended in fluid on a surface. Biofilm is created when microbes attach to the surface and subsequently secrete extracellular polymers that form a protective matrix, allowing the microorganisms to live and grow in protection from environmental threats [7][8][9][10]. Evidence shows that under proper conditions biofilm formation can take place within several hours, with continued maturation occurring over days, months, and years [11]. Once created, biofilms are notoriously difficult to eradicate, as the microorganisms are protected from antibiotics and other chemical antimicrobials by the thick extracellular matrix. In addition, microbes within biofilm often assume a relatively slow growth and metabolic rate, creating less susceptibility to toxic inhibitors. Microorganisms within biofilm also have the ability to efficiently communicate with one another (i.e., quorum sensing) and rapidly share extrachromosomal genetic material, further amplifying their capacity for antimicrobial resistance [12][13][14]. Over the past 20 years, microbial biofilms have been increasingly recognized for playing a prominent role in human infection. Early on, evidence primarily implicated biofilms in infections that involve nonliving surfaces, including urinary or intravenous catheters, prosthetic joints, and prosthetic heart valves [15]. However, microbial biofilms have more recently been appreciated as either significant contributors or the underlying cause of many difficult-to-treat and/or chronic diseases that involve biofilm attachment to living surfaces. Among these conditions are native valve infective endocarditis, chronic prostatitis, chronic wound infections, and chronic sinusitis [16][17][18][19]. Undoubtedly, the role of microbial biofilms in other chronic conditions is yet to be recognized. Case presentation At condition onset, the patient was a 26-year-old man with a history of recurrent tinea versicolor affecting the trunk and neck and no other significant medical history, including no history of topical corticosteroid use. He experienced the abrupt onset of erythema of the bilateral scrotum accompanied by intense itching and burning. The signs and symptoms began one morning following sexual intercourse with a female partner the previous night. The patient did not clean up after intercourse and fell asleep lying on his back with vaginal secretions covering his scrotum. On examination, the erythema was well-demarcated, closely following the distribution of hair on the bilateral scrotum and also involving a small portion of the haircovered ventral shaft of the penis. There was sparing of erythema along the hairless portion of the scrotal midline and the hairless underside of the scrotum (Fig. 1). There was no evidence of ulceration or swelling, and there was no tenderness on palpation of the affected area. The patient first sought care with his primary care physician, where HIV testing and HSV 1 and 2 serum antibody testing were performed and were all negative. The patient then completed a 4-week course of topical clotrimazole cream with no improvement in symptoms. A 10-day course of erythromycin was also tried under the assumption that the condition represented an atypical presentation of erythrasma, but also with no improvement. 
Symptomatically, the patient continued to experience nearly constant symptoms of scrotal itching and burning that were made worse by long periods of sitting. He was subsequently referred to a dermatologist at a large academic center, where he was told the condition represented some type of mast cell overactivity and there was no specific treatment for it. Unsatisfied with that conclusion, the patient returned to see another dermatologist at the same institution. After a brief examination, the second dermatologist told the patient his scrotal skin was "normal, " and he should try to "forget about" his symptoms. At the insistence of the patient, a punch biopsy of the erythematous scrotal skin was performed, with the dermatopathologic results reported as normal other than hypervascularity. During this time the patient experienced significant psychological distress, including decreased concentration, insomnia, anhedonia, decreased libido, and decreased appetite with weight loss. Seeking further assistance, the patient sought the opinion of a private practice dermatologist, who had previous experience as a military physician. That dermatologist acknowledged that he had seen about a dozen cases of RSS previously and advised that from his experience only about half of patients experience improvement. Additional treatments were attempted including pulsed dye laser therapy (targeted at hypervascularity) and liquid nitrogen cryotherapy, neither of which provided improvement. An extended course of minocycline was also attempted, but the patient had to discontinue the drug after several days due to intolerance (metallic taste). After 14 months with no improvement in symptoms and only slight fading of the erythema, the patient began applying 0.8% menthol powder (e.g., extra strength Gold Bond) to the scrotum twice per day after cleaning the skin with water and thorough drying with a towel. The itching and burning symptoms gradually improved and eventually resolved after 1-2 months of treatment. The erythema persisted, however, slightly faded but without change in affected area or demarcation. Over time, the patient was able to wean down to once daily application of 0.8% menthol powder, and he has continued this regimen for over 14 years, remaining mostly symptom free. The intense itching and burning has briefly returned on a few occasions, most notably during a 2-week-long period (12 years after the condition first began) when the patient ingested 8 oz of kefir daily in an effort to improve gastrointestinal upset. The patient eventually suspected the kefir as an exacerbating factor and discontinued use, with the itching and burning again resolving within a couple weeks. Over the years, the patient has attempted other treatments in an effort to completely eradicate the condition. At one point, he attempted daily application of gentian violet to the affected area given evidence of antistaphylococcal and antifungal properties of the agent. The therapy was discontinued after several days due to worsening scrotal discomfort. Remarkably, however, the gentian violet quickly washed off of the unaffected scrotal skin but for several days remained in place on the affected skin, exactly matching the erythematous borders to suggest the presence of something holding it there (Figs. 2 and 3). The twice daily application of 2.5% selenium sulfide solution was also attempted. 
After 2-3 days of therapy, the affected scrotal skin became bright red and slightly raised and glassy-appearing (in sharp contrast with the unaffected skin), but after about a week of therapy the Discussion This patient's 15-year experience with RSS presents a compelling argument for a cutaneous microbial biofilm as the underlying etiology. The evidence in support of this conclusion can be summarized as follows: • The symptoms began following a typical scenario that would facilitate biofilm formation. Vaginal secretions resting on the scrotum overnight combine a fluid environment containing microorganisms (both skin flora and vaginal flora) with sufficient time (e.g., 6-7 h) for biofilm formation to occur. • The demarcation of affected skin closely following the hairline is readily explained by the hair follicles acting as scaffolding during biofilm formation. The hair follicles likely trapped fluid and held it in place on the skin surface overnight, with hairless skin being spared. • The chronicity of the patient's signs and symptoms as well as the recalcitrance of the condition to both topical and systemic antimicrobials is highly consistent with a biofilm infection. • The persistence of gentian violet dye exactly matching the boundaries of the affected skin after quickly washing away from the unaffected skin strongly suggests, if not proves, the presence of abnormal material on the surface of the affected skin. The extracellular polysaccharide matrix of biofilm can avidly retain dye; therefore, the presence of a cutaneous microbial biofilm readily explains this observed phenomenon [20,21]. The marked difference in response of the affected scrotal skin and unaffected scrotal skin to the application of 2.5% selenium sulfide, a known biofilm dispersal agent, further suggests the presence of biofilm [21]. The arguments against this conclusion can be mostly dismissed by a review of the biofilm literature and an acknowledgement of what is currently known and unknown about biofilm behavior: • Any notion that biofilm formation cannot occur on intact skin is in conflict with current evidence. Malasezzia furfur/ovale has been shown to form biofilms both in vitro and in vivo, and cutaneous biofilm formation on intact skin is believed to be a major factor in the pathogenesis and chronicity of tinea versicolor [21,22]. Biofilm formation on chronic wounds is widely recognized, and there is no proven attribute of intact epidermis that would prevent biofilm formation but is lacking on the surface of chronic wounds [18,23,24]. The intact scrotal skin may be particularly susceptible to biofilm formation due to its thin epidermis and irregular surface, as evidence shows that biofilms form more easily on rough surfaces [12]. • Visualization of biofilm is difficult using routine light microscopy and typically requires specialized staining or microscopy techniques (e.g., electron microscopy) [20,25,26]. Therefore, it is expected that the punch biopsy performed early in this patient's disease course would appear essentially normal as evidence of a cutaneous microbial biofilm was not specifically sought. • An argument could be made that the proposed mechanism of disease (i.e., vaginal secretions on the scrotum overnight) is likely such a common occurrence that the condition would be widespread and well-recognized by now. However, too little is known at this time to support such an argument. 
It is possible and even likely that specific parameters must be met (e.g., presence of particular microorganisms, host-specific deficiency in innate immunity) for cutaneous microbial biofilm formation to occur. (Figure: photographs of the right side of the scrotum after 2-3 days of twice daily application of 2.5% selenium sulfide solution; the affected skin has become slightly raised and glassy-appearing, in sharp contrast with the unaffected scrotal midline.) Even further, the true incidence of RSS is almost certainly under-recognized as no large epidemiologic studies have been performed [5,6]. Many affected patients may suffer with the condition for years without seeking treatment or discover their own symptomatic management independent of the medical community. • An additional argument could be made citing doubt about how a biofilm that is attached to the surface of the stratum corneum could a) cause discomfort and b) allow for the continued normal turnover of skin cells. Nerve endings are known to extend into and possibly slightly beyond the granulosum layer of the epidermis [27]. Given that biofilms are commonly 100 microns or greater in thickness and the scrotal stratum corneum is < 10 microns thick, it is reasonable that a biofilm could exert mechanical stimulus on epidermal nerve endings [28,29]. It is also possible that biofilm releases chemical irritants into the skin or disrupts thermal regulation of the skin, causing the release of vasodilatory mediators that also interact with pain signaling. This would also explain the hypervascular appearance of the affected skin. Regarding normal skin cell turnover, biofilms are complex systems capable of utilizing nutrients and processing wastes [7,15]. It is likely that biofilm is able to metabolize dead skin cells coming from the stratum corneum and allow for ongoing cell turnover. In relating the findings in this case to other cases of RSS reported in the literature, a cutaneous microbial biofilm would seem to explain much of the reported disease burden. The majority of cases describe well-demarcated erythema with associated itching and burning that can last for years [1][2][3]. Details surrounding the onset of the condition are usually unclear, making it highly possible the condition unknowingly began after the patient failed to clean up following sex. The fact that the anterior scrotum is always affected also fits with this mechanism. Erythema closely following the distribution of hair is rarely specifically reported, though sparing of the hairless scrotal midline is commonly apparent in photographs. Nonetheless, it is possible that hair follicles may assist in biofilm formation but are not a necessary component. An association with prior topical corticosteroid use is often reported. It would seem the contribution of topical corticosteroids to the overall disease burden is likely overappreciated; however, it is possible that in some cases topical corticosteroids contribute to a favorable microbial or immunologic environment for biofilm formation to occur [30]. Psychiatric comorbidities are frequently reported with RSS (e.g., 75% prevalence in one small cohort of 12 patients) [2,4]. Without a doubt, the emotional toll inflicted by chronic daily symptoms of scrotal itching and burning can be substantial.
Unfortunately, in the absence of an evident organic disease process many clinicians likely consider the psychiatric comorbidities part of the cause of the symptoms rather than the more likely result of the symptoms, further worsening the situation for the patient. Successful symptomatic treatment of RSS has been reported with medications including gabapentin, amitriptyline, calcineurin inhibitors, and doxycycline [2,5,6,31,32]. The mechanism of successful treatment is largely unclear though it is likely to involve modification of neuropathic pain signaling (it is likely the 0.8% menthol powder is effective via a similar mechanism in this case) [33]. Whether or not the erythema resolves following these treatments is rarely specified, but it is often implied that the erythema remains despite improvement in symptoms (as occurred in this case). Biofilms are often polymicrobial but given this patient's history of tinea versicolor it would seem Malassezia furfur/ovale may be a crucial contributor to the pathology in this particular case, and the condition itself may represent a variant of tinea versicolor [21,22,34]. This patient's recurrence of symptoms during daily ingestion of kefir may also suggest the involvement of Lactobacillus species, which are common components of normal vaginal flora. It is possible the abundant Lactobacillus organisms in kefir somehow triggered a temporary, heightened response to the biofilm [35]. Importantly, it stands to reason that a similar mechanism of cutaneous microbial biofilm formation may also underlie a significant portion of the disease burden of vulvodynia, a chronic condition characterized by vaginal itching and burning (and often erythema) that, like RSS, is in dire need of improved understanding and treatment options [36]. This suspicion is based on similarities with RSS in terms of both symptoms and chronicity, and a potential similar susceptibility of the vulvar skin to biofilm formation. Successful treatments for RSS have largely mirrored those for the more widely studied vulvodynia in that they are minimal and primarily focus on symptom relief via modification of neuropathic pain signaling. In addition, perhaps the only definitive treatment for vulvodynia involves surgical removal of the affected tissue, a characteristic therapy of last resort for biofilm infections [37,38]. In conclusion, the probability that microorganisms, under proper conditions, can form biofilm on intact skin is poorly recognized. Hair covered portions of the genital skin may be particularly susceptible to this phenomenon. Further study is needed to confirm the proposed mechanism of cutaneous microbial biofilm formation as an underlying etiology of RSS, better characterize the prevalence of RSS, and further investigate both symptomatic and fully eradicative treatments. Informational campaigns to provide preventative hygiene recommendations to reduce future incidence, and to provide support for those currently affected are also needed. Finally, further investigation is needed into a similar mechanism of cutaneous microbial biofilm formation as a potential significant contributor to the disease burden of vulvodynia.
2021-08-19T13:41:22.072Z
2021-08-19T00:00:00.000
{ "year": 2021, "sha1": "538248bb7261be7e20f0f2e394b8192c479fe98e", "oa_license": "CCBY", "oa_url": "https://eurjmedres.biomedcentral.com/track/pdf/10.1186/s40001-021-00569-9", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "538248bb7261be7e20f0f2e394b8192c479fe98e", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
256116292
pes2o/s2orc
v3-fos-license
Two-dimensional hydrogen-like atom in a weak magnetic field We consider a non-relativistic two-dimensional (2D) hydrogen-like atom in a weak, static, uniform magnetic field perpendicular to the atomic plane. Within the framework of the Rayleigh-Schrödinger perturbation theory, using the Sturmian expansion of the generalized radial Coulomb Green function, we derive explicit analytical expressions for corrections to an arbitrary planar hydrogenic bound-state energy level, up to the fourth order in the strength of the perturbing magnetic field. In the case of the ground state, we correct an expression for the fourth-order correction to energy available in the literature. Recently, we have come across the need to know exact analytical representations for low-order perturbation theory corrections to an arbitrary energy level of a two-dimensional analogue of a hydrogen-like atom placed in a weak and uniform magnetic field perpendicular to the atomic plane. The first-order correction may be obtained trivially for any atomic state. Exact values of the second-order corrections for states with the principal quantum numbers 1 n 4 may be derived from a table provided in ref. [4]. The third-order correction may be shown to vanish identically for any state (in fact, the same happens for all odd-order corrections other than the first-order one), while in refs. [25,31] the fourth-order correction has been given, but for the ground level only. Approximate expressions for several higher even-order corrections to states with zero radial quantum numbers and with principal quantum numbers not exceeding six are contained in ref. [35]. However, neither of the publications invoked above, nor any other related one we have had in hands in the course of browsing the literature, contains the general formulas we have been seeking for. This is a bit astonishing in view of the fact that for a similar problem of the planar one-electron atom placed in a weak, uniform, in-plane electric field, closed-form analytical expressions for Stark-Lo Surdo corrections to energies of discrete parabolic eigenstates are known up to the sixth order in the perturbing field [37,38]. Under these circumstances, we have derived expressions for the second-and fourth-order magnetic-field-induced corrections to an arbitrary energy level of the planar hydrogenic atom. The results of that study are presented in this work. We believe they may be of some interest, in particular because the result for the fourth-order correction to the ground state given in ref. [25], and then repeated in ref. [31], has been found to be incorrect. a static uniform magnetic field of induction B, which is perpendicular to the atomic plane. With the electron radius vector r being referred to the nucleus, the two-dimensional time-independent Schrödinger equation for the electron is where r = |r| and A(r) is a vector potential of the magnetic field. Equation (1) is to be solved, with the electron energy E chosen as an eigenvalue, subject to the constraint that the wave function Ψ (r) is single-valued and bounded for all r ∈ R 2 , including the point r = 0 and the point at infinity. Throughout this paper, we shall be working in the symmetric gauge, in which the vector potential A(r) is Then, the Schrödinger equation (1) may be rewritten as where is a (dimensionless) orbital angular-momentum operator for the electron. The form of the Hamiltonian operator in the Schrödinger equation (3) suggests one introduces the polar coordinates r and ϕ, with 0 r < ∞ and 0 ϕ < 2π; eq. 
(3) is then transformed into the following one: The benefit from the use of the polar coordinates is that eq. (5) is separable, in the sense that it possesses particular solutions of the form where Plugging eq. (6) into eq. (5) and exploiting eq. (7) yields the radial Schrödinger equation which is to be solved subject to the boundary conditions P nlm l (r)/ √ r bounded for r → 0 and for r → ∞. It is easy to deduce from the standard asymptotic analysis that for B = 0 the constraints displayed in eq. (8b) may be replaced by the following ones: The symbol n that has appeared the first time as a subscript in eq. (6) is the principal quantum number defined as n = n r + l + 1, where n r ∈ N 0 is the radial quantum number which counts the number of nodes (zeroes) in the radial wave function. Since the term linear in B which appears in the differential operator in eq. (8a) is independent of the variable r, it is clear that the energy eigenvalue E nlm l may be written as with It is also evident that the radial function P nlm l (r) does depend on m l through l = |m l | only: Consequently, the starting point for further considerations will be the radial eigenvalue problem 3 Perturbation theory analysis Basics and the zeroth-order problem Closed-form analytical solutions to the eigenproblem (13) are not known. Therefore, below we shall attempt to find its approximate solutions, under the assumption that the magnetic field is weak, with the use of the Rayleigh-Schrödinger perturbation theory. To this end, we write the radial differential operator from eq. (13a) as where and We shall treat the diamagnetic term (16) as a small perturbation of the radial Coulomb Hamiltonian (15). Since H (2) (r) is of the second order in the perturbing magnetic field, we seek solutions to the eigensystem (13) in the form of the perturbation series and involving even-order terms only. Here E (0) nl and P (0) nl (r) are those solutions to the zeroth-order eigenproblem (being the radial Coulomb one) (subscripts have been omitted intentionally), which correspond to the discrete part of its spectrum, consisting of the eigenvalues with and with being the Bohr radius. Eigenfunctions associated with the eigenvalues (20), orthonormal in the sense of where L (α) k (x) is the generalized Laguerre polynomial ( [39], sect. 5.5). For integration purposes, it is frequently convenient to have these functions rewritten as The second-order corrections to Coulomb energies For the present problem, the second-order correction to energy, E nl , is given by or, equivalently, if use is made of eq. (16), by Plugging eq. (25) into the integrand and exploiting the integration formula which may be deduced from the general expression ( [40], eqs. (E54), (E56) and (E60)) yields where is the atomic unit of magnetic induction. For states with l = n − 1 (i.e., those with n r = 0), the expression in eq. (30) simplifies to The fourth-order corrections to Coulomb energies Proceeding along the standard route, one finds that for the present problem the fourth-order correction to energy, E nl , is given by E where the second-order correction to the radial wave function, P (2) nl (r), is a solution to the inhomogeneous boundaryvalue problem subject to the further orthogonality restraint The formal solution to the problem (34)- (35) is where G (0) nl (r, r ) is a generalized (or reduced) radial Coulomb Green function associated with the Coulomb energy level E (0) n . 
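Many of the displayed equations in this record were dropped during text extraction, leaving empty placeholders such as "is given by" and "with". As a guide to the surrounding prose, the LaTeX fragment below restates, in the usual conventions (electron charge −e with e > 0, reduced mass μ), the standard relations the text appears to rely on: the symmetric-gauge Hamiltonian and separation ansatz, the unperturbed planar Coulomb spectrum, and the first-, second-, and fourth-order corrections. It is a hedged reconstruction of textbook results, not a verbatim recovery of the paper's own numbered equations, whose prefactors and notation may differ.

% Hedged reconstruction of standard relations (not the paper's verbatim equations)
\mathbf{A}(\mathbf{r}) = \tfrac{1}{2}\,\mathbf{B}\times\mathbf{r}, \qquad \mathbf{B} = B\,\hat{\mathbf{e}}_{z},
\left[-\frac{\hbar^{2}}{2\mu}\nabla^{2}
      +\frac{\hbar e B}{2\mu}\,\Lambda
      +\frac{e^{2}B^{2}r^{2}}{8\mu}
      -\frac{Ze^{2}}{(4\pi\epsilon_{0})\,r}\right]\Psi(\mathbf{r}) = E\,\Psi(\mathbf{r}),
\qquad \Lambda = -\mathrm{i}\,\frac{\partial}{\partial\varphi},
\Psi(\mathbf{r}) = \frac{P_{n l m_{l}}(r)}{\sqrt{r}}\;\frac{\mathrm{e}^{\,\mathrm{i} m_{l}\varphi}}{\sqrt{2\pi}},
\qquad l = |m_{l}|,\qquad n = n_{r}+l+1,
% Unperturbed planar Coulomb levels and the Bohr radius
E^{(0)}_{n} = -\frac{Z^{2}e^{2}}{(4\pi\epsilon_{0})\,2a_{0}\,\bigl(n-\tfrac{1}{2}\bigr)^{2}},
\qquad a_{0} = \frac{(4\pi\epsilon_{0})\hbar^{2}}{\mu e^{2}},
% Paramagnetic (first-order) and diamagnetic (second-order) corrections
E^{(1)}_{n m_{l}} = \frac{\hbar e B}{2\mu}\, m_{l},
\qquad
E^{(2)}_{nl} = \frac{e^{2}B^{2}}{8\mu}\int_{0}^{\infty}\mathrm{d}r\;\bigl[P^{(0)}_{nl}(r)\bigr]^{2}\, r^{2}
             = \frac{e^{2}B^{2}}{8\mu}\,\langle r^{2}\rangle_{nl},
% Fourth-order correction via the second-order wave-function correction
\bigl(H^{(0)}_{l}-E^{(0)}_{n}\bigr)P^{(2)}_{nl}(r) = -\bigl[H^{(2)}(r)-E^{(2)}_{nl}\bigr]P^{(0)}_{nl}(r),
\qquad \int_{0}^{\infty}\mathrm{d}r\;P^{(0)}_{nl}(r)\,P^{(2)}_{nl}(r)=0,
E^{(4)}_{nl} = \int_{0}^{\infty}\mathrm{d}r\;P^{(0)}_{nl}(r)\,H^{(2)}(r)\,P^{(2)}_{nl}(r),
\qquad H^{(2)}(r)=\frac{e^{2}B^{2}r^{2}}{8\mu}.

The reduced (generalized) radial Coulomb Green function G^(0)_nl(r, r') mentioned in the last sentence above is then characterized in the text that follows through its Sturmian expansion.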
The latter function is defined as that particular solution to the inhomogeneous boundary-value problem where δ(r − r ) is the Dirac delta function, which obeys the additional orthogonality constraint Since the zeroth-order eigenproblem (19) is self-adjoint, the function G (0) nl (r, r ) is symmetric in its arguments: When this is combined with eq. (38), one deduces the formula which allows us to simplify eq. (36) to obtain Plugging eq. (41) into the right-hand side of eq. (33) gives the energy correction E (4) nl in the form or, still more explicitly, in the form A representation of the generalized radial Coulomb Green function G (0) nl (r, r ) which is perhaps the most suitable for the use in eq. (43) is the one in the form of a series expansion in the discrete radial Coulomb Sturmian basis. We shall construct it below. The discrete radial Coulomb Sturmian functions are defined as solutions to the spectral problem nrl (E) chosen as an eigenvalue. The spectrum of this problem is purely discrete, and eigenvalues are given by where Eigenfunctions, orthonormal in the sense of are Contrary to the discrete Coulomb eigenfunctions (24), the Sturmians (48) form a complete set, the corresponding closure relation being If the parameter E coincides with the Coulomb energy eigenvalue E (0) n displayed in eq. (20) (we assume n is related to n r and l used here as in eq. (9)), it is easy to see from eqs. (45), (46), (20) and (21) The radial Coulomb Green function, G l (E, r, r ), is defined to be a solution to the inhomogeneous equation subject to the boundary constraints Since the Sturmian functions (48) form a complete set, the Green function G l (E, r, r ) may be sought in the form of the series To determine the expansion coefficients C nrl (E, r), we plug eq. (53) into eq. (52a), multiply both sides of the resulting identity by S (0) n r l (E, r), then integrate with respect to r over the interval [0, ∞), and apply the orthogonality relation (48). Upon the replacement of n r with n r , this yields hence, we obtain the following symmetric Sturmian expansion of G l (E, r, r ): It follows, from eqs. (37), (38) and (52), that the generalized radial Coulomb Green function G (0) nl (r, r ) may be obtained from the radial Coulomb Green function G (0) l (E, r, r ) through the limit procedure (56) By virtue of the de l'Hospital rule, the latter equation is equivalent to the following one: which is particularly suitable for the construction of the Sturmian expansion of G (0) nl (r, r ). Inserting the series representation (55) into the right-hand side of eq. (57) and then making use of the relationships lim which may be easily derived from the defining eqs. (45) and (48), one eventually arrives at the sought Sturmian expansion of the generalized radial Coulomb Green function, which is Once the Sturmian expansion of G (0) nl (r, r ) has been found, we are ready to complete the task to find the fourthorder energy correction E (4) nl . To this end, we insert eq. (63) into eq. (43) and use the relationship in eq. (51), together with integrations by parts, to eliminate derivatives of Sturmian functions. This gives E (4) nl in the form The integrals in eq. (64) may be taken after one exploits eqs. (25) and (48), with the use of the integration formula which generalizes the one in eq. (28) and, similarly to the latter, may be derived from the general expression (29). Since only terms with n r constrained by 1 |n r − n r | 3 are seen to contribute non-vanishingly to the sum in eq. 
(64), we eventually obtain (66) For states with l = n − 1 (i.e., those with n r = 0), eq. (66) becomes E (4) n,n−1 = − 1 2 9 n n + For the ground state (n = 1), eq. (67) yields This differs from the result announced in refs. [25] (eq. (32)) and [31] (eq. (6.59)), which is The latter one is thus found to be incorrect. Summary and concluding remarks On the preceding pages, we have shown that energy levels of the planar hydrogen-like atom placed in a weak, static, uniform magnetic field of induction B perpendicular to the atomic plane may be expressed in the form where In eq. (71), Z is an electric charge of the atomic nucleus in units of the elementary charge e, a 0 is the Bohr radius, is the atomic unit of magnetic induction, while the dimensionless and Z-independent coefficients ε (k) ... are given by with n ∈ N + , m l ∈ Z and 0 l = |m l | n − 1. Numerical values of the coefficients ε (2) nl and ε (4) nl for states with 1 n 4 are displayed in table 1. It has to be emphasized that the formula in eq. (70) is valid only if the electron spin is ignored. If this cannot be done, the Schrödinger equation (1) − Ze 2 (4π 0 )r Ψ (r) = EΨ (r) (r ∈ R 2 ),
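As a quick numerical illustration of the generic second-order (diamagnetic) formula quoted above — and only of that formula, not of the paper's tabulated ε coefficients or fourth-order results — the following sketch evaluates ⟨r²⟩ for the planar hydrogenic ground state in atomic units (ℏ = μ = e = 4πε₀ = 1, Z = 1), where the ground-state density falls off as e^(−4r); the analytic value is 3/8, so E^(2) ≈ 3B²/64 for a weak field B.

import numpy as np
from scipy.integrate import quad

# Planar (2D) hydrogen-like ground state in atomic units with Z = 1: psi(r) ~ exp(-2 r).
# The 2D radial measure contributes one factor of r to every integral.
norm, _ = quad(lambda r: np.exp(-4.0 * r) * r, 0.0, np.inf)
mom2, _ = quad(lambda r: r**2 * np.exp(-4.0 * r) * r, 0.0, np.inf)
mean_r2 = mom2 / norm
print("<r^2> =", mean_r2, "(analytic value 3/8 =", 3.0 / 8.0, ")")

# Second-order diamagnetic shift E^(2) = B^2 <r^2> / 8 for a weak magnetic field B (atomic units)
B = 1.0e-3
print("E^(2) =", B**2 * mean_r2 / 8.0)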
2023-01-24T14:09:22.440Z
2018-08-01T00:00:00.000
{ "year": 2018, "sha1": "c20c918f26ed067457f4c88848300b9ae0ed7f3b", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1140/epjp/i2018-12126-7.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "c20c918f26ed067457f4c88848300b9ae0ed7f3b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
266843261
pes2o/s2orc
v3-fos-license
Analysis of Clinical Characteristics and Neuropeptides in Patients with Dry Eye with and without Chronic Ocular Pain after FS-LASIK Introduction Chronic ocular pain, particularly prevalent in patients with dry eye disease and post-femtosecond laser-assisted laser in situ keratomileusis (FS-LASIK) surgery, presents with unclear clinical characteristics and an undefined pathogenesis. In this study, we aimed to compare clinical characteristics and tear neuropeptide concentrations in patients with dry eye disease (DED) with and without chronic ocular pain following FS-LASIK, and investigate correlations between ocular pain, clinical characteristics, and tear neuropeptide levels. Methods Thirty-eight post–FS-LASIK patients with DED were assigned to two groups: those with chronic ocular pain and those without chronic ocular pain. Dry eye, ocular pain, and mental health-related parameters were evaluated using specific questionnaires and tests. The morphology of corneal nerves and dendritic cells (DCs) was evaluated by in vivo confocal microscopy. Function of corneal innervation was evaluated by corneal sensitivity. Concentrations of tear cytokines (interleukin [IL]-6, IL-23, IL-17A, and interferon-γ) and neuropeptides (α-melanocyte-stimulating hormone, neurotensin, β-endorphin, oxytocin, and substance P [SP]) were measured using the Luminex assay. Results Most patients with chronic ocular pain experienced mild to moderate pain; the most common types included stimulated pain (provoked by wind and light), burning pain, and pressure sensation. More severe dry eye (P < 0.001), anxiety symptoms (P = 0.026), lower Schirmer I test values (P = 0.035), lower corneal nerve density (P = 0.043), and more activated DCs (P = 0.041) were observed in patients with ocular pain. Tear concentrations of SP and oxytocin were significantly higher in patients with ocular pain (P = 0.001, P = 0.021, respectively). Furthermore, significant correlations were observed among ocular pain severity, SP, and anxiety levels. Conclusions Patients with DED after FS-LASIK who have chronic ocular pain show more severe ocular and psychological discomfort and higher tear levels of neuropeptides. Furthermore, ocular pain severity is correlated with tear SP levels. Trial Registration ClinicalTrials.gov identifier: NCT05600985. 
INTRODUCTION Pain is defined as "an unpleasant sensory and emotional experience" [1]. Chronic ocular pain is defined as pain originating from the ocular surface that persists for more than 3 months and significantly affects the daily activities of patients, and is common in patients with dry eye disease (DED), especially in patients after refractive surgery [2]. Although femtosecond laser-assisted laser in situ keratomileusis (FS-LASIK) has been the most frequent refractive surgery in recent years, there are still a number of patients who suffer from some form of ocular pain [3], especially those with psychiatric and neurological problems including fibromyalgia, anxiety, and depression [4]. Additionally, chronic ocular pain has a substantial impact on the quality of life and causes a huge financial burden [5].
Although chronic ocular pain after FS-LASIK has been reported previously [6], the description of its features remains scarce and the mechanism underlying chronic ocular pain in patients with DED following FS-LASIK is unclear. Due to the lack of understanding of ocular pain, there is currently no effective drug or treatment for ocular pain. Recently, a new questionnaire, the Neuropathic Pain Symptom Inventory modified for the Eye (NPSI-Eye), was validated to assess ocular pain features, which will help eye care practitioners better understand the characteristics of ocular pain [7]. There is past evidence to support that corneal nerve damage partially results in ocular pain in patients after refractive surgery [8,9]. However, this is not sufficient to explain the pathogenesis of ocular pain. At present, increasing attention is being paid to the interactions between dendritic cells (DCs) and neuropeptides following FS-LASIK, which are the core of neuro-immune interaction [10]. However, whether tear neuropeptides and corneal DCs are involved in the pathogenesis of ocular pain, or whether it is the consequence of the pain process, remains unknown [11]. Therefore, we performed a cross-sectional survey aiming to investigate chronic ocular pain features, ocular characteristics, corneal DCs, tear film cytokines, and neuropeptides in patients with and without ocular pain, and explored the profile of clinical characteristics and pain-related neuropeptides in patients experiencing chronic pain associated with DED following FS-LASIK. METHODS This cross-sectional study included 38 patients with post-FS-LASIK DED. The patients were categorized into two groups based on the presence or absence of chronic ocular pain: (1) patients with DED after FS-LASIK who have chronic ocular pain and (2) patients with DED after FS-LASIK who do not have chronic ocular pain. This study complied with the principles of the Declaration of Helsinki and was approved by the Medical Science Research Ethics Committee of Peking University Third Hospital (M2023048). Written informed consent was obtained from all participants before their participation. The sample size was estimated based on the difference in numerical rating scale (NRS) scores between post-refractive surgery patients with and without ocular pain, as documented in previous studies [12]. We employed the following formula to estimate the standard deviation (SD) from the 95% confidence intervals (CIs) of the mean NRS scores: SD ≈ [(CI upper − CI lower)/(2 × 1.96)] × √n. With the estimated SDs, we conducted sample size calculations via PASS 15.0 software, opting for the method using two-sample t-tests allowing unequal variance. This analysis indicated that 12 patients per group would be required to achieve 90% power with an alpha level of 0.01. Therefore, 20 and 18 participants were recruited in two groups, respectively, to allow for missing data. All tests were performed on both eyes; only the data from the right eye were used for analysis to ensure consistency.
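The sample-size paragraph compresses two steps: back-calculating an SD from a reported 95% CI of a mean, and then a power calculation for a two-group comparison. The sketch below illustrates that arithmetic with entirely hypothetical numbers (the source values from ref. [12] are not reproduced here) and uses statsmodels' TTestIndPower as a stand-in for the PASS 15.0 routine; note it assumes equal variances, unlike the unequal-variance option the authors describe.

import numpy as np
from statsmodels.stats.power import TTestIndPower

# Hypothetical published summary statistics (placeholders, not the values from ref. [12])
n_ref = 20                     # sample size behind the reported mean NRS score
ci_lower, ci_upper = 2.8, 3.8  # hypothetical 95% CI of the mean NRS score
mean_diff = 2.0                # hypothetical between-group difference in mean NRS

# SD back-calculated from the CI of the mean: SD ~ [(CI_upper - CI_lower) / (2 x 1.96)] x sqrt(n)
sd = (ci_upper - ci_lower) / (2 * 1.96) * np.sqrt(n_ref)

# Sample size per group for a two-sample t-test at alpha = 0.01 and 90% power
effect_size = mean_diff / sd
n_per_group = TTestIndPower().solve_power(effect_size=effect_size, alpha=0.01,
                                          power=0.90, ratio=1.0,
                                          alternative='two-sided')
print(f"estimated SD = {sd:.2f}, required n per group = {np.ceil(n_per_group):.0f}")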
Participants We included patients aged C 18 years who had (1) undergone bilateral FS-LASIK 12 months earlier, (2) experienced DED [13] diagnosed according to the Tear Film & Ocular Surface Society (TFOS) DEWS II criteria (Ocular Surface Disease Index [OSDI] score C 13 and tear breakup time [TBUT] \ 10 s) for more than 6 months after FS-LASIK, and (3) experienced chronic ocular pain, which was indicated by a numerical rating scale (NRS) score C 2 [8] and lasted at least 3 months.[14] The exclusion criteria were as follows: (1) any other ocular surgery, (2) ocular active infections, (3) glaucoma, (4) topical or systemic medication therapies within 2 weeks prior to recruitment, and (5) any other major systemic diseases, including diabetes, malignant tumors, and autoimmune diseases, such as Sjo ¨gren's syndrome. Ocular Surface Evaluations We performed TBUT, Schirmer I test (SIt), corneal fluorescein staining (CFS), and conjunctival lissamine green (LG) staining to evaluate ocular surface signs.TBUT was evaluated with a cobalt blue filter over a slit-lamp biomicroscope.The SIt was conducted using Schirmer paper strips (5 9 35 mm) without anesthesia.CFS and LG staining were evaluated using the National Eye Institute Workshop guidelines (total score: 0-15) [19] and the Oxford grading panel (total score: 0-15) [20], respectively.Corneal sensitivity is a method used to evaluate the function of corneal nerves, and was measured using a Cochet-Bonnet esthesiometer (Luneau Ophthalmologie, Chartres, France). The morphological parameters of the corneal nerve were analyzed using ACCMetrics software (University of Manchester, UK) [21].Participants were asked to fixate on a specially designed target to map a 1 mm 2 image of the corneal sub-basal nerve plexus at the central cornea [22].Five representative images of the sub-basal nerve plexus of the central cornea were selected by two masked observers for analysis (resolution: 384 9 384 pixels; area: 400 mm 9 400 mm [0.16 mm 2 ]) [23].The nerve parameters included corneal nerve fiber density (CNFD), corneal nerve branch density (CNBD), corneal nerve fiber total branch density (CTBD), and corneal nerve fiber length (CNFL).DCs are hyperreflective cells with or without processes emanating from the cell body.Activated DCs (aDCs) and non-activated DCs were distinguished according to the number of processes [24,25].DCs are categorized as ''activated'' if they have at least three processes emitting from the cell body [24,26].DCs and aDCs were manually counted by two independent observers masked to the clinical findings based on the previous literature [25].A semiautomatic image processing software (ImageJ, National Institutes of Health, Bethesda, MD, USA) was used to quantify DC and aDC parameters.Marked cells of each type were averaged between both observers.Prior to commencing the study, we first examined inter-rater reliability using the intra-class correlation coefficient (ICC).Two masked readers evaluated 40 images with an ICC of 0.974 (P \ 0.001) for the DC number and 0.960 (P \ 0.001) for the aDC number. 
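The inter-rater reliability check described above (two masked readers, intra-class correlation on cell counts) can be reproduced with standard tools. The sketch below is only an illustration with made-up counts; pingouin is an assumed convenience library here, not software the authors report using.

import pandas as pd
import pingouin as pg

# Hypothetical dendritic-cell counts from two masked readers scoring the same five images
df = pd.DataFrame({
    "image":    [1, 2, 3, 4, 5] * 2,
    "reader":   ["A"] * 5 + ["B"] * 5,
    "dc_count": [12, 25, 8, 31, 17, 13, 24, 9, 30, 18],
})

# Intraclass correlation coefficients; the ICC2 row corresponds to single random raters (absolute agreement)
icc = pg.intraclass_corr(data=df, targets="image", raters="reader", ratings="dc_count")
print(icc[["Type", "ICC", "pval"]])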
Analysis of Tear Cytokines and Neuropeptides Recent studies have shown that inflammatory cytokines and neuropeptides correlate with refractive surgery-related DED or ocular pain [12]. Therefore, we measured the levels of inflammatory cytokines (interleukin [IL]-17A, IL-23, IL-6, and interferon [IFN]-γ) and neuropeptides (α-melanocyte-stimulating hormone [α-MSH], neurotensin, β-endorphin, oxytocin, and substance P [SP]) in the two groups. In order to analyze these substances, basal tear samples (5 µL) were collected non-traumatically from the external canthus of the patient's eyes with clean glass capillary micropipettes (Drummond Scientific Co., Broomall, PA, USA), which were collected in sterile collection tubes. Care was taken to avoid additional tear reflex as much as possible. The collection tubes were kept cold (4 °C) during collection and then immediately stored at -80 °C. The cytokines and neuropeptides were analyzed using the Luminex assay. According to previous research [27], tear collection was performed before any other test and within a maximum of 10 min. Statistical Analyses Statistical analyses were performed using SPSS software (version 26.0; IBM Corp., Armonk, NY, USA). Normal distribution was checked using the Shapiro-Wilk test. Quantitative data are summarized as means ± standard deviations (SD) or medians (interquartile ranges) according to their normality distributions, whereas qualitative data were summarized using percentages. If the data did not conform to a normal distribution, the Mann-Whitney U test was used for two independent samples. Spearman's correlation coefficient was used to analyze the correlation between neuropeptides and ocular parameters. Statistical significance was set at P < 0.05. RESULTS This study included 38 patients with a mean age of 32.03 ± 6.37 years (range 21-46 years). Among them, 20 patients were enrolled in the post-FS-LASIK DED with ocular pain group and 18 in the post-FS-LASIK DED without ocular pain group. Table 1 shows the demographic characteristics of the two groups of patients. There were no significant differences in mean age, sex ratio, or spherical equivalent between the two groups. Severity and Features of Ocular Pain Table 2 shows the ocular pain characteristics of the post-FS-LASIK DED with ocular pain group. According to the NRS, 19 (95%) of patients with ocular pain experienced mild or moderate pain. Among the included patients, 12 (60%) reported mild pain (scores 1-3), 7 (35%) reported moderate pain (scores 4-6), and 1 (5%) reported severe pain (scores 7-10). The mean NRS score was 3.30. According to the NPSI-Eye, the most common types of ocular pain were stimulated pain provoked by wind (60%) and light (45%), pressure sensation (55%), and burning pain (45%) (Fig. 1). Patients in the post-FS-LASIK DED with ocular pain group also reported more severe dry eye symptoms (Fig. 2). Regarding the morphological parameters of the corneal nerve, the CNFD and CNFL were lower in the post-FS-LASIK DED with ocular pain group than in the post-FS-LASIK DED without ocular pain group (P = 0.043, P = 0.048, respectively). Corneal sensitivity was similar between the two groups (P = 0.125). Total DCs and activated DCs (aDCs) were reported as numbers per image (cells/image). There was no significant difference in the total number of DCs between the two groups (P = 0.063). However, the number of aDCs in the post-FS-LASIK DED group with ocular pain was higher than that in the post-FS-LASIK DED group without ocular pain (P = 0.041) (Table 3).
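To make the analysis plan concrete, here is a minimal sketch of the two nonparametric procedures named above (Mann-Whitney U for the group comparison, Spearman's coefficient for the pain-neuropeptide correlation), using simulated tear substance P values rather than the study data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated tear substance P concentrations for the two groups (illustrative only)
sp_pain    = rng.normal(loc=250.0, scale=60.0, size=20)
sp_no_pain = rng.normal(loc=180.0, scale=60.0, size=18)

# Mann-Whitney U test for two independent samples
u_stat, p_mw = stats.mannwhitneyu(sp_pain, sp_no_pain, alternative="two-sided")

# Spearman correlation between pain scores (NRS) and SP levels within the pain group
nrs = rng.integers(2, 8, size=20)
rho, p_rho = stats.spearmanr(nrs, sp_pain)

print(f"Mann-Whitney U = {u_stat:.1f}, P = {p_mw:.4f}")
print(f"Spearman rho = {rho:.2f}, P = {p_rho:.4f}")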
Cytokine and Neuropeptide Levels Tear cytokine and neuropeptide levels are presented in Table 4. There were no significant differences in any of the inflammatory cytokines (IL-17A, IL-23, IL-6, and IFN-γ) between the two groups (all P > 0.05). As for neuropeptide concentrations, the post-FS-LASIK DED with ocular pain group showed higher levels of oxytocin and SP than those of the post-FS-LASIK DED without ocular pain group (P = 0.021, P = 0.001, respectively); however, there were no significant differences in α-MSH, β-endorphin, or neurotensin levels between the two groups (all P > 0.05) (Fig. 3). [Fig. 2: The severity of dry eye symptoms in the two groups.] Correlations Between Ocular Pain, Neuropeptides, and Other Parameters In patients with chronic ocular pain, there was a significant correlation between the NRS scores and SP levels (r = 0.477, P = 0.033). In addition, a significant correlation was found between the NRS and HAMA scores (r = 0.479, P = 0.033) (Fig. 4). However, no significant correlation was found between ocular pain and the other parameters. DISCUSSION In this study, we characterized a cohort of patients with post-FS-LASIK DED with and without chronic ocular pain. The goal of this study was to delineate the ocular pain features, ocular characteristics, corneal nerves, DCs, tear film cytokines, and neuropeptides in post-FS-LASIK patients with DED and develop tear molecular profiles that are associated with ocular pain. The most common types of ocular pain in our study included stimulated pain (provoked by wind and light), pressure sensations, and burning pain. In a previous study, the symptoms of "burning," "sensitivity to wind and light," "pins and needles," or "shooting pain" were taken to indicate neuropathic pain instead of nociceptive pain [28]. These findings suggest that ocular pain types in post-FS-LASIK patients with DED may be neuropathic or a combination of neuropathic and nociceptive pain. In our study, the SIt value was lower in the post-FS-LASIK DED with ocular pain group than in the group without ocular pain. Decreased tear secretion can lead to ocular inflammation, which further sensitizes the polymodal and mechanonociceptor nerve endings, eventually inducing ocular pain [29]. Structural and functional dysfunctions in the ocular sensory pathways ultimately lead to neuropathic ocular pain. However, there were no significant differences in TBUT, CFS, or LG staining between the two groups, which indicates that tear film instability may not be the main cause of the ocular pain. Our findings revealed that although the ocular surface signs were similar in both groups, patients in the post-FS-LASIK DED with ocular pain group reported more severe dry eye symptoms and anxiety compared to those without ocular pain. Moreover, there was a positive correlation between the NRS and HAMA scores. Our findings suggest that chronic ocular pain may aggravate DED and anxiety symptoms in post-FS-LASIK patients with DED. These results are consistent with those of a previous study, which demonstrated that patients with more severe ocular pain exhibit higher OSDI scores and less healthy mental health indices [30,31]. On the basis of these findings, a thorough psychiatric and social history and referral for mental health evaluation may be beneficial for patients with chronic ocular pain. Regarding the morphology of the corneal sub-basal nerves, the ocular pain group had significantly lower CNFD and CNFL values than the patients without ocular pain. Zhang et al.
[32] compared the corneal sub-basal nerve between patients with ocular pain and healthy participants and found that both CNFD and CNFL were decreased in patients with ocular pain.This finding suggests that poor corneal nerve recovery may be related to the pathogenesis of ocular pain.The cornea is densely innervated by sensory neurons [33], which are responsible for pain perception when the ocular surface is exposed to harmful stimulation or inflammation [34].The injury of the ocular surface nerve upregulates the voltage-gated sodium channel of the neuron, thus reducing the threshold of signal transmission, including pain [35].Interestingly, both groups showed similar results in terms of corneal sensitivity.This may be because corneal perception mainly represents the density of the subepithelial nerve endings and does not completely reflect the density and length of the corneal sub-basal nerve. DCs are antigen-presenting cells that constitute the majority of immune cells in the cornea and are considered a key role in neuroimmune crosstalk [36,37].Dendrites of DCs are one of the morphological characteristics of ''activation.''When the immune response is activated, DCs become larger and have longer dendrites [38,39].In the present study, the total number of corneal DCs was similar between the Fig. 4 Correlations between NRS scores and tear SP levels and HAMA scores in post-FS-LASIK DED with ocular pain group.Correlation of NRS scores with tear levels of SP (A).Correlation between NRS scores and HAMA scores (B).The r and P values were determined using Spearman's correlation coefficient.NRS numerical rating scale; HAMA Hamilton Anxiety Rating Scale; SP substance P; DED dry eye disease; FS-LASIK femtosecond laser-assisted laser in situ keratomileusis two groups.However, aDC density was significantly higher in patients with ocular pain than in those without.These novel findings indicate new mechanisms by which DCs may be involved in regulating ocular pain.More aDCs showed an enhanced response to the antigens.Consequently, immune-targeted therapies may be effective strategies for treating ocular pain. Biomarkers in tears can potentially be used as indicators of ocular surface innervation status.Although the number of studies examining neuropeptides and their role in DED is increasing [40,41], only a few studies have investigated the relationship between neuropeptides and chronic ocular pain.In our study, higher tear SP and oxytocin neuropeptide levels were observed in patients with DED after FS-LASIK who experienced ocular pain, and a positive correlation was observed between NRS scores and tear SP levels.However, the levels of inflammatory cytokines in the tears were similar between the two groups.This suggests that nervous system function may account for ocular pain. SP plays a key role in the migration of immune cells and the expression of chemokines [42].Corneal nerve stimulation induces the local release of SP [43].Factors released from neurons are recognized as mediators of persisting pain [44].Similar to neuroinflammation, pain is not merely a ''messenger'' of peripheral tissue damage, but it can also trigger pro-inflammatory activity in the brain [45].Pain impairs trigeminal neuronal regulatory activity in the brain, which triggers the continuous release of SP and possibly other neuromediators into the ocular surface, leading to an excessive inflammatory response [46].Lasagni et al. 
revealed that stimulation of corneal nerves promoted ocular inflammation and initiated pain through the release of SP, and provided evidence that SP modulation can be exploited therapeutically [47].Furthermore, reduced corneal pain has been observed in SP-knockout mice [48].Our findings indicate that SP levels vary among the studies examined.This variation may be interpreted within the framework of methodological heterogeneity, including differences in assay techniques such as enzymelinked immunosorbent assay (ELISA) and Luminex assay.Furthermore, the discrepancy in SP levels could be due to the variable severity of ocular surface conditions present within the study cohorts, as well as the innate biological variation in SP levels across distinct patient populations. Oxytocin influences the immune and nervous systems and serves as an anti-inflammatory agent.[49] The second possible role of oxytocin is to accelerate nerve regeneration, probably by increasing the level of nerve growth factor.[50,51] However, few studies have investigated the role of oxytocin in ocular diseases.Interestingly, we found that oxytocin levels were higher in patients in the ocular pain group; however, previous studies did not find any correlation between oxytocin expression and other ocular parameters.The elevated tear oxytocin concentration in patients with ocular pain could be due to corneal nerve damage and ocular inflammation caused by ocular surgery and DED, which could also be supported by the elevated aDCs and reduced corneal nerves in our patients. This study had some limitations.First, we specifically concentrated on patients who underwent FS-LASIK, and consequently, we did not incorporate a healthy control group.This means that the findings are primarily applicable to the FS-LASIK patient population and may not be generalized beyond this group.Second, we did not differentiate between nociceptive and neuropathic pain in the patients.We will develop more detailed questionnaires and diagnostic protocols to further differentiate types of ocular pain, such as the Modified Single-Item Dry Eye Questionnaire (mSIDEQ), anesthetic challenge test, Belmonte's gas esthesiometry, and analysis of the presence of microneuromas.This approach, especially the investigation of microneuromas as a potential objective biomarker of corneal neuropathic pain [52], is expected to facilitate a more nuanced differentiation of ocular pain types and enhance our understanding of the pathogenic mechanisms underlying this condition.Third, since this was a cross-sectional observational study, we could not conclusively determine the causative effect for ocular pain.Further longitudinal studies are needed to evaluate the relationship between ocular pain severity and the dynamic changes in other ocular parameters in post-FS-LASIK patients with DED. CONCLUSION In summary, our study found that patients with DED after FS-LASIK who experienced ocular pain exhibited more severe dry eye and anxiety symptoms than those without ocular pain.This group also exhibited higher neuropeptide levels, lower corneal nerve densities, increased aDCs, and lower tear secretions.Understanding the characteristics and mechanisms of ocular pain can assist eye care practitioners in identifying diagnostic and management needs, targeting treatments, and improving outcomes. Fig. 1 Fig. 
[Fig. 1: The features of ocular pain in the post-FS-LASIK DED with ocular pain group. NPSI-Eye, Neuropathic Pain Symptom Inventory modified for the Eye; DED, dry eye disease; FS-LASIK, femtosecond laser-assisted laser in situ keratomileusis.]
[Table 3: Clinical characteristics of patients in each study group. Patients in the post-FS-LASIK DED with ocular pain group had higher OSDI (P < 0.001) and HAMA (P = 0.026) scores and shorter SIt values (P = 0.035) than those in the post-FS-LASIK DED without ocular pain group. DED, dry eye disease; FS-LASIK, femtosecond laser-assisted laser in situ keratomileusis; OSDI, Ocular Surface Disease Index; TBUT, tear breakup time; CFS, corneal fluorescein staining; LG, lissamine green; CNFD, corneal nerve fiber density; CNBD, corneal nerve branch density; CNFL, corneal nerve fiber length; CTBD, corneal nerve fiber total branch density; HAMA, Hamilton Anxiety Rating Scale; HAMD, Hamilton Depression Rating Scale; IQR, interquartile range. The Mann-Whitney U test was used for two independent samples.]
[Table 4: Concentrations of tear cytokines and neuropeptides in each study group. DED, dry eye disease; FS-LASIK, femtosecond laser-assisted laser in situ keratomileusis; IFN-γ, interferon-γ; IL, interleukin; α-MSH, α-melanocyte-stimulating hormone; SP, substance P; IQR, interquartile range.]
2024-01-09T06:17:28.818Z
2024-01-08T00:00:00.000
{ "year": 2024, "sha1": "311fb9ccb5f5cd011e7e7db8f61795bcb78f494f", "oa_license": "CCBYNC", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40123-023-00861-3.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a8d540cb01b2c0b66c8e7a5e749c959977d2db7d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
245475438
pes2o/s2orc
v3-fos-license
Low Healthy Diet Self-Efficacy and Intentions Associated with High Sweet Snacks and Sugar Sweetened Beverages Consumption among African American Adolescents Recruited from Low-Income Neighborhoods in Baltimore Psychosocial factors may influence consumption patterns of sweet snacks and sugar sweetened beverages (SSB), which are potential risk factors for obesity among African American (AA) adolescents. We used multivariable linear and logistic regression models to examine cross-sectional associations among psychosocial factors, sweet snacks and SSB consumption, and BMI z-scores in 437 AA adolescents aged 9–14 years living in low-income neighborhoods in Baltimore City, U.S.A. Mean caloric intake from sugar was 130.64 ± 88.37 kcal. Higher sweet snacks consumption was significantly associated with lower self-efficacy (adjusted Odds Ratio (aOR) = 0.81; 95% CI = 0.71 to 0.93) and lower food intentions scores (0.43; 0.30 to 0.61). Higher SSB consumption was associated with lower outcome expectancies (aOR = 0.98; 95% CI = 0.96–0.99), lower self-efficacy (0.98; 0.96 to 0.99), and lower food intentions (0.91; 0.87 to 0.95). No significant association was found between SSB and sweet snacks consumption and weight status. Psychosocial factors may play a role in sugar consumption behaviors among AA adolescents in low-income neighborhoods. Further studies are needed to improve our understanding of causal mechanisms of this association. Introduction The burden of obesity disproportionately affects African American (AA) adolescents in the U.S. The prevalence among AA adolescents has continued to increase over the last four decades, consistently higher than their non-Hispanic white counterparts [1,2]. According to the National Center of Health Statistics (NCHS) data in 2018, the number of AA boys with obesity was two times higher than white boys, and three times higher for girls [2]. This issue raises a substantial concern in public health, as obesity in adolescents is closely lined with lower self-esteem and lower school performance as well as higher risk of cardiovascular diseases and type 2 diabetes in adulthood [3][4][5]. Sugar sweetened beverages (SSB) and sweet snacks have long been recognized as some of the potential contributors to youth obesity. A number of studies have suggested that the consumption of sugar and SSB has decreased in the overall population since the late 1990s [6]. However, this decrease is not uniform across racial/ethnic groups or weight status; for instance, declines in the number of calories from SSB were observed between 2003-2006 and 2007-2010 among AA adolescents at healthy weight, but not with overweight/obesity [7]. In comparison to other racial or ethnic groups, studies have shown higher consumption of sugary products among African Americans [8][9][10]. Among schoolchildren, AA consume more non-soda SSB (e.g., lemonade, sports drinks) and less low-fat milk than their non-Hispanic white counterparts [8]. Given increased autonomous decisions on food choices [11], as well as the potential formation of long-lasting dietary habits during adolescence years [12,13], this period presents a crucial window of opportunity to implement nutritional interventions to promote healthy eating behaviors. The Social Cognitive Theory (SCT) by Albert Bandura provides a theoretical framework for explaining and predicting behavioral changes with six core constructs [14]. Some studies have found self-efficacy as the key construct associated with sweet snacks and SSB consumption [15]. 
However, this association may be context dependent as variable results have been observed across studies [16], which warrants further investigation in specific subgroups at high risk of high consumption of sugary products. Other psychosocial factors, such as knowledge and attitudes, may also be associated with, and perhaps even predictive of, dietary behaviors [17,18]. A previous study found that intervention to improve health knowledge was associated with lower sugary intake among racial/ethnic minority adolescents from low-income neighborhoods in Baltimore. The relationship between knowledge and sweet snacks consumption may be mediated by psychosocial factors such as attitude and intention. The same study in Baltimore found that among AA adolescents, limited knowledge related to sugar is associated with a more positive attitude towards sweet snacks and greater intention to purchase sweet snacks [19]. Further, adolescents who perceived SSB as "safe" or as not causing harm to the body had a higher risk to consume energy drinks more often [20]. Consideration to various psychosocial factors may improve the effectiveness of interventions to change dietary behaviors among adolescents. However, the relationship between these factors and dietary behaviors among AA adolescents from low-income neighborhoods is currently understudied. Our study aimed to strengthen the evidence about which potentially modifiable psychosocial factors may affect sugar consumption among AA adolescents with the following research questions: 1. What are the patterns of sweet snacks and SSB consumption in a sample of low-income urban AA adolescents? 2. What is the relationship between AA adolescents' psychosocial factors (healthy diet knowledge, outcome expectancies, self-efficacy, intention) and their sweet snacks and SSB consumption? 3. What is the relationship between sweet snacks and SSB consumption with overweight and obesity among AA adolescents living in low-income neighborhoods? Study Design and Sample The study used the baseline data from a group randomized controlled intervention trial named B'more Healthy Communities for Kids (BHCK) conducted in Baltimore, Maryland, USA [21]. BHCK was a multilevel and multicomponent intervention targeting low-income AA youth aged 9-15 to prevent obesity by intervening at multiple levels of the food and social environments by increasing access, demand, and consumption of healthier foods. Low-income areas in Baltimore City were selected as the study location due to its limited availability of healthy foods and food source in general [22]. The city had many corner stores, carry-out restaurants, and fast-food restaurants, the majority of which stock primarily items high in added sugar items [23]. In 2007, two-thirds of Baltimore city adult residents were either overweight or obese [24]. Baseline data were collected in two waves: July 2013 to June 2014 (Wave 1), and August 2015 to January 2016 (Wave 2)-data from both waves were combined and presented in this study. The detailed explanation of the intervention has been previously published [25]. Study participants were adolescents aged 9-14 and were actively recruited from 28 low-income, predominantly AA neighborhoods categorized as food deserts [25]. Participants were recruited from recreation centers, libraries, swimming pools, grocery stores, and back-to-school events. 
They were then screened for eligibility criteria, including (1) residing within a mile and a half radius of the neighborhood recreation center and (2) having no intentions of moving within the next two years. Data were collected by data collectors trained on enrollment, consenting process, general questionnaire techniques, and anthropometric measures. Adolescents who did not identify themselves as AA, had no measured body weight and height, or were underweight (n = 2) were excluded from the analyses. A total of 437 adolescents were analyzed in this study. Sweet snacks and SSB Intake Dietary intake was assessed using The Block Kids 2004 Food Frequency Questionnaire (BKFFQ)-a semi-quantitative FFQ, validated in adolescents [26]. The BKFFQ contains a list of foods identified by NHANES II as commonly consumed by adolescents. The consumption data were then analyzed by NutritionQuest (Berkeley, CA, USA). NutritionQuest calculates sweet snacks consumption as a percentage of total kcal from sweets, which included sweet and grain-based desserts (i.e., sweet cereal, ice cream, cookies, donuts, cake, chocolate candy, pudding flan). SSB include consumption of regular soda, sports drinks, sweetened iced tea, etc.) expressed in total kcal and grams consumed per day. Psychosocial Factors The study assessed various psychosocial factors through a series of scales that were developed and assessed previously, as described in detail elsewhere [21]. In brief, the following psychosocial factors were assessed for this study: Healthy diet knowledge was assessed using 14 question items by asking adolescents which food is a better option for healthy eating, for example, "which snacks has less sugar?". There were four response options for each question, each correct answer was scored as 1, and all other answers were scored as 0. The total score ranging from 3 to 14, with a mean ± SD of 9.1 ± 2.5 (α = 0.63) [27]. Healthy diet outcome expectancies were measured using 11 question items regarding the expected health outcome of eating and drinking foods and beverages, for example, "I would lose weight if I drank diet soda instead of regular soda". The answer choices included "true", "mostly true", "mostly false", "false", or "don't know". Two points were given for the "true" response, 1 for "mostly true". The total score ranging from 1 to 22, with a mean ± SD of 15.8 ± 3.65 (α = 0.61) [27]. Healthy diet self-efficacy was assessed by 12 items assessing how easy or how difficult it would be for adolescents to perform healthier eating behaviors, for example, "I can drink sugar-free drinks such as Crystal Light instead of fruit punch". The answers were scored from 0 (the lowest self-efficacy) to 3 (the highest self-efficacy). The total scores ranged from 7 to 36, with a mean ± SD of 28.4 ± 2.0 (α = 0.69) [27]. Healthy diet food intentions were measured by asking children what they would choose to eat, for example, "If you wanted a snack, which would you choose?". There were three answer choices; the healthy choice was scored as 1 and 0 otherwise. The total scores ranging from 0 to 11, with a mean ± SD of 3.6 ± 2.0, and weak reliability (α = 0.43) [27]. Weight Status Adolescent height and weight were measured using a Seca 213 Portable Measuring Rod Stadiometer and a Tanita BF697W Duo Scale. Measurements were taken in duplicate; a third measurement was taken if the difference in the first two measurements was greater than 0.25 in or 0.2 lb. The results were then averaged. 
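The scale descriptions above (summed item scores, Cronbach's α for internal consistency) translate into a few lines of analysis code. The sketch below uses randomly generated item responses and pingouin's cronbach_alpha purely for illustration; neither the data nor the software choice comes from the BHCK study itself.

import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(1)

# Hypothetical responses of 437 adolescents to the 12 self-efficacy items, each scored 0-3
items = pd.DataFrame(rng.integers(0, 4, size=(437, 12)),
                     columns=[f"self_eff_item_{i}" for i in range(1, 13)])

total_score = items.sum(axis=1)              # per-adolescent self-efficacy total (possible range 0-36)
alpha, ci95 = pg.cronbach_alpha(data=items)  # internal-consistency estimate

print(f"mean total score = {total_score.mean():.1f}, Cronbach's alpha = {alpha:.2f}")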
BMI was calculated (kg/m 2 ) as well as BMI-for-age percentile using the Center for Disease Control and Protection (CDC) growth charts, then categorized into normal, overweight, and obese. Adolescents were classified as having overweight if the BMI-for-age between ≥85th and <95th percentile and obesity if ≥95th percentile [28]. Statistical Analysis Data were analyzed using STATA/IC 16.1. An independent t-test was used to assess the difference in sweet snacks and SSB consumption based on respondents' sociodemographic characteristics that were continuous variables, and a chi-square test was used to assess categorical variables. Statistical significance was set at p-value < 0.05. Multivariable linear regressions were used after linearity, independency, normality, and equal variance assumptions were met. These models assessed the association between (1) each psychosocial factor as an independent variable and sweet snacks consumption (in percentage of total kcal and grams) as the dependent variable; and (2) each psychosocial factor and SSB consumption (in total daily kcal). Both models were adjusted for covariates, including adolescent age and sex, caregiver age and sex, household income, intervention zone, housing arrangement, and food assistance (SNAP) participation; these confounding factors were identified from previous published paper using BHCK dataset and found to be associated with adolescents' psychosocial factors and their consumption [29,30]. Confounding factors were then corrected by adding all covariates simultaneously to each model during the analysis. Logistic regressions were used to assess the association between (1) sweet snacks consumption (in percentage and grams) as independent variables and adolescents' weight status as dependent variable and (2) SSB consumption (in percentage and grams) and adolescents' weight status. Both models were adjusted for covariates, adolescent age and sex, total daily kcal intake, and household income; these confounding factors were identified from previous published paper using BHCK dataset and found to be associated with adolescents' psychosocial factors and their consumption [29,30]. Confounding factors were then corrected by adding all covariates simultaneously to each model during the analysis. Sample Characteristics and Pattern of Sweet Snacks and SSB Consumption Adolescents' average total daily caloric intake was 1735.96 ± 1063.82 kcal, the average percentage of calories from sweet snacks was 14.93 ± 7.28%, and the average daily caloric intake from SSB was 157.93 ± 157.98 kcal. In total, 22.48% of adolescents were overweight, and 26.15% were obese. A total of 87.61% of adolescents' caregivers were female, 39.08% were high school graduates, and 36.70% had annual household income > USD 30,000 (Table 1). A significant relationship between SSB consumption based on adolescents' age was found in multivariable models; younger adolescents aged 9-12 years consumed significantly less SSB compared to older adolescents aged 13-15 years. Moreover, adolescents with caregivers that were college educated consumed more sweet snacks than adolescents with a caregiver with less than high school education ( Table 2). Relationship between AA Adolescents' Psychosocial Factors and Sweet Snacks and SSB Consumptiom Adolescents with a higher score in healthy diet outcome expectancies tended to consume less SSB, but not sweet snacks (Table 3). Further, a higher healthy diet self-efficacy score was also associated with less sweet snacks and SSB consumption. 
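The weight-status coding and the covariate-adjusted models described above can be sketched as follows. The variable names, the simulated data, and the use of Python/statsmodels (rather than the Stata code the authors actually ran) are assumptions for illustration, and the covariate list is abbreviated relative to the full set reported.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 437

# Hypothetical analysis data set mirroring variables described in the text
df = pd.DataFrame({
    "bmi_pct":      rng.uniform(5, 99, n),     # BMI-for-age percentile (CDC growth charts)
    "ssb_kcal":     rng.gamma(2.0, 80.0, n),   # daily kcal from sugar sweetened beverages
    "self_eff":     rng.integers(7, 37, n),    # healthy-diet self-efficacy total score
    "total_kcal":   rng.normal(1736, 500, n),
    "age":          rng.integers(9, 15, n),
    "sex":          rng.choice(["M", "F"], n),
    "income_gt30k": rng.integers(0, 2, n),
})

# CDC-style categories: >=85th and <95th percentile = overweight, >=95th percentile = obese
df["weight_status"] = pd.cut(df["bmi_pct"], bins=[0, 85, 95, 100.001],
                             labels=["normal", "overweight", "obese"], right=False)
df["ow_ob"] = (df["bmi_pct"] >= 85).astype(int)

# Covariate-adjusted linear model: SSB kcal regressed on self-efficacy
linear = smf.ols("ssb_kcal ~ self_eff + age + C(sex) + income_gt30k", data=df).fit()

# Covariate-adjusted logistic model: overweight/obesity regressed on SSB kcal (adjusted OR per kcal)
logit = smf.logit("ow_ob ~ ssb_kcal + total_kcal + age + C(sex)", data=df).fit(disp=False)

print("linear beta (self-efficacy):", round(linear.params["self_eff"], 3))
print("adjusted OR per SSB kcal:   ", round(float(np.exp(logit.params["ssb_kcal"])), 4))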
However, the association was non-significant after adjustment for adolescents' and caregivers' age and sex, caregivers' education level, household income, and SNAP participation. In addition, adolescents with a higher score in healthy diet food intentions consumed fewer sweet snacks and SSB consumption. In contrast, there was no significant association between healthy diet knowledge and sweet snacks and SSB consumption, before or after adjustment. Relationship of Sweet Snacks and SSB Consumption with Overweight and Obesity We found no significant difference in the risk overweight and obesity based on sweet snacks and SSB consumption (Table 4), even after adjusting for adolescents' age, sex, and total daily caloric intake. Discussion To our knowledge, this is the first paper to assess the psychosocial factors associated with sweet snacks and SSB consumption in a sample of AA adolescents. We found higher sweet snacks consumption but lower SSB consumption among older adolescents than their younger counterparts. Higher healthy food self-efficacy and higher healthy food intentions were associated with a lower consumption of sweet snacks and SSB in this sample. Higher outcome expectancies were associated with lower SSB consumption before adjusting for adolescents' and caregivers' characteristics. However, we found that knowledge was not associated with the consumption of any of the foods and beverages assessed. Further, the consumption of sweet snacks and SSB was not significantly associated with adolescents' risk of overweight and obesity before and after adjusting for their age, sex, and total daily caloric intake. Data on grams and caloric intake in this study sample are consistent with intake data from national samples: sweet snacks was 14.9% kcal in the current study, which is comparable to the added sugar consumption of all-race adolescents as reported in the U.S. National Center for Health Statistics (NCHS) data 2005-2008 [31]; the daily caloric intake from SSB consumption was 157.93 kcal, comparable to the 2011-2014 NCHS data where the average SSB intake among AA boys was 167 kcal, and girls was 156 kcal [14]. Moreover, consistent with the NCHS data, the result of this study also suggests an increased risk of SSB intake among older adolescents [14]. This is probably related to the increasing autonomy, independence, and opportunities to decide what to drink outside the house among older adolescents [11]. An inverse association between self-efficacy and sweet snacks and SSB consumption was found in this study; adolescents with higher healthy diet self-efficacy are less likely to consume sweet snacks and SSB consumption. This result aligns with another study conducted among U.S. multi-racial adolescents [32]. According to the SCT, higher confidence in performing a healthy behavior result in a higher probability of actually engaging in that behavior [33]. The results of this study reinforce the benefit of increasing self-efficacy of a healthy diet to decrease sweet snacks and SSB consumption in AA adolescents living in low-income neighborhoods. The results of this study also found that higher healthy diet intention was associated with lower sweet snacks and SSB consumption in adolescents. While the findings from the current study demonstrate potential relationships, they are consistent with an intervention study in U.S. adolescents, which concludes that adolescents who receive intention training consume less SSB [34]. 
Further, the result of this study is also supported by a systematic review that found higher sugary snacking and SSB consumption among adolescents with a higher intention of unhealthy diet [35]. The results of this study indicated no significant association between healthy diet knowledge and consumption of sweet snacks and SSB. This result is aligned with other studies that report that knowledge of health risk was not associated with SSB consumption among adolescents [36]. Another study conducted in London, UK also showed no significant relationship between children's nutritional knowledge and their sweet snacks consumption [37]. According to SCT, knowledge is a more distal construct from behavior [33]; a lack of significant relationship between knowledge and health behavior is supported by the theory that knowledge by itself is insufficient to elicit a behavior [38]. The results of this study found a positive association between healthy diet outcome expectancies with SSB, but not with sweet snacks consumption. According to a systematic review on psychosocial factors of children and adolescents' eating behavior, the associations of outcome expectancies and dietary behavior are not consistent across studies [35]. Evidence on the association between outcome expectancies and sweet snacks and SSB consumption in U.S. adolescents is limited. However, a study on Taiwanese adolescents found that adolescents with greater expectations of negative outcomes consume fewer SSB [39], although this study involves vocational students majoring in food and beverages who on average had good nutritional knowledge and negative attitude toward SSB consumption and my not directly apply to the population in this study. The current study found no association between sweet snacks and SSB consumption and BMI-for-age. In contrast to our findings, a trial showed that a decrease in SSB consumption resulted in lower BMI among American adolescents [40]. The difference in the result may be due to the difference in the diet measurement methods; the respondents in the current study were not trained to quantify consumption, while the study used semiquantitative FFQ to measure the intake. Furthermore, the relationships between sweet snacks and SSB consumption with BMI-for-age in this study are potentially confounded by physical activity. In addition, this study implemented an observational design rather than an environmentally controlled trial. One strength of this study includes the focus on adolescence, a key period of development. While the national data shows consistent increases in obesity prevalence over the time, limited studies focus on obesity risk factors in this population, particularly among AA adolescents with low income. Furthermore, adolescence is an ideal target for obesity interventions as they experience higher autonomy in food decision-making. Thus, understanding the psychosocial factors associated with sweet snacks and SSB consumption is important to develop behavioral interventions to improve adolescents' diet. This study also has limitations. First, the use of a cross-sectional design prevents us from understanding the causal relationships of variables studied. Second, the sweet snacks and SSB consumption are self-reported, which may be subject to recall bias and influenced by social desirability. However, we expected this bias resulted in non-differential misclassification as the subjects of this study have homogenous characteristics. 
Third, we used a semi-quantitative FFQ to measure intake, so the results depend on which items were included in the list of sweet snacks and SSB. Finally, the Cronbach's alphas for the food intention questionnaire were weak, indicating poor inter-relatedness between questions. However, the questionnaire was a modified version of one previously validated in AA adolescents in Baltimore [21].
Conclusions
Social cognitive theory is often used to explain sugar consumption behavior [41,42]. The current study builds upon evidence suggesting that higher healthy diet self-efficacy and intentions are associated with lower consumption of sweet snacks and SSB in AA adolescents living in low-income neighborhoods. However, no association between healthy diet knowledge and sweet snacks and SSB consumption was found. These findings suggest the importance of improving adolescents' self-efficacy and food intentions to reduce sweet snacks and SSB consumption. Further, the results suggest that designing an intervention aimed solely at improving knowledge may be insufficient. Future interventions using a longitudinal design should be conducted to explore the causal and dose-response relationships of self-efficacy, food intention, and outcome expectancies with sweet snacks and SSB consumption in adolescents.
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to ethical issues.
The fundamentals of quantum machine learning Within the past few years, we have witnessed the rising of quantum machine learning (QML) models which infer electronic properties of molecules and materials, rather than solving approximations to the electronic Schrodinger equation. The increasing availability of large quantum mechanics reference data sets have enabled these developments. We review the basic theories and key ingredients of popular QML models such as choice of regressor, data of varying trustworthiness, the role of the representation, and the effect of training set selection. Throughout we emphasize the indispensable role of learning curves when it comes to the comparative assessment of different QML models. Introduction Society is becoming increasingly aware of its desperate need for new molecules and materials, be it new antibiotics, or efficient energy storage and conversion materials. Unfortunately, chemical compounds reside in, or rather hide among, an unfathomably huge number of possibilities, also known as chemical compound space (CCS). CCS is the set of stable compounds which can be obtained through all combinations of chemical elements and interatomic distances. For medium-sized druglike molecules CCS is believed to exceed 10 60 (Kirkpatrick and Ellis 2004). Exploration in CCS and locating the "optimal" compounds is thus an extremely difficult, if not impossible, task. Typically, one needs to constrain the search domain in CCS and obtains certain pertinent properties of compounds within the subspace, and then choose the compounds with properties which come closest to some preset criteria as potential candidates for subsequent updating or validation. Of course, one can conduct experiments for each compound. Alternatively, one can also attempt to estimate Institute of Physical Chemistry and National Center for Computational Design and Discovery of Novel Materials (MARVEL), Department of Chemistry, University of Basel, Klingelbergstrasse 80, 4056 Basel, Switzerland, e-mail: anatole.vonlilienfeld@unibas.ch its properties using modern atomistic simulation tools which, within one approximation or the other, attempt to solve Schrödinger's equation on a modern powerful computer. The latter approach is practically more favorable and referred as high-throughput (HT) computational screening (Greeley et al 2006). In spite of its popularity, it is inherently limited by the computational power accessible considering that 1) the number of possible compounds is much larger than what HT typically is capable of dealing with (∼10 3 ) and 2) often very time-consuming explicitly electron correlated methods are necessary to reach chemical accuracy (1 kcal/mol for energies), with computational cost often scaling as O(N 6 ) (N being the number of electrons, a measure of the system size). Computationally more efficient methods generally suffer from rather weak predictive power. They range from force-fields and semiempirical molecular orbital methods, density functional theory (DFT) methods to so-called linear scaling methods which assume locality by virtue of fragments or localized orbitals (Kitaura et al 1999). It remains an outstanding challenge within conventional computational chemistry that efficiency and accuracy apparently cannot coexist. To tackle this issue, Rupp, et al (Rupp et al 2012) introduced a machine learning (ML) Ansatz in 2012, capable of predicting atomization energies of out-of-sample molecules fast and accurately for the the first time. 
By now many subsequent studies showed that ML models enable fast and yet arbitrarily accurate prediction for any quantum mechanical property. This is no "free lunch", however, the price to pay consists of the acquisition of a set of pre-calculated training data sets which must be sufficiently representative and dense. So what is machine learning? It is a field of computer science that gives computers the ability to learn without being explicitly programmed. (Samuel 2000) Among the broad categories of ML tasks, we focus on a type called supervised learning with continuous output, which infers a function from labeled training data. Putting it formally, given a set of N training examples of the form {(x 1 ,y 1 ), (x 2 ,y 2 ), · · · , (x N ,y N )} with x i and y i being respectively the input (the representation) and output (the label) of example i, a ML algorithm models the implicit function f which maps input space X to label space Y . The trained model can then be applied to predict y for a new input x (belonging to the so-called test set) absent in the training examples. For quantum chemistry problems, the input of QML (also called representation) is usually a vector/matrix/tensor directly obtained from composition and geometry {Z I , R I } of the compound; while the label could be any electronic property of the system, notably the energy. The function f is implicitly encoded in terms of the non-relativistic Schrödinger equation (SE) within the Born-Oppenheimer approximation,ĤΨ = EΨ , whose exact solution is unavailable for all but the smallest and simplest systems. To generate training data, methods with varied degrees of approximation have to be used instead, such as the aforementioned DFT, QMC, etc. Given a specific pair of X and Y , there are multiple strategies to learn the implicit function f : X → Y . Some of the most popular ones are artificial neural network (ANN, including its various derivatives, such as convolutional neural network) and kernel ridge regression (KRR, or more generally Gaussian process regression). Based on a recent benchmark paper (Faber et al 2017b), KRR and ANN are competitive in terms of performance. KRR, however, has the great advantage of simplicity in interpretation and ease in training, provided an efficient representation is used. Within this chapter, we therefore focus on KRR or Gaussian processes exculsively. See section 2 for more details. Often, each training example is represented by a pair (x i , y i ). However, multiple {y j } i can also be used, e.g. when multiple labels are available for the same molecule, possibly resulting from different levels of theory. The latter situation can be very useful for obtaining highly accurate QML models with scarcely available accurate training data and coarse data being easy to obtain. Multi-fidelity methods take care of such cases and will be discussed in section 3. Once the suitable QML model is selected, be it either in terms of ANN, KRR or in terms of a multi-fidelity approach, two additional key factors will have a strong impact on the performance: The materials representation and the selection procedure of the training set. The representation of any compound should essentially result from a bijective map which uses as input the same information which is also used in the electronic Hamiltonian of the system, i.e. compositional and structural information {Z I ,R I } as well as electron number. The representation is then typically formatted into a vector which can easily be processed by the computer. 
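To make the supervised-learning setup just described concrete, here is a minimal, purely illustrative Python sketch (not taken from any published QML code): one training example built from composition and geometry {Z_I, R_I} plus a scalar label, and a placeholder featurize function standing in for any of the representations discussed in section 4. All names, values, and the padding length are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# One training example: composition and geometry {Z_I, R_I} on the input side,
# and a scalar label (e.g. an energy) on the output side. All values here are
# placeholders, not real reference data.
example = {
    "Z": np.array([6, 1, 1, 1, 1]),              # a methane-like composition
    "R": rng.normal(scale=0.7, size=(5, 3)),     # arbitrary placeholder coordinates
    "y": -40.5,                                  # placeholder label
}

def featurize(Z, R, length=32):
    """Map {Z_I, R_I} to a fixed-size vector x (placeholder representation).

    Any of the representations discussed in section 4 (Coulomb matrix, BoB,
    SLATM, ...) would go here; this stub simply concatenates the sorted
    nuclear charges and the sorted interatomic distances, zero-padded.
    """
    d = np.linalg.norm(R[:, None, :] - R[None, :, :], axis=-1)
    pairs = np.sort(d[np.triu_indices(len(Z), k=1)])[::-1]
    x = np.concatenate([np.sort(Z)[::-1].astype(float), pairs])
    return np.pad(x, (0, length - len(x)))

x_i, y_i = featurize(example["Z"], example["R"]), example["y"]
print(x_i.shape, y_i)
```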
Some characteristic representations, introduced in the literature, are described in section 4, where we will see how the performance of QML models can be enhanced dramatically by accounting for more of the underlying physics. In section 5, further improvements in QML performance are discussed resulting from rational training set selection, rather than from random sampling. Having introduced the basics of ML, we are motivated to point out two aspects of ML that may not be obvious for better interpretation of how ML works: 1) ML is an inductive approach based on rigorous implementation of inductive reasoning and it does not require any a priori knowledge about the aforementioned implicit function f (see section 2), though some insight of what f may look like is invaluable for rational design of representation (see section 4); 2) ML is of interpolative nature, that is, to make reasonable prediction, the new input must fall into the interpolating regime. Furthermore, as more training examples are added to the interpolating regime, the performance of the ML model can be systematically improved for a quantified representation (see section 4). As a sidenote, we would like to mention the importance of turning basic theories of QML into user-friendly and efficient code, so that anybody in the community can benefit from these new developments. Among multiple options, the recently released QML code (Christensen et al 2017) covers a substantial number of QML models, some of which are presented in the following sections. Gaussian process regression In this section, we discuss the basic idea of data driven prediction of labels: the Gaussian process regression (GPR). In the case of a global representation (i.e., the representation of any compound as a single vector, see section 4 for more details), the corresponding QML model takes the same form as in kernel ridge regression (KRR), also termed the global model. GPR is more general than KRR in the sense that GPR is equally applicable to local representations (i.e., the representation of any compound as a 2D array, with each atom in its environment represented by a single vector, see section 4 for more details). Local GPR models can still successfully be applied when it comes to the prediction of extensive properties (e.g., total energy, isotropic polarizability, etc.) which profit from near-sightedness. The locality can be exploited for the generation of scalable GPR based QML models which can be used to estimate extensive properties of very large systems. The global model Here we review the Bayesian analysis of the nonlinear regression model (Rasmussen and Williams 2006) with Gaussian noise ε: where x ∈ X is the representation, w is a vector of weights, and φ (x) is the basis function (or kernel) which maps a D-dimensional input vector x into an N dimensional feature space. This is the space into which the input vector is mapped, e.g., for an input vector x 1 = (x 11 , x 12 ) with D = 2, its feature space could be φ (x 1 ) = (x 2 11 , x 11 x 22 , x 22 x 11 , x 2 22 ) with N = 4. y is the label, i.e. the observed property of target compounds. We further assume that the noise ε follows an independent, identically distributed (iid) Gaussian distribution with zero mean and variance λ , i.e., ε ∼ N (0, λ ), which gives rise to the probability density of the observations given the parameters w, or the likelihood where φ (X) is the aggregation of columns φ (x) for all cases in the training set. 
Now we put a zero mean Gaussian prior with covariance matrix Σ p over w to express our beliefs about the parameters before we look at the observations, i.e., w ∼ N (0, Σ p ). Together with Bayes' rule distribution of w can be updated as The updated w is called the posterior with mean w. Thus, similar to equation (4), the predictive distribution for y * = f (x * ) is p(y * |x * , X, y) = p(y * |x * , w)p(w|X, y)dw. Substituting equation (2) and (5) into equation (6), which can be further simplified to p(y * |x * , X, y) = N (ȳ * ,λ ) withȳ * andλ being respectivelyȳ where I is the identity matrix, K(X, is the kernel matrix (also called covariance matrix, abbreviated as Cov). It's not necessary to know φ explicitly, their existence is sufficient. Given a Gaussian basis function, i.e., φ (x) = exp(−(x − x 0 ) 2 /(2l 2 )) with x 0 and l being some fixed parameters, it can be easily shown that the (i, j)-th element of kernel matrix K is where || · || p is the L p norm, σ is the kernel width determining the characteristic length scale of the problem. Note that we have avoided the infeasible computation of feature vectors of infinite size by using some kernel function k. This is also called the kernel trick. Other kernels can be used just as well, e.g. the Laplacian kernel, Rewriting equation 8, we arrive at a more concise expression in matrix form, where c is the regression coefficient vector, Equation (11) can also be obtained by minimizing the cost function C(w) = 1 2 ∑ i (y i − w φ (x i )) 2 + λ 2 ||w|| 2 2 , with respect to w. Note that L 2 regularization is used here, together with a regularization parameter λ acting as a weight to balance min-imizing the sum of squared error (SSE) and limiting the complexity of the model. This eventually leads to a model called kernel ridge regression (KRR) model. All variants of these global models, however, suffer from the scalability problem for extensive properties of the system such as energy, i.e., the prediction error grows systematically with respect to query system size (predicted estimates will tend towards the mean of the training data while extensive properties grow). This limitation is due to the interpolative nature of global ML models, that is, the predicted query systems and their properties must lie within the domain of training data. The local version The scalability problem can be overcome by working with local, e.g. atomic, representations. This relies on the idea that one can decompose a global extensive property of the system into local contributions. Among the many ways to partition systems into building-blocks, we select the atom-in-molecule (AIM) idea, put forth many years ago by Bader (Bader 1990). For the total energy (E) of the system, it is usually expressed as a sum over atomic energies (e), where Ω I is the atomic basin determined by the zero-flux condition of the electron density, ∇ρ(r s ) · n(r s ) = 0, for every point r s on the surface S(r s ) where n(r s ) is the unit vector normal to the surface at r s . The advantage of using Bader's scheme is that the total energy is exactly recovered, and that, at least in principle, it includes all short-and long-ranged bonding, i.e. covalent as well as non-covalent (e.g., van der Waals interaction, Coulomb interaction, etc.). Furthermore, due to nearsightedness of atoms in electronic system (Prodan and Kohn 2005), atoms with similar local chemical environments contribute a similar amount of energy to the total energy. Using the notion of alchemical derivatives, this effect, a.k.a. 
chemical transferability, has recently been demonstrated numerically Fias et al (2017a). Thus it is possible to learn effective atomic energies based on a representation of the local atoms. Unfortunately, the explicit calculation of local atoms is computationally involved (the location of the zero-flux plane is challenging for large molecules), making this approach less favorable. Instead, we can also assume that the aforementioned Bayesian model is applicable to atomic energies as well, i.e., where x I is an atomic representation of atom I in a molecule. By summing up terms on both sides in equation 15, we have Following Bartok (Bartók et al 2010), the covariance of the total energies of two compounds can be expressed as where I and J run over all the respective atomic indices in molecule i and j, and where x I i is the representation of atom I in molecule i. By inserting equation (17) in equation (11), we arrive at the formula for the energy prediction of a molecule * out-of-sample, where c i = ∑ j ([K + λ I] −1 ) i j E j . This equation can be rearranged, where the atomic contribution of atom J to the total energy can be decomposed into a linear combination of contributions from each training compound i, weighted by its regression coefficient, The "basis-function"ẽ J * i in this expansion simply consist of the sum over kernel similarities between atom J and atoms I ∈ i, where the contribution of atom I grows with its similarity to atom J,ẽ We note in passing that the value of the covariance matrix element (i.e., equation (17)) increases when the size of either system i or j grows, indicating that the scalability issue can be effectively resolved. Hyper-parameters Within the framework of GPR or KRR, there are two sets of parameters: 1) parameters that are determined via training, i.e., the coefficients c (see equation (12)), whose number grows with the training data; 2) hyperparameter whose value is set before the learning process begins, i.e., the kernel width σ in equation (11) and λ in equation (2). As defined in section 2.1, λ measures the level of noise in the training data in GPR. Thus, if the training data is noise free, λ can be safely set to zero or a value extremely close to zero (e.g., 1 × 10 −10 ) to reach optimal performance. This is generally true for datasets obtained by typical quantum chemical calculations and the resulting training error is (almost) zero. Whenever there is noise in the data (e.g., from experimental measurements), the best λ corresponds to some finite value depending on the noise level. The same holds for the training error. In terms of KRR, λ seems to have a completely different meaning at first glance: the regularization parameter determining the complexity of the model. In essence, they amount to the same, i.e., a minute or zero λ corresponds to the perfectly interpolating model which connects every single point in the training data, thus representing the most faithful model for the specific problem at hand. One potential risk is poor generalization to new input data (test data), as there could be "overfitting" scenarios for training sets. A finite λ assumes some noise in the training data and the model can only account for this in an averaged way, thus the model complexity is simplified to some extend by lowering the magnitude of parameters w so as to minize the cost function C(w). Meanwhile, some finite training error is introduced. To recap, the balance between SSE and regularization is vital, and reflected by a proper choice of λ . 
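As a minimal numerical sketch of the global model summarized above (a Gaussian kernel, coefficients obtained from (K + λI)c = y, predictions from y* = K(X*, X)c), the following snippet may be helpful. It uses synthetic feature vectors in place of molecular representations and is an illustration under those assumptions, not the implementation of any published QML package.

```python
import numpy as np

def gaussian_kernel(A, B, sigma):
    """k(x, x') = exp(-||x - x'||_2^2 / (2 sigma^2)), evaluated for all pairs of rows."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-np.clip(d2, 0.0, None) / (2.0 * sigma**2))

def krr_train(X, y, sigma, lam=1e-10):
    """Regression coefficients c = (K + lam*I)^(-1) y; lam plays the role of lambda above."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def krr_predict(Xquery, Xtrain, c, sigma):
    """Predictive mean y* = K(X*, X) c of the global model."""
    return gaussian_kernel(Xquery, Xtrain, sigma) @ c

# Toy usage: random feature vectors stand in for molecular representations.
rng = np.random.default_rng(1)
Xtrain = rng.normal(size=(200, 16))
ytrain = np.sin(Xtrain.sum(axis=1))
c = krr_train(Xtrain, ytrain, sigma=5.0)
print(krr_predict(Xtrain[:3], Xtrain, c, sigma=5.0), ytrain[:3])
```

In practice, for noise-free quantum chemistry reference data lam can be kept minute, as discussed above, so that the kernel width sigma remains the only hyper-parameter to tune.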
Unlike λ , the optimal value of σ (σ opt ) is more dataset specific. Roughly speaking, it is a measure of the diversity of the dataset and controls the similarity (covariance matrix element) of two systems. Typically σ opt gets larger when the training data expands into a larger domain. The meaning of σ can be elaborated by considering two extremes: 1) when σ approaches zero, the training data will be reproduced exactly, i.e., c i = y i , with high error for test data, i.e. with deviation to mean; 2) when σ is infinity, all kernel matrix elements will tend towards one, i.e., a singular matrix, resulting in large errors in both training and test. Thus, the optimal σ can be interpreted as a coordinate scaling factor to render the kernel matrix wellconditioned. For example, selected the lower bound of the kernel matrix elements to be 0.5. For a Gaussian kernel, this implies that K min = exp(−D 2 max /2σ 2 opt ) ≈ 0.5, or σ opt ≈ D max / √ 2 ln 2, where D max is the largest distance matrix element of the training data. Following the same reasoning, σ opt can be set to D max / ln 2 for a Laplacian kernel. The above heuristics are very helpful to quickly identify reasonable initial guesses for hyper-parameters for a new data set. Subsequently, the optimal values of the hyper-parameters should be fine-tuned through k-fold cross-validation (CV). The idea is to first split the training set into k smaller sets and 1) for each of the k subsets, a model is trained using the remaining k − 1 subsets as training data; the resulting model is tested on the remaining part of the data to calculate the predictive error); this step yields k predictions, one for each fold. 2) The overall error reported by k-fold cross-validation is then the average of the above k values. The optimal parameters will correspond to the ones minimizing the overall error. This approach can become computationally demanding when k and the training set size are large. But it is of major advantage in problem such as inverse inference where the number of samples is very small, and its systematic applications minimizes the likelihood of statistical artefacts. Learning curves To assess the predictive performance of a ML model, we need to know not only the prediction error (ε, which can be characterized by the mean absolute error (MAE) or root mean squared error (RMSE) of prediction) for a specific training set, but also predictive errors for varied sizes of training sets. Therefore, we can monitor how much progress we have achieved after some incremental changes to the training set size (N) so as to extrapolate to see how much more training data is needed to reach a desirable accuracy. The plot of ε versus N relationship is called the learning curve (LC), and examples are shown in Fig. 1. (note that only test error, i.e., MAE for the prediction of new data in test set, is shown; training errors are always zero or minute for noise-free training data). Multiple factors control the shape of learning curve, one of which is the choice of representation. If the representation cannot uniquely encode the molecule, i.e., there may exist cases that two different molecules share the same input vector x i but with different molecular properties, then it causes ambiguity to the ML algorithm (see more details in section 4.1) and may consequently lead to no learning at all, as illustrated by the dashed curve in Fig. 1, with distinguishable flattening out behavior at larger training set sizes, resulting in poor ML performance. 
In the case of a unique representation, according to (Fasshauer and McCourt 2016), it can be proved that for kernel based approximation, when the training set size N is sufficiently large, the predictive error is proportional to the so-called "fill distance" or mesh norm h X , defined as where "sup" stands for the supremum (or the least upper bound) of a subset, x is again the representation of any training instance as an element of the training set X, Ω represents the domain of studied systems (i.e., potential energy surface domain for chemistry problems). Clearly from the definition, fill distance describes the geometric relation of the set X to the domain Ω and quantifies how densely X covers Ω. Furthermore, fill distance intrinsically contains a dimension dependence d, that is, h X scales roughly as N −1/d if x are uniform or random grid points in a d dimensional space. Apart from the exponent, there should also be a prefactor, thus the leading term of the overall predictive error can be described as b * N −a/d , where a in the exponent is a constant. Therefore, to visualize the error vs. N, a log-log scale is the most convenient for which the learning curve can be represented by a linear relationship: log(ε) ≈ log(b) − a d log(N), thus a/d quantifies the rate of learning, while the prefactor log(b) is the vertical offset of the learning curve. Through a series of numerical calculations of learning a 1D Gaussian function as well as ground state properties of molecules with steadily improving physics encoded in the representation, it has been found (Huang and von Lilienfeld 2016) that the offset log(b) is a measure of target property similarity, which is defined as the deviation of proposed model (corresponding to the representation used) from the true model (Huang and von Lilienfeld 2016). While in general, we do not know the true function (machine learning would be meaningless if we did) we often do have considerable knowledge about relative target similarity of different representations. Applying the findings above to chemistry problems, we can thus obtain some insight in how learning curves will behave. Several observations can be explained: First, the learning rate would be almost a constant or changes very little when different unique representations are used, as the rate depends primarily on the domain spanned by molecules considered in the potential energy surface. Secondly, for a series of isomers it is much easier to learn their properties in their relaxed equilibrium state than in a distorted geometry. The limitation that the learning rate will not change much for random sampling with unique representations seems to be an big obstacle towards more efficient ML predictions, meaning that developing better representation (to lower the offset) can become very difficult even if substantial effort has been invested. However, is it possible to break this curse, reaching an improved learning curve as illustrated by the pink line in Fig. 1? We believe that this should be possible. Note how the linear (log-log) learning curve is obtained for statistical models. This implies that there must be 'redundancy' in the training data; and if we were able to remove those redundancies a priori, we might very be able to boost the performance and observe superior LCs, such as the pink line in Fig. 1with large learning rates. In such a case, statistics is unlikely to hold and the LC may be just a monotonically decreasing function, possibly also just a damped oscillator, rather than a line. 
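A learning curve of the kind discussed above can be generated and characterized by a small numerical experiment. The sketch below is illustrative only: it replaces quantum chemistry data by a synthetic smooth target function, trains a kernel ridge model at increasing training set sizes, and fits the linear log-log relationship log(ε) ≈ log(b) − (a/d) log(N); all parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel(A, B, sigma=3.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-np.clip(d2, 0.0, None) / (2.0 * sigma**2))

# Synthetic stand-in for a QM data set: 8-dimensional inputs, smooth scalar target.
X = rng.uniform(-1, 1, size=(4096, 8))
y = np.sin(X.sum(axis=1)) + 0.1 * np.cos(3.0 * X[:, 0])
Xtest, ytest = X[:1024], y[:1024]
Xpool, ypool = X[1024:], y[1024:]

sizes, maes = [64, 128, 256, 512, 1024, 2048], []
for N in sizes:
    Xtr, ytr = Xpool[:N], ypool[:N]                      # draw of N training points
    c = np.linalg.solve(kernel(Xtr, Xtr) + 1e-10 * np.eye(N), ytr)
    maes.append(np.abs(kernel(Xtest, Xtr) @ c - ytest).mean())

# Linear fit in log-log space: log(err) ~ log(b) - (a/d) * log(N)
slope, offset = np.polyfit(np.log(sizes), np.log(maes), 1)
print("learning rate a/d ~", -slope, "   offset log(b) ~", offset)
```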
Strategies for rational sampling will be elaborated in detail in section 5. Multi-level learning By default, we assume for each x i ∈ X there exists one corresponding y i ∈ Y in the training examples. It makes perfect sense if Y is easy to compute, i.e., in the circumstance that a relatively low accuracy of Y suffices (e.g., PBE with a medium sized basis set). It is also possible that a highly accurate reference data is required (e.g., CCSD(T) calculations with a large basis set) so as to achieve highly reliable predictions. Unfortunately, we can only afford few highly accurate x and y's for training considering the great computational burden. In this situation one can take great advantage of the y's with lower levels of accuracy which are much easier to obtain. Models which shine in this kind of scenario are called multi-fidelity, where reference data based on a high (low) level of theory is said to have high (low) fidelity. The nature of this approach is to explore and exploit the inherent correlation among data sets with different fidelities. Here we employ Gaussian process as introduced in section 2 to explain the main concepts and mathematical structure of multi-level learning. The most important term in multi-fidelity theory is the covariance between y (1) and y (2) , which represents the inherent correlation between data sets with different levels of fidelity and is derived as Cov(y (1) , y (2) ) = K 12 = ρCov(X, X) = ρK 1 due to the same independence restriction. Now the multi-fidelity structure can be written in the following compact form of a multivariate Gaussian process: where K 11 = K 1 , K 22 = K 2 , K 12 = K 21 due to symmetry. The importance of ρ is quite evident from the term K 12 ; specifically, when ρ = 0, the high fidelity and low fidelity models are completely decoupled and there will be no improvements of the prediction at all by combining the two models. The next step is to make prediction of y (2) * given the corresponding input vector x * , two levels of training data {X, y (1) } and {X, y (2) }. To this end, we first write down the following joint density: where (2) * ); then following similar procedures as in section 2.1, the final predictive distribution of y (2) * |X * , X, y (1) , y (2) is again a Gaussian N (ȳ (2) * , Var), whereȳ We note in passing that since there are two correlations function K 1 and K 2 , two sets of hyper-parameters regarding the kernel width and an extra scaling parameter ρ have to be optimized following the similar approach as explained in section (2.4). This algorithm has already successfully been applied to the prediction of band gaps of elpasolite compounds with high accuracy (Pilania et al 2017). But it can be naturally extended to other properties. So far, not much work has been done using this algorithm, its potential to tackle complicated chemical problems has yet to be unraveled by future work. ∆ -Machine learning A naive version of multi-fidelity learning is the so called ∆-machine learning model. Its performance is useful for the prediction of various molecular properties (Ramakrishnan et al 2015a). In this model, N 1 is equal to N 2 , the low and high-fidelity models are respectively called baseline and target. The baseline property (y (b) ) is associated with baseline geometry as encoded in its representation x (b) ), and target property y (t) is associated with target eometry x (t) , respectively. 
The workhorse of this model is Note that we did not use the target geometry at all for the reason that 1) it is expensive to calculate; 2) it is not necessary for the test molecules. The ∆ -ML model has been shown to be capable of yielding highly accurate results for energies if a proper baseline model is used. Other properties can also be predicted with much higher precision compared to traditional single fidelity model (Ramakrishnan et al 2015a). What is more, this approach can save substantial computational time. However, the ∆ -machine learning model is not fully consistent with the multi-fidelity model. The closest scenario is that we set K 1 = K 2 when evaluating kernel functions in equation (31), but this will result in something still quite different. There are further issues one would like to resolve, including that (i) the coupling between different fidelities is not clear and that the correlation is rather naively accounted for through the ∆ of the properties from two levels, assuming a smooth transition from one property surface (e.g., potential energy surface) from one level of theory to another. This is questionable and may fail terribly in some cases; (ii) it requires the same amount of data for both levels, which can be circumvented by building recursive versions. Representation The problem of how to represent a molecule or material has been a topic dating back to many decades ago and the wealth of information (and opinions) about this subject is well manifested by the collection of descriptors compiled in Todeschini and Consonni's Handbook of molecular descriptors (Todeschini and Consonni 2008). According to these authors, the molecular descriptor is defined as "the final result of a logic and mathematical procedure which transforms chemical information encoded within a symbolic representation of a molecule into a useful number or the result of some standardized experiment". Whilst the majority of these descriptors are graph-based and used for quantitative structure and activity relationships (QSAR) applications (typically producing rather rough correlation between properties and descriptor), our focus is on QML models, i.e. physics based, systematic and universal predictions of well-defined quantum mechanical observables, such as the energy von Lilienfeld (2018). Thus, to better distinguish the methods reviewed herewithin from QSAR, we prefer to use the term "representation" rather than "molecular descriptor". Quantum mechanics offers a very specific recipe in this regard: A chemical system is defined by its Hamiltonian which is obtained from elemental composition, geometry, and electron number exclusively. As such, it is straightforward to define the necessary ingredients for a representation: It should be some vector (or fingerprint) which encodes the compositional and structural information of a given neutral compound. The essentials of a good representation There are countless ways to encode a compound into a vector, but what representation can be regarded as "good"? Practically, a good representation should lead to a decent learning curve, i.e., error steadily decreases as a function of training set size. Conceptually, it should fulfill several criteria, including primarily uniqueness (non-ambiguity), compactness and being size-extensive (von Lilienfeld et al 2015). Uniqueness (or being non-ambiguous) is indispensable for ML models. We consider a representation to be unique if there is no pair of molecules that produces the same representation. 
Lack of uniqueness would results in serious consequences, such as ceasing to learn at an early stage or no learning at all from the very beginning. The underlying origin is not hard to comprehend. Consider two representation vectors x 1 and x 2 for two compounds associated with their respective properties y 1 and y 2 . Now suppose x 1 = x 2 while y 1 = y 2 (no degeneracy is assumed). One extreme case is that only these two points are used when training the ML model, obviously we will encounter a singular kernel matrix with all elements being 1; huge prediction errors will result and basically there is no learning. Even if molecules like these are not chosen for training it should be clear that such a representation introduces a severe and systematic bias. Furthermore, when trying to predict y 1 and y 2 after training, the estimate will be the same as the input to the machine is the same. The resulting test error is therefore directly proportional to their property difference. The compactness requires atom index permutation, rotational and translational invariance, i.e., all redundant degrees of freedom of the system should be removed as much as possible while retaining the uniqueness. This can lead to a more robust representation, meaning 1) the size of training set needed may be significantly reduced; 2) the dimension of the representation vector (thus the size) is minimized, a virtue which becomes important when the necessary training set size becomes large. Being size-extensive is crucial for prediction of extensive properties, among which the most important, the energy. This leads to the so-called atomic representation or local representation of an atom in a compound. The local unit atom can also consist of bonds, functional groups or even larger fragments of the compound. As pointed out in section 2.2, this type of representation is the crucial stepping stone for building scalable machine learning models. Even intensive properties such as HOMO-LUMO gap which typically do not scale with system size, can be modeled within the framework of atomic representations, as illustrated using the Re-Match metric (De et al 2016). For specific problems, such as force predictions, an analytic form of representation is desirable for analysis and rapid evaluation, and for subsequent differentiation (with respect to nuclear charges and coordinates) so as to account for response properties. Rational design It is not obvious how to obtain an optimal representation. In order to obtain a good representation, one has to gain intensive knowledge about the system and structure-property relationship. Use of simplified approximations to solutions of Schrödinger's equation are particularly powerful. The most approximative, yet atomistic, models of SE are universal force fields (FF) which typically reproduce the essential physics for certain system classes, such as bio-organic molecules, reasonably well. Namely, the atom-pairwise two-body interactions in force-fields typically decay as 1/R n (R being the internuclear distance and n being some integer), while 3and 4-body parts behave as periodic functions of angle and dihedral angle (modern force field approaches also include 2-to (n − 1)-body interaction in n-body interactions). 
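The invariance requirements listed above can be checked numerically for any candidate representation. The sketch below uses a toy two-body representation, chosen only to make the checks concrete (being purely two-body, it is not guaranteed to be unique), and verifies invariance under atom-index permutation and under rigid rotation plus translation; geometry and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def sorted_pair_rep(Z, R):
    """Toy representation: sorted Z_I*Z_J/R_IJ over all atom pairs."""
    i, j = np.triu_indices(len(Z), k=1)
    vals = Z[i] * Z[j] / np.linalg.norm(R[i] - R[j], axis=1)
    return np.sort(vals)[::-1]

Z = np.array([8, 1, 1])                                   # water-like toy molecule
R = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])

perm = rng.permutation(len(Z))                            # relabel the atoms
theta = 0.7                                               # rotate about z, then translate
rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                [np.sin(theta),  np.cos(theta), 0.0],
                [0.0, 0.0, 1.0]])
R_moved = R @ rot.T + np.array([1.0, -2.0, 0.5])

x0 = sorted_pair_rep(Z, R)
print(np.allclose(x0, sorted_pair_rep(Z[perm], R[perm])))  # permutation invariance
print(np.allclose(x0, sorted_pair_rep(Z, R_moved)))        # rotation/translation invariance
```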
FFs are essentially a special case of the more general many-body expansion (MBE) in interatomic contributions, i.e., an extensive property of the system (e.g., total energy) is expanded in a series of many-body terms, namely, 1-, 2-and 3-body terms, · · · , i.e., where E (n) is the n-body interaction energy, R IJ is the interatomic distance between atom I and J, θ IJK is the angle spanned by two vectors R IJ and R IK . Other important properties can also be expressed in a similar fashion. By utilizing the basic variables in MBE, including distance, angles and dihedral angles in their correct physics based functional form (for instance, the aforementioned 1/R n dependence of 2-body interaction strength) one can already build some highly efficient representations such as BAML and SLATM (vide infra). This recipe relies heavily on pre-conceived knowledge about the physical nature of the problem. Numerical optimization It is possible that for some systems and properties, one does not know which features are of primary importance. And it is not an option to try all features one-by-one considering that there are so many possibilities. In such a situation, the least absolute shrinkage and selection operator (LASSO) can offer suitable relief. LASSO is basically a regression analysis method. Consider a simple linear model: the property of a system is a linear functions of its features, i.e., y = Xc, where X is a matrix with each of the N rows being the descriptor vector x i of length D for each training data points, c is the D-dimensional vector of coefficients, and y is the vector of training properties with the i-th property being y i . Our task is to find the tuple of features that yields the smallest sum of squared error: ||y − Xc|| 2 2 . Within LASSO, it is equivalent to a convex optimization problem, i.e., where the use of L 1 norm of regularization term is pivotal, i.e., smaller L 1 norm can be obtained when larger λ is used, thereby purging features of lesser importance. This approach has been exemplified for the prediction of relative crystal phase stabilities (rock-salt vs. zinc-blende) in a series of binary solids (Ghiringhelli et al 2015). Unfortunately, this approach is limited in that it works best for rather lowdimensional problems. Already for typical organic molecules, the problem becomes rapidly intractable due to coupling of different degrees of freedom. Under such circumstances, it appears to be more effective to adhere to the aforementioned rational design based heuristics, as manifested by the fact that almost all of the ad-hoc representations in the literature are based on manual encoding. An overview of selected representations Over the years, numerous molecular representations have been developed by several research groups working on QML. It's not our focus to enumerate all of them, but to list and categorize the popular ones. Two categories are proposed, one is based on many-body expansions in vectorial or tensorial form, such as Coulomb matrix ( Many-body potential based representation The Coulomb matrix (CM) representation was first proposed in the seminal paper by (Rupp et al 2012). It is a square atom-by-atom matrix with off diagonal elements corresponding to the nuclear Coulomb repulsion between atoms, i.e., CM IJ = Z I Z J /R IJ for atom index I = J. Diagonal elements approximate the electronic potential energy of the free atom, and is encoded as −0.5Z 2.4 I . 
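The Coulomb matrix just defined is straightforward to construct. The sketch below builds it for a toy methane-like geometry and also applies the row-norm sorting used to fix the atom indexing, which is described next in the text. The diagonal follows the sign convention quoted above (the more common choice, +0.5 Z^2.4, only differs by an overall sign of the diagonal); geometry and values are illustrative only.

```python
import numpy as np

def coulomb_matrix(Z, R):
    """Coulomb matrix: off-diagonal Z_I Z_J / R_IJ, diagonal ~ free-atom term."""
    n = len(Z)
    M = np.zeros((n, n))
    for I in range(n):
        for J in range(n):
            if I == J:
                M[I, J] = -0.5 * Z[I] ** 2.4       # sign convention as in the text above
            else:
                M[I, J] = Z[I] * Z[J] / np.linalg.norm(R[I] - R[J])
    return M

def sorted_cm(Z, R):
    """Sort rows/columns by descending row norm to fix the atom indexing."""
    M = coulomb_matrix(Z, R)
    order = np.argsort(-np.linalg.norm(M, axis=1))
    return M[np.ix_(order, order)]

Z = np.array([6, 1, 1, 1, 1])                         # methane-like toy geometry
R = np.array([[0, 0, 0], [0.63, 0.63, 0.63], [-0.63, -0.63, 0.63],
              [-0.63, 0.63, -0.63], [0.63, -0.63, -0.63]], dtype=float)
print(sorted_cm(Z, R)[:2])
```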
To enforce invariance of atom indexing, one can sort the atom numbering such that the sum of L 2 and L 1 norm of each row of the Coulomb matrix descends monotonically in mag-nitude. Symmetrical atoms will result in the same magnitude. A slight improvement over the original CM can be achieved by varying the power low of R IJ Huang and von Lilienfeld (2016). Best performance is found for an exponent of 6, reminiscent of the leading order term in the dissociative tail of London dispersion interactions. Thus, the resulting representation is also known as London matrix (LM). The superiority of LM is attributed to a more realistic trade-off between the description of more localized covalent bonding and long-range intramolecular non-covalent interactions (Huang and von Lilienfeld 2016). Fig. 2 Two body interaction is not enough to capture the physics of a pair of homometric molecules. In the figure, the energy of the two molecules are approximated as summation of LJ potentials with (dashed lines) or without 3-body ATM potentials (solid line) and plotted as a function of f , the scaling factor of all coordinates of the two molecules. LJ, ATM stands for Lennard-Jones and Axilrod-Teller-Muto vdW potential, respectively. The letters s and l labels the two existing different bond lengths, standing for 'short' and 'long'. The atom represented by a yellow filled circile with cross means out of plane. In spite of the great virtue of uniqueness encoded in CM, it generally suffers from a high offset of learning curve (see Fig. 3).In contrast, the bag-of-bond (BoB) representation (Hansen et al 2015), a bagged (vectorial) stripped down version of the CM, turns out to result in learning curves with lower off-set than CM (see Fig. 3). The BoB representation is a 1-D array, constructed as the concatenation of a series of bags (1-D arrays as well), each corresponds to a specific type of atomic pair, e.g., all C-O pairs (covalently and non-covalently bonded) in the molecule are grouped into the bag labeled as CO; similarly for all other combinations of elemental pairs. Each bag thus includes a set of nuclear Coulomb repulsion values. Each bag is then sorted in descending order. In cases that the same type of bag for two molecules has not the same size the smaller bag is padded with zeros. Through bagging the performance is improved in comparison to the CM matrix. But inevitably, crucial higher-order information, such as the angular part, is missing. Due to its exclusive reliance on sorted two-body terms, BoB is not a unique representation, as also manifested by the deterioration of its slope in the learning curve for large training set sizes (see Fig. 3). This loss of information can also be illustrated for a pair of homometric molecules (same atom types, same set of interatomic distances) as displayed in Fig 2. If we make a plot of the potential energy (approximated as a sum of Lennard-Jones potentials) curve of both planar and tetrahedral molecules as a function of the scaling factor f of all coordinates, we will end up with the same curve due to a spurious degeneracy imposed by lack of uniqueness. The BoB representation would not distinguish between these two molecules. Only after addition of higher order many-body potential terms (e.g., the 3-body Axilrod-Teller-Muto potential), the spurious degeneracy is lifted. 
Based on this simple example, an important lesson learned is that collective effects which go beyond pairwise potentials are of vital importance for the accurate modeling of fundamental properties such as energies. While adhering to the ideas of bagging for efficiency, a representation consisting of extended bags can be constructed, each may contain interatomic interaction potentials up to 3-and 4-body terms. BAML was formulated in this way, where 1) all pairwise nuclear repulsions are replaced by Morse/Lennard-Jones potentials for bonded/non-bonded atoms respectively; 2) The inclusion of 3-and 4-body interactions of covalently bonded atoms is achieved using periodic angular and torsional terms, with their functional form and parameters extracted from the Universal Force Field (UFF) (Huang and von Lilienfeld 2016;Rappe et al 1992). BAML achieves a noticeable boost of performance when compared to BoB or CM. Interestingly, the performance is systematically improving upon inclusion of higher and higher order many-body terms, as the proposed energy model is getting more and more realistic, i.e., increasing similarity to target. Meanwhile, and not surprisingly the uniqueness issue, existing in two-body representations such as BoB, is also resolved (see Fig. 4.4.1). The main drawback of BAML, however, is that it requires pre-existing force fields, implying a severe bias when it comes to new elements or bonding scenarios. It would therefore be desirable to identify a representation which is more compact and ab-initio in nature. The so-called SLATM representation (Huang and von Lilienfeld 2017) enjoys all these attributes. It has two variants: a local and a global one. The basic idea of SLATM is to represent an atom indexed I in a molecule by accounting for all possible interactions between atom I and its neighboring atoms through many-body potential terms multiplied by a normalized Gaussian distribution centered on the relevant variable (distance or angle). So far, 1-, 2-and 3-body terms have been considered. The 1-body term is simply represented by the nuclear charge, while the two-body part is expressed as 1 2 where δ (·) is set to normalized Gaussian function δ (x) = 1 σ √ 2π e −x 2 , g(r) is a distance dependent scaling function, capturing the locality of chemical bond and chosen to correspond to the leading order term in the dissociative tail of the London potential g(R) = 1 R 6 . The 3-body distribution reads 1 3 where θ is the angle spanned by vector R IJ and R IK (i.e.,θ IJK ) and treated as a variable. h(θ , R IJ , R IK ) is the 3-body contribution depending on both internuclear distance and angle, and is chosen in form to model the Axilrod-Teller- Muto Axilrod and Teller (1943); Muto (1943) vdW potential Now we can build the atomic version aSLATM for an atom I through concatenation of all the different many-body potential spectra involving atom I as displayed in equation (35) and (36). As for the global version SLATM, it simply corresponds to the sum of the atomic spectra. ). Note that the size and composition for molecules in all the three datasets are comparable, i.e., the dimensionality d's of these systems are similar, hence almost the same learning rates is observed for all representations with no (or less) suffer from uniqueness issue. For QM7b dataset, a much lower offset is shown as the relevant molecules are much more relaxed than those in QM9 and 6k isomers, thus given any representation, its target similarity is larger for this dataset compared to others. 
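Before turning to the performance comparison, a stripped-down sketch of the two-body part of such an atomic spectrum may make the construction concrete: each neighbor of atom I contributes a normalized Gaussian centered at the interatomic distance, weighted by the nuclear charges and by the scaling function g(r) = 1/r^6. Prefactors, broadening, and the radial grid below are simplified assumptions, not the exact (a)SLATM definition.

```python
import numpy as np

def two_body_spectrum(I, Z, R, grid, sigma=0.05):
    """Gaussian-broadened distance distribution around atom I, scaled by 1/r^6."""
    spec = np.zeros_like(grid)
    for J in range(len(Z)):
        if J == I:
            continue
        r_IJ = np.linalg.norm(R[I] - R[J])
        gauss = np.exp(-(grid - r_IJ) ** 2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))
        spec += 0.5 * Z[I] * Z[J] * gauss * grid**-6
    return spec

grid = np.linspace(0.5, 4.0, 200)                      # radial grid (illustrative units)
Z = np.array([8, 1, 1])
R = np.array([[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]])
print(two_body_spectrum(0, Z, R, grid).max())
```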
SLATM and aSLATM outperforms all other representations discussed so far, as evinced by learning curves shown in Fig. 4.4.1. This outstanding performance is due to several aspects: 1) almost all the essential physics in the systems is covered, including the locality of chemical bonds as well as many-body dispersion; 2) the inclusion of 3-body terms significantly improves the learning. 3) the spectral distribution of radial and angular feature now circumvents the problem of sorting within each feature bag, allowing for a more precise match of atomic environments. Most recently, the FCHL representation has been introduced (Faber et al 2017a). It amounts to a radial distribution in elemental and structural degrees of freedom. The configurational degrees of freedom are expanded up to three-body interactions. Four-body interactions were tested but did not result in any additional improvements. For known data-sets FCHL based QML models reach unprecedented predictive power and even outperform aSLATM and SOAP (see below). In the case of the QM9 dataset, for example, FCHL based models of atomization energies reach chemical accuracy after training on merely ∼1'000 molecules. Density expansion based representation Within the Smooth Overlap of Atomic Positions (SOAP) (Bartók et al 2013) idea of a representation, an atom I in a molecule is represented as the local density of atoms around I. Specifically, it is represented by a sum of Gaussian functions with variance σ 2 within the environment (including the central atom I and its neighboring atoms Q's), with the Gaussian functions centered on Q's and I: where r is the vector from the central atom I to any point in space, while R Q is the vector from atom I to its neighbour Q. The overlap of ρ I and ρ J then can be used to calculate a similarity between atoms I and J. However, this similarity not rotationally invariant. To overcome this, we can integrate out the rotational degrees of freedom for all 3-dimensional rotationsR, and thus the SOAP kernel is defined, To enforce the self-similarity to be normalized, the final SOAP similarity measure takes the form of The integration in equation (39) can be carried out by first expanding ρ I (r) in equation (38) in terms of a set of basis functions composed of orthogonal radial functions and spherical harmonics and then collect the elements in the rotationally invariant power spectrum, based on which k can be easily calculated. The interested reader is referred to (Bartók et al 2013). SOAP has been used extensively and successfully to model systems such as silicon bulk or water clusters, each separately with many configurations. These elemental or binary systems are relatively simple as the diversity of chemistries encoded by the atomic environments is rather limited. A direct application of SOAP to molecules where there are substantially more possible atomic environments, how-ever, yields learning curves with rather large off-sets. This is not such a surprise, as essentially the capability of atomic densities to differentiate between different atom pairs, atom triples, and so on, is not so great. This shortcoming remains even if one treats different atom pairs as different variables, as was adopted in (De et al 2016); averaging out all rotational degrees of freedom might also impede the learning progress due to loss of relevant information. To amend some of these problems, a special kernel, the RE-Match kernel (De et al 2016), was introduced. 
And most recently, combining SOAP with a multi-kernel expansion enabled additional improvements in predictive power (Bartók et al 2017). Training set selection The last section of this chapter deals with the question of how to select training sets. The selection procedure can have a severe effect on the performance. The predictive accuracy appears to be very sensitive on how we sample the training molecules for any given representation (or better ones). Training set selection can actually be divided into two parts: (1) how to create training set. The general principle is that the training set should be representative, i.e. it follows the same distribution as all possible test molecules in terms of input and output. This will formally prevent extrapolation and thereby minimize prediction errors. (2) how to optimize the training set composition. The majority of algorithms in literature deal with (2), assuming the existence of some large dataset (or a dataset trivial to generate) from which one can draw using algorithms such as ensemble learning, genetic evolution, or other "active learning" based procedures (Podryabinkin and Shapeev 2017). All of these methods have in common that they select the training set from a given set of configurations based only on the unlabeled data. This is particularly useful for "learning on the fly" based ab initio molecular dynamics simulations Csányi et al (2004), where expensive quantum-mechanical calculation are carried out only when the configurations are sufficiently "new". Step 1 stands out as a challenging task and few algorithms are competent. The most ideal approach is of course an algorithm that can do both parts within one step, the only competent method we know is the "amons" approach. We will elaborate on all these concepts below. Genetic optimization To the best of our knowledge, the first application of a GA for generation and study of optimal training set compositions for QML model was published in (Browning et al 2017). The central idea of this approach is outlined as follows. For a given set (S 0 ) containing overall N molecules, the GA procedure consists of three consecutive steps to obtain the "near-optimal" subset of molecules from S 0 for training the ML model (Browning et al 2017): (a) randomly choose N 1 molecules as a trial training set s 1 ; repeat M times. This forms a population of training sets, termed the parent population and labeled asŝ (1) = {s 1 , s 2 , . . . , s M }. (b) An ML model is trained on each s i , and then tested on a fixed set of out-of-sample molecules, resulting in a mean prediction error e i , which is assigned to s i as a measure of how fit s i is as the "near-optimal" training set and dubbed "fitness" . Therefore, the smaller e i is, the larger the fitness is. (c)ŝ (1) is consecutively evolved through selection (to determines which s i 's inŝ (1) should remain in the population to produce a temporarily refined smaller sett (1) ; a set s i with larger fitness means higher probability to be kept in t (1) ), crossover (to updateŝ (1) fromt (1) and the newŝ is labeled asŝ (2) with each set s i inŝ (2) obtained through mixing the molecules from two s i 's int (1) ), mutation (to change molecules in some s i 's inŝ (2) randomly to promote diversity inŝ (2) , e.g., replace -CH 2 -fragment by -NH-for some molecule. (d) Go to step (b) and repeat the process until there is no more change in the population and the fitness ceases to improve . We label the final updated trial training set asŝ. 
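A compact sketch of the GA loop (a)-(d) just outlined is given below, with synthetic data standing in for the molecular pool S0 and a small kernel ridge model providing the fitness; population size, mutation rate, and all other parameters are illustrative assumptions, not the settings used in the cited study.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the pool S0 and a fixed out-of-sample validation set.
X = rng.uniform(-1, 1, size=(400, 6))
y = np.sin(X.sum(axis=1))
Xval = rng.uniform(-1, 1, size=(200, 6))
yval = np.sin(Xval.sum(axis=1))

def kernel(A, B, sigma=2.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return np.exp(-np.clip(d2, 0.0, None) / (2.0 * sigma**2))

def fitness(idx):
    """Negative out-of-sample MAE of a small KRR model trained on subset idx."""
    c = np.linalg.solve(kernel(X[idx], X[idx]) + 1e-10 * np.eye(len(idx)), y[idx])
    return -np.abs(kernel(Xval, X[idx]) @ c - yval).mean()

N1, M, n_gen = 40, 20, 30          # training-set size, population size, generations
pop = [rng.choice(len(X), N1, replace=False) for _ in range(M)]   # step (a)
for _ in range(n_gen):
    pop.sort(key=fitness, reverse=True)                           # step (b): rank by fitness
    parents = pop[: M // 2]                                       # step (c): selection
    children = []
    while len(parents) + len(children) < M:                       # crossover: mix two parents
        a, b = rng.choice(len(parents), 2, replace=False)
        mix = np.unique(np.concatenate([parents[a], parents[b]]))
        child = rng.choice(mix, N1, replace=False)
        if rng.random() < 0.3:                                    # mutation: swap in a new molecule
            new = int(rng.integers(len(X)))
            if new not in child:
                child[int(rng.integers(N1))] = new
        children.append(child)
    pop = parents + children                                      # step (d): next generation
best = max(pop, key=fitness)
print("best fitness (negative MAE):", fitness(best))
```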
It's obvious that the molecules inŝ should be able to represent all the typical chemistry in all molecules in S 0 , such as linear, ring, cage-like structure and typical hybridization states (sp, sp 2 , sp 3 ) if they are abundant in S 0 . Once trained onŝ, the ML model is guaranteed to yield typically significantly better results as the fitness is constantly increasing. This is not useful since the GA "tried" this already; the usefulness has to be assessed by the generalizability ofŝ as training set to test on a new set of molecules not seen in S 0 . Indeed, as shown in (Browning et al 2017), significant improvements in off-sets can be obtained when compared to random sampling. While the remaining out-of-sample error is still substantial, this is not surprising due to the use of less advantageous representations. One of the key-findings in this study were that upon genetic optimization (i) the distance distributions between training molecules were shifted outward, and (ii) the property distributions of training molecules were fattened. Amons We note that the naive application of active learning algorithms will still result in QML models which suffer from lack of transferability, in particular when it comes to the prediction of larger compounds or molecules containing chemistries not present in the training set. Due to the size of chemical compound space this issue still imposes a severe limitation for the general applicability of QML. These problems can, at least partially, be overcome by exploring and exploiting the locality of an atom in molecule , resulting from the nearsightedness principle in electronic systems (Prodan and Kohn 2005;Fias et al 2017b). We consider a valence saturated query molecule for illustration, for which we try to build an "ideal" training set. As is well known, any atom I (let us assume a sp 3 hybridized C) in the molecule is characterized by itself and its local chemical envi-ronment. To a first order approximation, we may consider its coordination number (CN for short) to be a distinguishing measure of its atomic environment and we can roughly say that any other carbon atom with a coordination number of 4 is similar to atom I, as their valence hybridization states are all sp 3 . Another carbon atom with CN = 3 in hybridization state of sp 2 would be significantly different compared to atom I. It is clear, however, that CN as an identifier of atomic environment type is not enough: An sp 3 hybridized C atom in methane molecule (hereafter we term it as a genuine C-sp 3 environment) is almost purely covalently bonded to its neighbors, while in CH 3 OH, noticeable contributions from ionic configurations appear in the valence bond wavefunction due to the significant electronegativity difference between C and O atoms. Thus one would expect very different atomic properties for the sp 3 -C atoms in these two environments as manifested, for instance, in their atomic energy, charge, or 13 C-NMR shift. Alternatively, we can say that oxygen as a neighboring atom to I has perturbed the ideal sp 3 hybridized C to a much larger extent in CH 3 OH than the H atom has in methane. To account for these differences, we can simply include fragments which contain I as well as all its neighbors. Thus we can obtain a set of fragments, for each of which the bond path between I and any other atom is 1. 
Extending this kind of reasoning to the second neighbor shell, we can add new atoms with a bond path of 2 relative to atom I in order to account for further, albeit weaker, perturbations of atom I. As such, we can gradually increase the size of the included fragments (characterized by the number of heavy atoms) until we believe that all effects on atom I have been accommodated. The set of unique fragments can then be used as a training set for a fragment-based QML model. Note that we saturate all fragments with hydrogen atoms. These fragments can be regarded as effective quasi-atoms, defined as atoms in molecules, or "am-ons". Since amons repeat throughout chemical space, they can be seen as the "words" of chemistry (target molecules being "sentences") or as the "DNA" of chemistry (target molecules being genes and properties their function). Given the complete set of amons, any specific, substantially larger query molecule can then be treated. Used in conjunction with an atomic representation such as aSLATM or FCHL, amons enable a kind of chemical extrapolation which holds great promise to more faithfully and more efficiently explore vast domains of chemical space. To demonstrate the power of amons, we show the example of predicting the potential energy of a molecule shown as an inset in Fig. 4. With amons as the training set, chemical accuracy (1 kcal/mol) is reached after training on only 40 amons (with amons being no larger than 6 heavy atoms). Sampling amons at random, the slope of the learning curve is substantially worse. [Fig. 4: Comparison of the learning curves obtained for one molecule (see the inset) from random selection of the training set and from amons, respectively. For each red scatter point, errors were averaged over 100 random samplings.] Conclusion We have discussed primarily the basic mathematical formulations of all typical ingredients of quantum machine learning (QML) models which can be used in the context of quantum mechanical training and testing data. We explained and reviewed why ML models can be fast and accurate when predicting quantum mechanical observables for out-of-sample compounds. It is the authors' opinion that QML can be seen as a very promising approach, enabling the exploration of systems and problems which hitherto were not amenable to traditional computational chemistry methods. In spite of the significant progress made within the last few years, the field of QML is still very much in its infancy. This should be clear when considering that the properties that have been explored so far are rather limited and relatively fundamental. The primary focus has been on ground state or local minimum properties. Applications to excited states still remain a challenge (Ramakrishnan et al 2015b), as do conductivity, magnetic properties, or phase transitions. We believe that new and efficient representations will have to be developed which properly account for all the relevant degrees of freedom at hand.
A randomized, double‐blind, placebo‐controlled, dose‐finding trial with Lolium perenne peptide immunotherapy Abstract Background A novel subcutaneous allergen immunotherapy formulation (gpASIT+™) containing Lolium perenne peptides (LPP) and having a short up‐dosing phase has been developed to treat grass pollen–induced seasonal allergic rhinoconjunctivitis. We investigated peptide immunotherapy containing the hydrolysate from perennial ryegrass allergens for the optimum dose in terms of clinical efficacy, immunogenicity and safety. Methods This prospective, double‐blind, placebo‐controlled, phase IIb, parallel, four‐arm, dose‐finding study randomized 198 grass pollen–allergic adults to receive placebo or cumulative doses of 70, 170 or 370 μg LPP. All patients received weekly subcutaneous injections, with the active treatment groups reaching assigned doses within 2, 3 and 4 weeks, respectively. Efficacy was assessed by comparing conjunctival provocation test (CPT) reactions at baseline, after 4 weeks and after completion. Grass pollen–specific immunoglobulins were analysed before and after treatment. Results Conjunctival provocation test (CPT) response thresholds improved from baseline to V7 by at least one concentration step in 51.2% (170 μg; P = .023), 46.3% (370 μg), and 38.6% (70 μg) of patients receiving LPP vs 25.6% of patients receiving placebo (modified per‐protocol set). Also, 39% of patients in the 170‐μg group became nonreactive to CPT vs 18% in the placebo group. Facilitated allergen‐binding assays revealed a highly significant (P < .001) dose‐dependent reduction in IgE allergen binding across all treatment groups (70 μg: 17.1%; 170 μg: 18.8%; 370 μg: 26.4%). Specific IgG4 levels increased to 1.6‐fold (70 μg), 3.1‐fold (170 μg) and 3.9‐fold (370 μg) (mPP). Conclusion Three‐week immunotherapy with 170 μg LPP reduced CPT reactivity significantly and increased protective specific antibodies. | INTRODUCTION Advances in allergen immunotherapy (AIT), particularly in subcutaneous (SCIT) and sublingual immunotherapy (SLIT), aim to further reduce safety concerns for severe systemic reactions (SRs) and anaphylaxis as well as to increase real-life effectiveness, particularly by improving compliance and acceptance among patients through shorter treatment with a more convenient product. 1 To achieve these goals, novel therapeutics have been developed to overcome the limitations of natural allergens' intrinsic features. Recent investigations on peptide immunotherapy focus on synthetic peptide immunoregulatory epitopes (SPIREs) containing T cell-reactive short peptides 2 and longer continuous overlapping peptides (COPs) 3 of up to 80 amino acids. 4 Sets of long COPs that encompass all potential T-cell epitopes without IgE conformations induce IgG 4 but also evoke late asthmatic responses at high concentrations. 4,5 Mixtures containing grass allergens from the Pooideae subfamily have been shown to possess no advantage over single grass allergen extracts, which produced completely cross-reactive IgG 4 and were substituted for multiple grass subfamilies. 6,7 Perennial ryegrass (Lolium perenne, L. perenne) contains group 1, 2/3, 4, 5, 11, 12 and 13 allergens. 6 Lolium perenne, like the other members of the Pooideae subfamily, possesses strong cross-allergenicity, which is attributable to the high homology of groups 1, 2/3 and 5. 8 In this trial, different lengths of L. perenne peptides (LPPs) obtained from enzymatic hydrolysis were administered subcutaneously in a short up-dosing phase. 
We determined the optimum dose of LPP in terms of safety as well as clinical and immunological effects in patients with seasonal allergic rhinoconjunctivitis (SAR). | Trial design This randomized, parallel-group, double-blind, placebo-controlled, dose-finding trial was conducted at 23 outpatient study centres. Patients were screened in mid-August 2014; further details on the enrolled participants are provided in Table S1 in this article's Online Repository. | Study medication The adjuvant-free immunotherapy peptides used in this trial were extracted from whole ryegrass pollen by enzymatic digestion and formulated for subcutaneous injections according to good manufacturing practice requirements (see Online Repository Methods) as described by Shamji et al. 9 ASIT biotech s.a. (Brussels, Belgium) provided labelled LPP and placebo treatment kits (per visit and treatment number). | Planned interventions and timing Patients received 10 subcutaneous injections of placebo or of increasing doses of peptides at 5 visits (V2-V6) to participating study centres within 4 weeks. The first injection at each visit was given in one arm and, if no major local or systemic allergic reaction occurred within 30 minutes, the second injection was given in the other arm. Patients stayed at the study centre for another 30 minutes and were monitored closely. Injection volumes increased for all patients according to Table 1. Wheals and redness reactions were measured 30 minutes after each injection and recorded by the patient in a diary on the next 3 evenings. SRs were classified according to the German anaphylaxis guideline. 10 Investigators issued 3 tablets of rescue medication (cetirizine dihydrochloride, 10 mg per os, once daily) at each visit to all patients to relieve mild local reactions after injections if necessary. Doses were adjusted as follows: if a wheal measuring 5-8 cm in diameter appeared within 30 minutes after an injection or if an SR grade I occurred, the same dose was repeated for the following injection. If the wheal diameter was >8 cm 30 minutes after an injection or if an SR grade II occurred, the dose was reduced by one step for the next injection. Patients were to be excluded from further participation in the treatment if an SAE or an SR grade III or IV occurred. | Conjunctival provocation test Conjunctival provocation tests (CPTs) were conducted 11 and recorded 12 as described before. The allergen extract ALK-lyophilized grass (ALK-Abelló, Wedel, Germany) was used in concentrations of 100, 1000 and 10 000 SQ-U/mL. CPT responses ≥ stage II according to the Riechelmann scale 11 were considered positive. If baseline CPT responses at V1 and V2 differed by one concentration stage, the higher concentration step was used for further analyses. CPTs were performed at baseline, V6 and V7. At V2 and V6, CPTs were conducted before the study medication was administered. The CPT score was calculated as follows: 0 = no reaction at all, 1 = reaction at 10 000 SQ-U/mL, 2 = reaction at 1000 SQ-U/mL and 3 = reaction at 100 SQ-U/mL. To calculate the mean composite score, the CPT scores of all grass allergen concentrations used in the individual tests were combined as described before. 13,14 Conjunctival provocation test (CPT) results are a predictive surrogate marker for SAR severity, as reduced CPT reactivity after preseasonal SLIT predicted significantly fewer seasonal SAR symptoms, less rescue medication use and an increased number of well days. 14
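Purely as an illustration of the dose-adjustment rule given under "Planned interventions and timing" above, the decision logic can be written as a small function; the function and argument names, and the default branch of continuing the scheduled up-dosing when no reaction criterion is met, are ours and not part of the published study protocol.

def adjust_next_dose(wheal_cm, sr_grade):
    """Dose adjustment after an injection, following the rule described above.
    wheal_cm : wheal diameter (cm) measured 30 minutes after the injection
    sr_grade : systemic reaction grade (0 = none, 1-4 = grade I-IV)
    """
    if sr_grade >= 3:                  # SR grade III/IV (or an SAE): stop treatment
        return "discontinue treatment"
    if wheal_cm > 8 or sr_grade == 2:  # large wheal or SR grade II
        return "reduce dose by one step"
    if 5 <= wheal_cm <= 8 or sr_grade == 1:
        return "repeat same dose"
    return "continue scheduled up-dosing"

For example, a 6-cm wheal with no systemic reaction maps to repeating the same dose at the next injection.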
| Study endpoints | Efficacy endpoints The primary efficacy endpoint was defined as the proportion of patients whose CPT reactivity to the different allergen extract concentrations decreased from baseline to V7. The secondary efficacy endpoints included the proportion of patients whose CPT reactivity to the different allergen extract concentrations decreased from baseline to V6, composite and CPT score reductions, as well as immunological changes. | Immunological responses Sera were collected from all patients at the screening visit (V1) and at the follow-up visit (V7, after finishing treatment). Immunoglobulin analyses measured grass pollen-specific IgG (sIgG), IgG4 (sIgG4) and IgE (sIgE) levels using the ImmunoCap® system (Pharmacia AB, Uppsala, Sweden). The production of blocking antibodies was assessed using a functional assay. 15-17 Relative allergen-IgE complex binding to CD23, detected in the presence of patient and indicator serum, was expressed as the percentage of binding observed in a reference condition with indicator serum only. The production of blocking antibodies was reflected by a decrease in complex binding. | Statistics The sample size was calculated under the assumption that a maximum of 40% of placebo group patients and 75% or more of the actively treated patients would improve. 18 Given a 5% error and a power of 90%, Wilson's method estimated a group size of 46. Statistical analyses were performed using SPSS version 22 (IBM Corp., Armonk, NY, USA), and data were described as means and standard errors of the mean. P values vs placebo were obtained using the two-tailed Fisher's exact test or the two-tailed Mann-Whitney U test, with P < .05 considered significant. A group sequential analysis was conducted under the null hypothesis that there would be no difference between the treatment groups regarding the proportion of patients with a reduction in CPT reactivity to a certain concentration of grass pollen allergen between baseline and V7. | Reduction in CPT reactivity from baseline to V7 In the mPP set, exploratory analyses showed the most prominent decrease in CPT reactivity from baseline in the group receiving 170 μg LPP; CPT reactivity showed a significant decrease (mPP set: P = .004; mITT set: P = .008) in comparison with placebo (Figures 2B and S1B). | Patients no longer reacting to conjunctival provocation In the mPP set, the percentages of patients who no longer reacted to conjunctival provocation were 39.0% (370- and 170-μg groups), 27.9% (70-μg group) and 18.0% (placebo) after treatment completion (Figure 3). In the mITT set, the proportion of patients no longer reacting to conjunctival provocation at V7 was highest in the group receiving 170 μg and lowest in the placebo group (Figure S2). | Immunological changes An increase in sIgE levels was observed from baseline to V7 in the groups receiving LPP. At V7, these levels were significantly higher in the LPP groups than the sIgE level in the placebo group (P < .021) (Figure 4A, Table S2). Grass pollen-specific IgG levels also increased in the LPP groups from V1 to V7. At V7, sIgG levels were significantly higher in the LPP groups than the level in the placebo group (P < .009). Specific IgG levels in the placebo group remained unchanged (Figure 4B, Table S2). Grass pollen-specific IgG4 levels increased from V1 to V7 in the LPP groups but remained unchanged in the placebo group.
At V7, these levels were significantly higher in the LPP groups than the sIgG4 level in the placebo group (P < .001) (Figure 4C, Table S2). | DISCUSSION The aim of this study in patients with allergic rhinitis was to establish the optimum dose of LPP in terms of clinical efficacy, immunogenicity and safety. A study comparing SCIT and SLIT in terms of immunogenicity deduced that the maximum changes in sIg and blocking antibodies were reached after 3 months of treatment. Facilitated allergen binding (FAB) inhibition after 1 month was less than 5% for SCIT and nonexistent for SLIT. 30 Nevertheless, in our study, LPP immunotherapy led to FAB inhibition that after 4 weeks was 26.4% greater than that at baseline (370-μg group). Shamji et al 16 The absence of a significant correlation between blocking antibodies and CPT scores in individual treated groups in this study may be explained by the fact that the CPT data were categorical and in a low range (−1, 0, 1, 2, 3) while those of specific IgG4 and FAB were continuous. Blocking antibodies have been shown to correlate with clinical response following long-term treatment. 16 The tolerability and safety of the peptide treatment are as important as its clinical or immunological efficacy. The safety profile of the peptides used in this study was comparable to that of conventional SCIT: a Cochrane meta-analysis of SCIT showed that 19% of patients experience SRs. 33 This trial showed a lower rate of SRs than all the above trials, with 10.1% of the patients experiencing such events. Although that figure corresponds to 1.36% of all injections, it is important to note that this trial followed a rapid up-titration protocol, which is more likely to elicit SRs than dose maintenance phases having longer up-dosing schedules. Most TEAEs occurred at doses below 100 μg and no late SRs were observed. As for the tolerability in terms of local reactions, all patients in our study reported mild local erythema and wheals at the injection site within the first 30 minutes at least once (Figure S6).
Physical workload and glycemia changes during football matches in adolescents with type 1 diabetes can be comparable Aims To analyze physical performance and diabetes-related outcomes in adolescents with type 1 diabetes (T1DM) during two semi-competitive football matches utilising precise physical activity monitoring. Methods The study was conducted during an annual summer camp for adolescents with T1DM. After physical examination and glycated hemoglobin measurement, 16 adolescent players completed Cooper's 12-min running test and, in the following days, took part in two football matches while wearing heart rate (HR) monitors coupled with global positioning system (GPS) tracking. Results Both matches were comparable in terms of covered distances, number of sprints, achieved velocities and heart rate responses. During both games, capillary blood lactate increased significantly (Match 1: 1.75 ± 0.16–6.13 ± 1.73 mmol/l; Match 2: 1.77 ± 0.18–3.91 ± 0.63 mmol/l, p = 0.004). No significant differences in blood glucose were observed between the matches (p = 0.83) or over each match (p = 0.78). Clinically significant hypoglycemia (< 54 mg/dl) occurred in two children during the first match. None of the players experienced severe hypoglycemia. Despite similar workloads, players consumed significantly fewer carbohydrates during Match 2 [median difference: − 20 g (25–75%: − 40 to 0), p = 0.006]. Conclusions HR monitoring and GPS-based tracking can effectively parameterize physical activity during a football match. In T1DM patients, exercise workload and glycemic changes during similar matches are comparable, which provides an opportunity to develop individual recommendations for players with T1DM. Electronic supplementary material The online version of this article (10.1007/s00592-019-01371-0) contains supplementary material, which is available to authorized users. Introduction Physical activity offers great benefits for patients with type 1 diabetes (T1DM), and thanks to advances in pharmacotherapy and the implementation of new technological solutions and education, different sports disciplines are now becoming more accessible for them. T1DM patients currently participate in all types of sports activities, even extreme sports [1-3]. However, T1DM patients who engage in long or intensive exercise still face the challenge of high glycemic variability and, more importantly, an increased risk of hypo- and sometimes hyperglycemia, which can limit an athlete's motivation and performance. Therefore, it is crucial to prevent acute and potentially serious complications of these conditions (severe hypoglycemia and ketoacidosis) to enable T1DM patients to safely participate in sports training and competitions. Sporting activities are particularly important for children and adolescents with T1DM, for whom physical activity is an integral element of somatic development [4]. However, maintaining normoglycemia during exercise in this age-group is very challenging due to the high frequency of impromptu activities and changeable workloads. Although continuous subcutaneous insulin infusion (CSII) with modern personal insulin pumps has improved the precision of adjusting insulin doses to patients' physical effort and the use of continuous glucose monitoring systems has provided detailed insight into glucose dynamics, maintaining safety during exercise still requires caregivers' and patients' commitment and thoughtful physicians' advice.
Several guidelines for T1DM patients regarding exercise have been recently published [5-8]. Although the basic aspects of activity management (i.e., insulin dose reduction and additional carbohydrate consumption) are well founded, many recommendations lack extensive scientific evidence and are based on single studies or experts' opinions [7]. It has been proposed that, besides the above-mentioned conventional factors, a multitude of additional circumstances may affect the glycemic response to exercise, including the amount of circulating insulin ("insulin on board"), the glucose concentration trends, the composition and volume of a meal consumed before exercise, and also the intensity, type and duration of exercise [8]. The pathophysiology of T1DM can partly explain why the generally accepted rules do not always allow patients to achieve and maintain normal blood glucose levels. In most patients, the insulin concentration in blood can be too high with respect to glucose utilization, because the insulin level cannot be suddenly reduced during physical exertion as it is in healthy people [9]. Sometimes the situation is the opposite: the amount of circulating blood insulin, especially during maximal, anaerobic efforts, may be too low, which may result in prominent hyperglycemia. Moreover, studies have shown that each patient has specific physical activity-related needs, and this is why recommendations should be individually tailored [10]. However, to date, these adjustments were made mostly based on patient or practitioner experience [12]. To support the creation of more effective guidelines, we propose to improve the quality of collected data on physical activity by precise quantification of exercise workloads. For sports practiced in the field, this might be achieved by using heart rate (HR) monitoring and global positioning system (GPS) tracking, which provide objective information on workload intensity and kinematics (covered distance, maximum and mean velocity). Coupled with blood glucose measurements or data from continuous glucose monitoring (CGM), these systems could help in designing repeatable training plans whose predictable glycemic excursions can be managed accordingly. In children, such an approach may also uncover previously unrecognized patterns of exercise-glycemia relations. GPS tracking and HR monitoring are already widely used in professional football (US soccer) [11], which is the sport that teenagers in many countries practice most commonly. The intensity of physical effort during a football match is hard to predict, and retrospective analysis after such a match is also difficult to perform, which makes blood glucose control during football training and competitions a considerable challenge. The aim of the study was to analyze physical performance and diabetes-related outcomes in adolescents with T1DM during semi-competitive football matches. Therefore, we performed a proof-of-concept study assessing the feasibility of using GPS-tracking and HR-monitoring technology during two football matches played by teenagers with T1DM, and we attempted to characterize the intensity of physical activity and glycemic excursions during the games. Materials and methods The study was conducted during an annual summer camp for children and adolescents with T1DM. Males aged 12-17 years were recruited to play two football matches under HR monitoring and GPS tracking. Children remained under medical supervision for the duration of the camp (July 11 to July 24, 2017).
The study protocol was approved by the Ethics Committee of the Medical University of Lodz (NO RNN/195/17/KE) and therefore was performed in accordance with the ethical standards laid down in the 1964 Declaration of Helsinki and its later amendments. Informed written consent was obtained from parents or participants > 16 years old, and informed assent from the rest of the children. All participants were treated with functional intensive insulin therapy by CSII or multiple daily injections (MDIs). During the camp and while playing the matches, all children performed self-measurements of capillary blood glucose to adjust insulin doses. They also used continuous glucose monitoring (Guardian Connect CGM, Medtronic, Northridge, CA), sensor-augmented insulin pumps (MiniMed Real-Time, MiniMed VEO, MiniMed 640G, Medtronic, Northridge, CA) or flash glucose monitoring (Libre, Abbott Diabetes Care, Alameda, CA) to continuously record glucose levels in the interstitial tissue and to have access to glucose concentration trend arrows when necessary. Physicians obtained medical history and performed a physical examination for all participants. Body height and weight were measured to calculate body mass index (BMI). Capillary blood samples were collected for HbA1c assessment [D-10 Hemoglobin A1c Program (Bio-Rad Laboratories, Hercules, CA, Bio-Rad, Marnes-la-Coquette, France)]. At the beginning of the camp, all participants completed Cooper's 12-min running test with chest-strapped heart rate (HR) monitors according to the protocol previously described to assess maximum heart rate and physical capacity [13,14]. Two semi-competitive football matches (each lasting 80-min + a 10-min break) were organized on the 70 × 90 m football field. The matches were played within 4 days, at the same time of a day. Twenty-two participants agreed to take part in the study. Two teams of 11 children (nine playing and two in reserve) were appointed. Out of 18 players from the initial team, 16 (excluding goalkeepers) wore HR monitors coupled with GPS tracking, which allowed for continuous tracking of each player's position and movement (Polar Team Pro System, Polar Electro OY, Kempele, Finland). Data collected for each participant included distance covered, mean and maximum velocity achieved, number of sprints performed and heart rate response. Sprints were defined as accelerations of ≥ 2 m/s 2 . Retrospective analysis of physical activity workload was performed with the Polar Flow, Polar Electro OY (Kempele, Finland) software. Diabetes control during matches Preparation for the matches and the matches themselves were supervised by six medical staff members. Before the breakfast at 8.00 in the morning, children administered insulin according to carbohydrates exchange factor and correction factor without any dose reduction. The matches began 2.5 h later. Before the matches, during the breaks, and at the end of each match, blood glucose (BG) was measured using a Contour Plus One Glucose meter (Ascensia Diabetes Care, Basel, Switzerland). The BG target range set in the study protocol was 100-250 mg/dl. When BG was < 100 mg/ dl, participants received 10-25 g of simple carbohydrates depending on CGM glucose trend arrows and body weight. When BG was from 100 to 150 mg/dl, oral carbohydrates were not administered unless CGM glucose trend arrows showed glucose decrease. If the BG before or at the interval of the match was > 250 mg/dl, the correction dose was administered. 
(The regular correction dose calculated based on the sensitivity factor was reduced by 50%.) If BG was > 300 mg/dl, blood beta-hydroxy-butyrate concentration was measured with Optium Xido™ test strips (Abbott Diabetes Care, Alameda, CA). In children treated with CSII, insulin pumps were disconnected from their bodies for the time of the matches. In children treated with MDI, the basal insulin dose administered in the evening before matches was not reduced. Capillary lactate concentrations were assessed before and after each match (Lactate Scout, EKF Diagnostics, Germany) using the enzymatic-amperometric method (SensLab GmbH, Leipzig, Germany). All data regarding monitored and measured parameters, insulin doses, and the amount of carbohydrates consumed were recorded during the games by the medical team in participants' monitoring charts. Statistics Player characteristics are presented as mean ± standard deviation. HRs achieved during football matches were recalculated as % of the maximum HR presented during Cooper's test to avoid age-specific differences in raw HR. The authors acknowledge that the studied sample of adolescents was relatively small, which could result in underpowered analyses; however, some precautions were taken to provide valid results. The changes in BG and lactate concentration were assessed using ANOVA with repeated measures and a paired design, which accounted for the fact that the same players participated in both games. Comparisons between the games were made with the paired t test or Wilcoxon's paired test, and the effects were presented as the mean (± SD) or median (+ interquartile range) difference for each player. This allowed for the assessment of repeatability of exercise workload during two matches for the same player. Results Out of 16 monitored players, two suffered minor injuries, and as a result, 14 adolescents took part in both matches and provided complete data eligible for analysis. Their mean age was 14.9 ± 1.4 years and mean diabetes duration was 7.2 ± 3.9 years. Eleven (79%) of them were treated with CSII and the remaining three used MDI. Their mean HbA1c concentration was 7.2 ± 0.6% (52 ± 0.32 mmol/mol), indicating good glycemic control. The players' mean BMI z-score was 0.57 ± 0.87. During Cooper's test, participants presented average physical capacity as measured by the distances (in metres) covered over 12 min, expressed in sex- and age-adjusted z-scores (mean z-score −0.04 ± 0.57, corresponding to the 49th ± 20 percentile for the reference population). Physical capacity did not correlate with glycemic control expressed as HbA1c (r = −0.13, p = 0.66). A significant correlation was found between physical capacity and age (r = −0.53, p = 0.05) and between physical capacity and BMI z-score (r = −0.7, p = 0.006). Maximum HR reached by players during Cooper's test ranged from 189 beats/min to 221 beats/min; two players surpassed their Cooper's test maximum HR during either of the matches. Mean HR achieved during matches was from 62 to 84% of players' individual maximum Cooper's test HR, and maximum HRs registered during matches were comparable to individual Cooper's test maximum HR values (Fig. 1). Both matches provided comparable workloads for each player in terms of distance covered, number of sprints performed, mean and maximum velocity achieved and HR response (Table 1, Supplementary Fig. 1).
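To make the workload comparison above concrete, the following minimal sketch shows how such per-match metrics could be derived from raw speed and heart-rate samples. The sprint threshold (acceleration ≥ 2 m/s²) and the expression of HR as a percentage of the Cooper's-test maximum mirror the definitions in the Methods, but the implementation itself is ours and is not the Polar software used in the study.

import numpy as np

def workload_metrics(t_s, v_ms, hr_bpm, hr_max_cooper, sprint_thr=2.0):
    """Illustrative post-processing of one player's GPS/HR trace from a match.
    t_s: sample times (s); v_ms: speed (m/s); hr_bpm: heart rate (beats/min);
    hr_max_cooper: the player's maximum HR from Cooper's test."""
    t = np.asarray(t_s, dtype=float)
    v = np.asarray(v_ms, dtype=float)
    hr = np.asarray(hr_bpm, dtype=float)
    dt = np.diff(t)
    distance_m = float(np.sum(v[:-1] * dt))      # covered distance
    accel = np.diff(v) / dt                      # sample-to-sample acceleration
    above = accel >= sprint_thr                  # samples above the sprint threshold
    # count contiguous runs above the threshold as single sprints
    n_sprints = int(np.sum(above[1:] & ~above[:-1]) + above[0])
    return {
        "distance_m": distance_m,
        "mean_speed_ms": float(v.mean()),
        "max_speed_ms": float(v.max()),
        "n_sprints": n_sprints,
        "mean_hr_pct_max": float(100 * hr.mean() / hr_max_cooper),
        "max_hr_pct_max": float(100 * hr.max() / hr_max_cooper),
    }

Computing these metrics per player and per match is what allows the paired comparisons described in the Statistics section.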
The only relationship between the measured parameters of workload during matches, age, BMI z-score, glycemic control measured by HbA1c and physical capacity estimated by Cooper's test was a positive correlation between age and maximum velocity achieved during the first match (r = 0.62, p = 0.017). During both matches, a significant rise in capillary blood lactate was observed [mean baseline 1.76 mmol/l (SE: 0.15) to mean after matches 5.02 (SE: 0.96), p = 0.004]. For each player, the increase was similar in both matches (p = 0.24) (Fig. 2). No significant differences in BG were observed when BG values before, at half of the match and after the match were compared between the two matches (p = 0.83) or when BG values before, at half and after were compared for each match individually (p = 0.78) (Fig. 3). Glucose alert values (≤ 70 mg/dl) were observed in four children during the first match and in two others during the second one (p for proportion = 0.01). Clinically significant hypoglycemia events (< 54 mg/dl) were noted in two players during the first match and in none during the second match. No player suffered severe hypoglycemia during the matches or over the 24-h follow-up. CGM data were incomplete, which made statistical analysis impossible. Despite comparable workload characteristics of the matches, players consumed significantly fewer carbohydrates during the second match (median difference for each player: − 20 g [25-75%: − 40 to 0], p = 0.006) (Fig. 4). No correlations were found between exercise- and diabetes-related parameters (Table 2). Discussion This is the first study to evaluate physical activity workload during football matches in T1DM adolescents with GPS tracking. Football is a mixed-type, i.e., aerobic-anaerobic, physical activity, and the energy expenditure of an individual player may vary depending on age, physical fitness and position on the field [15]. Without a precise measurement of the workload, these differences are difficult to quantify [16,17]. GPS tracking is used by professional sport clubs to estimate different aspects of workload during training sessions and matches. It also allows physiologists and sports medicine specialists to assess the fitness level of football players in order to better adapt personal training plans [18]. No studies have precisely assessed the intensity of physical exertion during football matches in adult or adolescent T1DM patients so far. Thus, the only point of reference is the report on competitive adolescent players without diabetes [15]. Our patients covered about 75% of the distance reported for competitive adolescent players without T1DM [15]. GPS tracking enabled us to accurately measure not only the covered distances, but also the achieved velocities, accelerations and HR. Average HR obtained by the players indicated intense physical effort during both matches and corresponded well with the results obtained in healthy teenagers [19]. We showed that for each player both matches provided comparable workloads. Both matches could be considered as mixed aerobic-anaerobic activity periods based on the rise in capillary blood lactate concentrations [20]. Furthermore, a corresponding number of performed sprints suggests that the anaerobic component was similar in both games. The partly anaerobic character of the activity may explain why only isolated episodes of hypoglycemia were recorded during both matches, as it has been shown that anaerobic exercise increases rather than decreases BG levels.
We must also be aware that the same physical effort can cause different glycemic changes in individual patients [10]. Overall, glycemia during both matches remained stable (p = 0.78); the differences in blood glucose dynamics between the matches were also not significant (p = 0.28). From the perspective of diabetes management, the sparring events differed only in the amount of simple carbohydrates consumed by players. This difference did not depend on any measured performance metrics. Thus, it can be speculated that the smaller carbohydrate consumption during the second game could have resulted from the players' adaptation to physical activity during the camp. In addition, the medical staff had more experience and less fear of hypoglycemia during the second match and could serve players somewhat smaller simple carbohydrate portions (i.e., falling into the lower doses recommended by the protocol). In both matches, the average amount of carbohydrates consumed was in the range recommended for mixed training lasting longer than 60 min under low blood insulin concentration conditions [21]. Overall, while the two matches provided comparable physical workloads, they also resulted in similar glycemic outcomes in individual players. Moreover, most of the BG values were in the target range set in the study protocol (100-250 mg/dl). This observation suggests that, in the case of young T1DM football players, developing individual recommendations aimed at optimizing glycemic control during matches is possible based on the parameters of training units. Information on the intensity of the planned exercise or sport is important because an individualized approach is required for the modifications of insulin therapy proposed by diabetologists for periods of physical activity. GPS tracking systems are not available for most football players on a daily basis. However, thanks to the demonstrated repeatability of the matches, it may be possible to analyze a series of trainings or games and provide a player with customized guidelines. Furthermore, HR monitoring may be incorporated into algorithms that will modify the insulin delivery [22]. The strength of this study is that, for the first time in patients with T1DM, GPS tracking was used during football matches, which enabled precise measurement of players' physical activity workload. Performing both football matches in a short time interval and under the same general conditions provided an opportunity to better parameterize the physical activity of individual players. This study has some limitations. Firstly, not all football match-related activities can be tracked by GPS readings. Thus, the workload associated with jumping, kicking a ball or tackling actions could only be reflected by HR monitoring [23]. We were also unable to perform glucose variability analysis based on CGM during the matches. The players used different types of continuous glucose monitoring systems, and moreover, the data from them were partially lost when the pumps were disconnected to prevent patients' injury or equipment damage during games. Due to a small number of participants and differences in their characteristics (age, BMI, physical fitness, possibly also personal motivation for playing), the study lacked statistical power to pinpoint which measures of workload are most closely related to glycemic changes. A study including only competitive adolescent football players with T1DM is currently underway to mitigate this limitation.
Despite the difficulties in adjusting insulin therapy to physical exertion, children with diabetes should be encouraged to practice sports. The mixed nature of the physical exercise, repetitive physical effort and low risk of hypoglycemia make football a sport that can be recommended for children and adolescents with T1DM. Furthermore, Cvetkovic et al. [24] showed that recreational football and high-intensity interval training elicited improvements in all muscular and cardiorespiratory fitness measures in adolescents, which may bring T1DM patients additional health benefits. Perspectives Estimation of physical effort is essential for T1DM patients to properly adjust insulin dosage and carbohydrate intake to the patient's needs before, during and after practicing sports. One of the biggest challenges for young T1DM athletes is the adaptation of therapeutic decisions to competitions, as competition days may differ significantly from routine trainings due to the interplay of emotions and stress hormones with glucose homeostasis. However, to our knowledge no study has attempted to characterize glycemia dynamics during competitive or even semi-competitive play. Therefore, our most important result seems to be the observation that both matches were, for each individual, comparable in terms of kinematic parameters (distances, velocities), physical workloads (HRs), metabolic responses (lactate concentrations) and, to some extent, glycemic responses. This suggests that developing individual recommendations for each football player for a certain type of physical activity is feasible. Further studies should be aimed at developing such personalized insulin therapy adaptation plans and testing them. Conclusions HR monitoring coupled with GPS-based tracking effectively parameterized physical activity during a football match in T1DM children. T1DM patient workload and blood glucose changes during matches were comparable, which provides the opportunity to develop individual recommendations for athletes with T1DM. Adequate titration of carbohydrates before and during a football match can help keep BG within or close to the target range.
Ischemic Stroke in Patients with COVID-19 Disease: A Report of 10 Cases from Iran Ischemic stroke seems to be one of the most serious neurologic complications in patients with COVID-19 infection. Herein, we report a series of 10 ischemic stroke patients with concomitant COVID-19 disease. Out of 10, 8 had large infarcts (3 massive middle cerebral artery, 2 basilar artery, 2 posterior cerebral artery, and 1 internal carotid artery infarct territory). Two had cardiogenic embolic stroke due to atrial fibrillation. Almost half of our patients did not have a vascular risk factor. Nine did not have fever and were diagnosed with COVID-19 upon admission for stroke. Stroke occurred in the first week of respiratory symptoms with moderate pulmonary involvement. Most patients did not have hypoxia and did not develop respiratory failure or acute respiratory distress syndrome. The blood pressures were low and hemorrhagic transformation did not occur even after antiplatelet or anticoagulant therapy. Patients had markedly increased levels of lactate dehydrogenase, C-reactive protein, and D-dimer. Three patients died. It seems that ischemic strokes in COVID-19 patients tend to occur as large infarcts and can be seen in patients with mild to moderate pulmonary involvement. Introduction The evidence from SARS-CoV-2 studies implies that the virus can have central nervous system involvement [1-3]. Other reports suggest a prothrombotic state with COVID-19 [4]. Since the Coronavirus pandemic in 2020, there has been ever-growing evidence of neurologic complications associated with COVID-19. A retrospective study from China with 214 COVID-19 patients reported 4 ischemic strokes [5]. The report suggests that the neurologic complications were seen in patients with severe COVID-19. Also, abnormalities in liver enzymes, hemoglobin, lactate dehydrogenase (LDH), D-dimer, and creatinine (Cr), as well as lymphopenia, were more common in patients with neurologic complications. Patients with COVID-19 tend to be in a prothrombotic state [4]. On the other hand, increased prothrombin time (PT), partial thromboplastin time (PTT), and decreased platelet counts are also reported in COVID-19 [4], which may increase the risk of hemorrhagic incidents. This is especially important in patients with cerebrovascular diseases. Because the treatment of ischemic strokes includes different regimens of antiplatelet, anticoagulant, and thrombolytic therapies, it is important to understand the relation between COVID-19 and coagulative changes in the body. So far, a lot about the virus and its behavior is unknown. Here, we aim to describe the findings in 10 patients with ischemic stroke and COVID-19. Method One week after the announcement of the outbreak of COVID-19 in Tehran, Iran, Sina Hospital, affiliated with Tehran University of Medical Sciences, became one of the official referral centers for COVID-19 on February 28, 2020. This single-center retrospective case series study was carried out based on the data extracted from the Sina registry database [6] and was approved by the Iran National Committee for ethics in biomedical research. All cases in the emergency room were screened for COVID-19 regardless of their complaints. Until April 3, 2020, we had about 508 patients admitted with a moderate to severe COVID-19 diagnosis, amongst which 10 also had ischemic stroke. Due to hospital guidelines, the diagnosis of COVID-19 infection was made through clinical history and chest computed tomography (CT) scan.
If highly suggestive, the diagnosis was made with the chest CT findings. If the findings were only suspicious, a COVID-19 PCR test was performed. A brain CT scan was performed initially for all patients with stroke symptoms and, only if the brain CT scan findings were inconclusive, brain magnetic resonance imaging (MRI) with diffusion-weighted imaging/apparent diffusion coefficient (DWI/ADC) map sequences was performed. We admitted patients with stroke and COVID-19 and monitored them continuously for blood oxygen saturation and need for mechanical ventilation. In the case of any aggravation of neurologic symptoms, we repeated the brain CT scan (data availability: anonymized data not published within this article will be made available by request from any qualified investigator). Results We had 10 ischemic strokes (no intracranial hemorrhages) out of 508 COVID-19 patients (1.9%), corresponding to 10 out of 22 total strokes (with and without COVID-19; 45%). We had 12 patients with stroke who did not have COVID-19 and were referred to another center after diagnosis. Table 1 shows the patients' characteristics. Most of our patients were aged between 50 and 90 years. Only 1 patient was 27 years old. Five had cerebrovascular disease risk factors (none had previously known atrial fibrillation [AF]) and 4 were already taking anti-platelet prophylaxis prior to stroke. Most patients had experienced respiratory symptoms for 0-7 days prior to stroke. However, only one of them had sought medical attention and was in home quarantine receiving hydroxychloroquine and azithromycin with the diagnosis of a mild COVID-19 disease. Four patients did not have any symptoms of COVID-19 and were diagnosed with the infection upon stroke admission. When present, cough and dyspnea were the most common respiratory symptoms and were aggravated at the time of admission for stroke. In the emergency room (ER), only 1 patient had a 39.5°C fever and the vital signs were stable except for patient #6, who had increased heart and respiratory rates (heart rate: 120 beats per minute, respiratory rate: 40/min). The peripheral capillary oxygen saturation in most patients was acceptable (above 90%). Three had a peripheral capillary oxygen saturation of 88% and were intubated in the ER. Nonetheless, during the disease, 3 more needed mechanical ventilation due to decreased levels of consciousness. Of interest, the blood pressures were relatively low in our patients (90/60-160/90, median 130/80 mm of mercury [mm Hg]). Two patients had irregular heart rates due to new-onset AF. Upon admission, most of our patients were ill and had various degrees of decreased consciousness. There were no seizures or temporary loss of consciousness. The focal neurologic deficits were severe (Table 1). We made the diagnosis of COVID-19 infection based on the chest CT scan reported by a radiologist. Most of them had bilateral moderate to severe ground glass opacities in more than 2 lobes. Two patients had mild pulmonary involvement. However, severe alveolar involvement suggestive of acute respiratory distress syndrome was not present. The brain CT scan findings showed mostly large artery involvement with large infarct size, but there was no hemorrhagic transformation in the course of the disease. Three had a malignant left middle cerebral artery (MCA) stem infarct, 2 had a top-of-basilar artery thrombosis, 2 had a right posterior cerebral artery infarct, 1 had a left internal carotid artery thromboembolic stroke, and 2 had cardiogenic embolic stroke due to AF (Fig. 1).
The lab data showed that the total white blood cell count was normal to elevated (6.2–18.7 × 10³) and 3 patients had lymphopenia (absolute lymphocyte count: 620, 714, and 580). The hemoglobin levels were not dramatically decreased, ranging from 12.7 to 15.4 mg/dL. The platelet counts and Cr levels were normal (except for patient #6, who later developed acute tubular necrosis [ATN]). PT and PTT were normal. The liver enzymes and erythrocyte sedimentation rate levels were mildly increased. All patients had elevated C-reactive protein (mean: 50.54, SD: 48.17) and D-dimer (mean: 2,746) levels (Table 1). None of our patients were eligible for thrombolytic treatment. For acute ischemic stroke secondary prevention, our patients were treated based on the American Heart Association/American Stroke Association guideline 2019 [7]. Seven of our patients received antiplatelet therapy. Two patients with AF received anticoagulants after 1 week. The 27-year-old patient with extensive basilar and vertebral artery thrombosis received a combined anti-platelet, anticoagulant treatment that resulted in dramatic improvement. Unfortunately, 3 patients died less than a week after stroke, one with a malignant MCA stroke and one with a top-of-basilar artery stroke. Discussion There are reports showing that thromboembolic events are increased in patients with COVID-19 [4] and, apart from headache and dizziness, cerebrovascular accidents (CVAs) are amongst the common neurologic complications in this disease. It has been postulated that a state of hypercoagulation, along with endothelial injury following the massive inflammatory response in COVID-19, could be a potential contributor to developing ischemic stroke [8]. Given that CVA is also very common population-wide, it is understandable that it could be a common comorbidity given this high number of COVID-19 infections. Our stroke numbers are comparable to the Chinese report (1.9 vs. 1.7%, respectively) [4]. The main outstanding characteristics of our stroke patients were the size of the infarct, the involvement of larger arteries, and the relative absence of conventional cerebrovascular risk factors. This was noteworthy especially in our 27-year-old patient with total vertebrobasilar artery occlusion. This patient was quite an outlier. The only risk factor he possessed was smoking, and one can assume that the prothrombotic state associated with COVID-19 infection might have played an important role in his disease. The lab tests for genetic thrombophilia conditions will be performed after the anticoagulant regimen. It appears that COVID-19 disease can be associated with an increased thrombotic state and with large artery thrombosis. A case series study in New York has reported 5 COVID-19 patients under 50 years with large-vessel stroke [9]. Another interesting finding was the normal to mildly elevated blood pressures in our patients. We usually anticipate a rather high blood pressure on the first day of stroke. Several mechanisms could contribute to low blood pressure in acute ischemic stroke, including cardiac emboli, heart failure, gastrointestinal bleeding, and sepsis [10]. In general, patients who are normotensive at the presentation of acute ischemic stroke tend to have a cardioembolic source [11]. It seems that in the context of COVID-19, cardiogenic shock or sepsis could be the plausible causes of low blood pressure.
In the previous reports [4], the severe neurologic complications were more common in the severe COVID-19 cases. Nevertheless, in our patients, the pulmonary involvement was mostly mild to moderate and none of our patients reached a severe respiratory distress pattern prior to stroke. Even the arterial oxygen saturations were acceptable, and the late need for mechanical ventilation was due to decreased levels of consciousness. Also, in our patients the absence of fever and the fact that the COVID-19 disease was diagnosed upon admission were noteworthy. Therefore, it is of utmost importance to note that most patients were not aware of having COVID-19 disease and did not primarily complain of respiratory symptoms. In our hospital, 2 separate emergency rooms exist: 1 designated for COVID patients and the other for regular patients. Most of our patients came to the non-COVID ER. This once again emphasizes the importance of screening all patients for COVID-19 during the epidemic, since many patients may lack telltale signs of the infection such as fever. Furthermore, performing a simultaneous chest/brain CT scan seems to be a convenient method in patients with CVA. According to previous reports, we expected abnormalities in liver enzymes, hemoglobin, LDH, D-dimer, and Cr, as well as lymphopenia [5]. Similarly, we noted abnormalities in significant lab findings such as C-reactive protein, LDH, and D-dimer levels. But our patients did not have decreased hemoglobin, an extremely low lymphocyte count or extremely high liver enzymes as was reported in a Chinese report [4]. Aside from being a thrombus indicator, D-dimer is known to be an acute phase reactant. Therefore, one can assume that the D-dimer rise is basically due to a severe underlying COVID-19 infection. However, we were not convinced that the viral infection was severe enough to justify the rise. It would be enlightening to compare the D-dimer levels in all COVID-19 patients with and without thrombotic events. Unfortunately, the D-dimer levels were assessed in our hospital only in patients with a thromboembolic disease, and we did not have the necessary data to investigate this matter. The mortality rate was high, as expected in elderly patients with massive stroke or with COVID-19 infection. There was no clear evidence of DIC or increased PT and PTT (unlike previous reports) [4], and we did not encounter a hemorrhagic transformation after antiplatelet and anticoagulant therapy. Anticoagulation with heparin has been used in critically ill patients with COVID-19. On the other hand, in acute large ischemic infarcts, there is always hesitation about using anticoagulation for fear of hemorrhagic transformation. One may assume that the situation is different in COVID-19 patients with stroke, but we need more reports to confirm the safety or benefit of using anticoagulants in the acute phase of stroke in these patients. Unfortunately, we did not have an eligible case for recombinant tissue plasminogen activator injection to observe the interaction of this treatment with the assumed coagulopathy of COVID-19 disease. On the other hand, large artery thrombosis may be a suitable candidate for thrombectomy, which we did not perform in these cases. There might be some possible limitations in this study. First, we did not screen patients based on their COVID PCR test. At the beginning of the COVID-19 pandemic, because of the substantial number of COVID-19 patients, we did not have enough PCR tests for screening.
Therefore, the initial diagnosis of COVID-19 was based on the chest CT findings. Second, this study was a retrospective study, and the patients' data were reliant on documentation in clinical histories. Third, given that this study was an uncontrolled case series, there was no comparison group to differentiate the acute ischemic stroke features in COVID-19 patients and non-COVID-19 patients. Conclusion In our series, about 1.9% of patients with moderate-to-severe COVID-19 had concomitant ischemic stroke. Ischemic strokes in COVID-19 patients tend to occur as large infarcts, mostly due to large artery thrombosis, and can be seen in patients with mild to moderate pulmonary involvement who are not aware of having COVID disease. Antiplatelet and anticoagulant therapy was not harmful in our patients; however, recombinant tissue plasminogen activator administration should be investigated in COVID-19 patients to understand the potential benefit or harm of this treatment in relation to COVID-19 coagulative/thrombotic complications.
Comparison of intraocular lens power calculation in simultaneous and sequential pterygium and cataract surgery Objective: To evaluate the effect of pterygium excision on intraocular lens (IOL) power and refraction. Methods: The present study was carried out on patients with combined cataract and pterygium excision (combined group) and pterygium surgery first and cataract surgery after one month (sequential group). Parameters such as mean keratometry (K) values, axial length, IOL power, and corneal astigmatism were compared pre and postoperatively in the combined and sequential groups. Results: 70 eyes of 70 patients were included in the present study. The mean age of the participants in the combined group was 70.46±10.12, whereas that in the sequential group was 68.68±11.22 (p=0.243). The mean horizontal length of the pterygium in the combined group was 2.64±0.17 mm and 2.57±0.17 mm in the sequential group. The mean postoperative K values (p=0.03) and IOL power (p=0.04) in the combined group were significantly higher than the preoperative values. The estimated postoperative refractive error in the combined group was -0.50±1.00 D and 0.25±0.5 D in the sequential group (p=0.04). On the other hand, the postoperative refraction in the sequential group was predictable. Corneal visibility was diminished on the nasal side in almost all the patients in the combined group as compared to the sequential group. Conclusion: The postoperative refraction errors were positively correlated with the length of pterygium in the combined group. The unpredictability of these errors recommends sequential surgery in cases with concurrent pterygium and cataract. Introduction Pterygium is a triangular-shaped conjunctival overgrowth extended to the cornea. It is a common ocular disease associated with ocular irritation and visual impairment and could induce astigmatism. The current standard treatment for this condition is surgical excision, although high recurrence rate is yet an unresolved issue [1]. The surgery significantly increases the spherical power of the cornea, while a significant decrease was observed in astigmatism [2]. Pterygium is a common condition in elderly patients and often accompanied by cataract. The exposure to ultraviolet radiation leads to the cooccurrence of pterygium and cataract, and hence, a high incidence of both conditions is noted in tropical countries. A study from central India demonstrated that the prevalence of pterygium was associated with the exposure to sunlight as well as outdoor activities [3]. Nirmalan et al. have shown an increased prevalence of cataract in rural population of India [4]. Intraocular lenses (IOLs) are artificial medical devices that replace the natural lens of the eye, which has turned opaque and, hence, is removed during cataract surgery. However, the power of these IOLs to be implanted is a major concern in patients undergoing combined cataract and pterygium surgery. The technological advancements in cataract removal have raised the expectations of vision quality. Nonetheless, accurate IOL power is obligatory for improved visual and refractive outcomes. Kamiya et al. assessed the predictability of IOL power calculation after simultaneous pterygium and cataract surgery with IOL implantation in a retrospective analysis and concluded that the myopic shift of refraction postoperatively occurred due to the steepening of cornea [5]. Koc et al. 
studied the effect of the size of pterygium on IOL power in patients who underwent bilateral pterygium surgery without having cataract. The study suggested that IOL power should be 0.50 D smaller than the calculated IOL power when the size of the pterygium is > 2.44 mm [6]. To the best of our knowledge, no systematic studies have been carried out on the predictability of IOL power in either the combined or sequential cataract and pterygium surgery in a randomized pattern. Thus, the present study aimed to assess the refractive accuracy of the calculated IOL power. Materials and Methods Sample size: To find a 25% difference between the simultaneous and sequential groups with 80% power, 5% significance level, and allowing 10% loss to follow-up, the necessary sample size was determined to be 35 eyes in each group. Patient selection and study design: This study was carried out according to the protocols of the Declaration of Helsinki. The ethical approval was obtained from the medical ethics committee of the hospital. An informed consent was also obtained from all the participants. This prospective, comparative, and randomized case series included patients with combined cataract and pterygium excision (Combined group) and pterygium surgery first and cataract surgery after one month (Sequential group). The duration of the study was from May 2017 to July 2018. Exclusion criteria were the following: previous history of ocular trauma or surgery, recurrent pterygium, temporal pterygium, combined nasal and temporal pterygium, corneal ectasia, and pterygium < 2.00 mm and > 3 mm. Preoperative assessment included best-corrected visual acuity, slit lamp examination, and keratometry (K) on autorefractor keratometer (Topcon KR-800, Topcon Medical Systems Inc., Oakland, NJ, USA). This keratometer performs K reading in the central 3.2 mm of the cornea. Both flat (K1: angle of flat axis, K2: angle of steep axis) and steep axis measurement was performed. Hand-held tonometer (Perkins, Haag-Streit, UK Ltd., Harlow, UK), retinal evaluation, and Ascan biometry (Axis-II PR Biometer, Quantel Medicals, Cournon-d'Auvergne, France) were used to assess the axial length measurement and to calculate the IOL power. The IOL power was calculated with an Aconstant of 118.0 using the SRK II formula. Simple randomization method was followed by toss method. The heads were assigned to the combined and tails to the sequential group. Surgical technique: Pterygium excision All the patients were operated on by a single surgeon. The lignocaine jelly (Xylocaine Jelly 2% AstraZeneca India Ltd.) was instilled in the conjunctival sac 10 min before the surgery. The size of the pterygium was measured with digital calipers from the limbus to the apex. A vertical incision was facilitated over the body of pterygium 2 mm behind the limbus. The head of the pterygium was dissected from the cornea with blunt dissection. The subconjunctival tissue from under the body of the pterygium was removed. The bleeding points were cauterized with wet field cautery. The body of the pterygium was kept retracted on to the bare sclera, and the area underneath was dried with a cotton bud. A free conjunctival autograft was excised from the superior bulbar area. The graft size was obtained by measuring the area of exposed sclera with the calipers. Then the graft was positioned over the bare sclera in the nasal area with limbus-to-limbus orientation for 10 minutes by applying gentle pressure with a sponge. 
After drying, the redundant margins of the graft were excised with Vannas scissors, and the lid speculum was removed without subsequent suturing. The eye was bandaged for 24 h. Cataract surgery All the surgeries were performed by the same surgeon. An equivalent of 0.5% proparacaine hydrochloride eye drops were instilled topically two times every 10 min before the surgical procedure. A 20-G side-port incision was created on the appropriate side as required. Then, viscoelastic (2% hydroxypropyl methylcellulose, Appavisc, Appasamy Ocular Devices, Puducherry, India) was injected through the side port using a 23-G blunt tip cannula. Next, a 2.8-mm clear corneal temporal incision was performed. Subsequently, continuous curvilinear capsulorhexis was facilitated using Utrata forceps under viscoelastic conditions. Hydrodissection was performed using balanced salt solution (BSS). Hydrodelineation was carried out in patients with posterior polar cataracts. The nucleus was managed by the direct chop method at the following settings: power 90% (linear), aspiration flow rate 34 cc/ min, and vacuum 350 mmHg. These parameters were not modified in any of the cases until the last fragment was emulsified. A thorough cortical clean-up was accomplished by irrigation and aspiration. The anterior chamber was filled with ophthalmic viscosurgical device single-piece hydrophilic IOL (Acryfold, Appasamy Ocular Devices) with a 6-mm optic diameter, dual haptics, 12.5-mm overall length, biconvex optic design, and square edge design. Subsequently, the anterior chamber was washed to clear the viscoelastic device, and stromal hydration of the side port and the main incision were carried out in the presence of BSS. All surgeries were completed without adverse events. Emmetropic power was selected in both groups, and the amount of postoperative ametropic power was noted. The prediction spherical error was calculated by subtracting the predicted postoperative refraction from the postoperative spherical equivalent at one month. Combined pterygium and cataract surgery The procedure was the same as the one described above. Postoperative estimate refractive error This parameter was calculated by subtracting the projected postoperative refraction (spherical) from the postoperative refractive error (sphere) at 1month interval. Statistical analysis: The data were entered in Excel sheet (Software version 14.1.0 [110310]/ 2011) (Microsoft Corporation, Redmond, WA, USA), and statistical analyses were performed using SPSS version 13.0 (SPSS Inc., Chicago, IL, USA). Chi-square test and t-test were used for categorical and continuous variables, respectively. Values were expressed as mean ± standard deviation (SD) and percentage as appropriate. A p-value < 0.05 was considered statistically significant. Results A total of 70 eyes of 70 patients were included in the present study. During surgery, the mean age of the participants in the combined group was 70.46 ± 10.12 (range, 60-90) years, whereas that in the sequential group it was 68.68 ± 11.22 (range, 59-90) years (p=0.243). The mean horizontal length of the pterygium in the combined group was 2.64 ± 0.17 (range, 2.10-3) mm and 2.57 ± 0.17 (range, 2.20-3) mm in the sequential group. The pre-and postoperative mean K values, axial length, IOL power, and corneal astigmatism are presented in Table 1. 
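Before turning to the comparison in Table 1, the sketch below illustrates the two calculations defined in the Methods: the SRK II emmetropic IOL power (A-constant 118.0) and the one-month prediction error. The axial-length adjustment of the A-constant follows the commonly published SRK II rules rather than anything stated in this paper, and the numerical inputs are illustrative, not patient data.

```python
# Sketch of the SRK II IOL-power calculation and the prediction error
# described in the Methods. The A-constant adjustment by axial length is
# the commonly published SRK II rule set (an assumption; the paper only
# states that SRK II with A = 118.0 was used).

def srk2_iol_power(mean_k, axial_length, a_constant=118.0):
    """Emmetropic IOL power (D) from mean keratometry (D) and axial length (mm)."""
    if axial_length < 20.0:
        a = a_constant + 3.0
    elif axial_length < 21.0:
        a = a_constant + 2.0
    elif axial_length < 22.0:
        a = a_constant + 1.0
    elif axial_length <= 24.5:
        a = a_constant
    else:
        a = a_constant - 0.5
    return a - 0.9 * mean_k - 2.5 * axial_length

def prediction_error(predicted_refraction, achieved_spherical_equivalent):
    """Postoperative estimate refractive error at one month (D)."""
    return achieved_spherical_equivalent - predicted_refraction

# Illustrative values only (not data from this study):
print(round(srk2_iol_power(mean_k=44.5, axial_length=23.2), 2))  # ~19.95 D
print(prediction_error(-0.25, -0.75))                            # -0.50 D myopic surprise
```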
In the combined group, the mean postoperative K values (p=0.03) and IOL power (p=0.04) were significantly higher than the preoperative values, while no significant change was detected in the axial length. Furthermore, the corneal astigmatism was significantly decreased from 2.43 ± 2.0 preoperatively to 1.25 ± 1.0 postoperatively (p=0.02) in the combined group, while no change was observed in the sequential group. The estimated postoperative refractive error in the combined group was -0.50 ± 1.00 (range, ±0.25-1.5) D and 0.25 ± 0.5 (range, ±0.25-0.75) D in the sequential group, exhibiting a significant difference (p=0.04). Two patients showed postoperative refraction of +1.5 D sphere, and one patient had -1.25 D sphere in the combined group. On the other hand, the postoperative refraction in the sequential group was predictable. The size of the pterygium and the change in the IOL power are depicted in Figs. 1 and 2. Unpredictability was noted in the IOL power in the combined group as compared to the sequential group. With increasing pterygium length, a high variability was detected in the IOL power. Corneal visibility was diminished on the nasal side in almost all the patients in the combined group as compared to the sequential group. Discussion The predictability of refraction after combined cataract and pterygium surgery is challenging due to altered central corneal curvature leading to modified IOL power. It is imperative to decide whether simultaneous or sequential cataract and pterygium surgery will be beneficial in a particular case. The present study suggested that sequential pterygium and phacoemulsification surgery with implantation of a monofocal IOL is a more effective method than the combined technique if the size of the pterygium is > 2.00 mm. Koc et al. and Kamiya et al. showed changes in the horizontal K value on the steeper side, which caused a myopic shift in the postoperative IOL power [5,6]. To the best of our knowledge, this is the first prospective study that assessed the effect of pterygium excision on IOL power and refraction in the combined and sequential approaches. Kamiya et al. studied the predictability of IOL power in patients who underwent simultaneous primary pterygium excision and phacoemulsification with monofocal IOL implantation in a retrospective record review [5]. However, the predictability of IOL power in patients who underwent simultaneous primary pterygium excision and phacoemulsification with monofocal IOL implantation was also analyzed in a retrospective study, wherein 82% of the eyes were within ±1.0 D of the targeted refraction [6]. Furthermore, Koc et al. analyzed the size of pterygium and its effect on biometry in patients with unilateral pterygium [6]. Since cataract surgery was not performed, the biometric data were compared to those of the normal eye. The current study was carried out on patients with both pterygium and cataract. The size of the pterygium was estimated as 2-3 mm in both groups. Kim et al. have shown that the horizontal K value 3 months after pterygium excision increased by 0.15 D in eyes with pterygia smaller than 2 mm and by 0.95 D in eyes with pterygia larger than 2 mm [7]. However, Koc et al. considered pterygia larger than 2.40 mm in their study on biometric changes after pterygium surgery [6]. Nejima et al. have also quantified topographic changes after a large pterygium excision [8]. The most precise way to measure the K-reading is with an optical biometer.
Since it was unavailable in the rural setup, the K-readings were obtained on autorefractor keratometer in the central 3.2 mm of the cornea. It was considered that the horizontal diameter of the cornea had a size of 12-12.5 mm more than > 3 mm of the pterygium, falling in the K-reading measuring zone. Therefore, patients with a pterygium size > 3 mm and combined nasal and pterygium were excluded from this study. Topographic changes after recurrent pterygium surgery are unpredictable [9,10]. Walland et al. have shown astigmatism of 15D in patients having recurrent surgery [9]. Thus, patients who had undergone recurrent pterygium surgery were excluded from the study group. In the sequential group, an interval of 1 month was maintained between pterygium and cataract surgery. The study by Kim et al. did not reveal any difference in the K value between 1 and 3 months postoperatively in patients with primary pterygium [7]. Nejima et al. suggested that the eyes with the advancing edge of the pterygium between one-third of the diameter of cornea and pupillary margin (4 mm) restoration of corneal curvature will recover in 3 months [8]. In our study, the size of pterygium was below 4 mm. The changes in corneal curvature have been reported previously [5,6,11]. The present study showed there was a significant difference in the Size of the pterygium
2021-06-26T05:18:34.469Z
2021-04-01T00:00:00.000
{ "year": 2021, "sha1": "5c69e159e76cdb7f2ff598382766a0b121196c8b", "oa_license": "CCBY", "oa_url": "https://doi.org/10.22336/rjo.2021.31", "oa_status": "BRONZE", "pdf_src": "PubMedCentral", "pdf_hash": "5c69e159e76cdb7f2ff598382766a0b121196c8b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
256406099
pes2o/s2orc
v3-fos-license
Chaotic characteristics and attractor evolution of friction noise during friction process Friction experiments are conducted on a ring-on-disk tribometer, and friction noise produced during the friction process is extracted by a microphone. The phase trajectory and chaotic parameters of friction noise are obtained by phase-space reconstruction, and its attractor evolution is analyzed. The results indicate that the friction noise is chaotic because the largest Lyapunov exponent is positive. The phase trajectory of the friction noise follows a “convergence-stability-divergence” pattern during the friction process. The friction noise attractor begins forming in the running-in process, and the correlation dimension D increases gradually. In the stable process, the attractor remains steady, and D is stable. In the last step of the process, the attractor gradually disappears, and D decreases. The friction noise attractor is a chaotic attractor. Knowledge of the dynamic evolution of this attractor can help identify wear state changes from the running-in process to the steady and increasing friction processes. Introduction Relative motion between two bodies causes friction and contact surface wear, which may exert negative effects on the reliability, security and usage of equipment [1,2]. Distinguishing and predicting wear states is thus important to extend the life of a mechanical system. However, identifying the wear states of complicated nonlinear dynamic processes is difficult. In recent years, many domestic and foreign researchers have published studies on the mechanisms and factors affecting friction noise [3]. Chen et al. [4], for example, performed measurements of the noise induced by friction, and used the waveform and power spectrum to analyze the friction noise sound pressure level. The results of this indicated that friction-induced noise was generated by relative sliding friction and vibration motion. Chen and Zhou [5] applied the concept of friction coefficient and observations of scar topography to analyze the mechanism of friction noise. The results of this work showed that fluctuations in friction force were the main mechanism of this noise. Lars and Staffan [6] studied a spiral-shaped modification of the surface topography of a brake disk to reduce noise; results suggested that a spiral pattern could strongly reduce squealing. A basic study of friction noise caused by fretting was conducted by Jibiki et al. [7], and the results of this research indicated that friction noise was generated by certain cycles of fretting. The sound pressure level increased with increasing fretting stroke and frequency, which is related to the average sliding velocity and the wear loss. Chen et al. [8] studied the relationship between friction noise and friction surface characteristics under reciprocating sliding. The group found that the worn morphology with high intensity level noise presented obvious flake fractures, and that these fractures were important causes of friction noise. Le Bot et al. [9] carried out an experiment on friction noise during dry contact under light pressure, and results demonstrated that friction noise increased with sliding speed and roughness increasing. Disk surfaces with different groove structures were investigated on a pad-disk tester by Wang et al. [10,11], who demonstrated that a disk surface with 45 degree grooves could strongly reduce friction noise. The main purpose of studying friction signals is to identify wear states in the friction process. 
However, the friction process is extremely complex, and accurately identifying and predicting wear conditions from previous studies on friction noise is difficult. Therefore, several researchers have sought to determine the nonlinear dynamics of friction signals to recognize wear states. Zhu et al. [12] extracted the friction force and vibration signals from a pin-disk experiment to realize the chaotic characteristics of a tribological system, and utilized the spectrum and fractal dimension methods to study these signals quantitatively. The results of this research showed that friction signals presented chaotic natures, and that the tribological system was a chaotic system. Zhou et al. [13] studied the chaotic characteristics of friction temperature in the friction process and found that the temperature signal featured chaotic characteristics that could be utilized to recognize changes in wear states. Sun et al. [14,15] applied chaos theory to study the friction vibration signals extracted from a pin-disk test, and results revealed that these signals demonstrated chaotic characteristics. Wear states can be identified by the evolution of friction vibration attractors. Liu et al. [16] conducted spherical-on-disk running-in tests, and chaotic attractors were used to analyze cross correlations between tangential and normal vibrations. The results of this work demonstrated that the chaotic attractors of vibration signals converged as running-in continued and could be utilized to describe changes in wear states. Oberst and Lai [17] utilized a recurrence plot to reveal the chaotic characteristics of brake squeal noise. Friction noise contains information reflecting wear states, and friction noise signals can be collected in real time without affecting the normal friction process. Therefore, nonlinear dynamics theory may be utilized to prove that friction noise presents chaotic characteristics and that chaotic attractors undergo dynamic evolution. Friction noise thus presents a new route through which wear states can be monitored on-line, identified and predicted. Based on the rotational movement of a ring-disk under oil lubrication, the aim of this paper is to illustrate the complex chaotic characteristics and attractor evolution of friction noise signals, knowledge of which is instructive in revealing wear states. In Section 2, the experimental apparatus and specimens are introduced, the tests are conducted, and the original time series of the friction noise signals are obtained. In Section 3, the processed time series of the friction noise signals are obtained by means of reducing sampling and de-nosing. In Section 4, the evolution of the phase trajectories of friction signals is presented by reconstructing the phase space. In Section 5, the correlation dimension D and largest Lyapunov exponent are calculated. Finally, in Section 6, a discussion and analysis are provided, and the main conclusions of this work are given. Tribometer description The friction experiments were carried out on a rotating tribometer. Friction noise signals can be extracted in real-time via the ring-disk rotating process. The experimental device is composed of a power system, a loading system, a clamp system, and a data acquisition and an analysis system. The equipment is shown in Fig. 1. The upper sample ring was installed on a ring holder by a locking pin and a locking bolt, while the lower sample disk was mounted on a disk holder by a locating pin illustrated in Fig. 2. 
The friction torque was measured by a torque sensor, which was also attached to the disk holder. Friction noise signals were measured by a PCB microphone fixed 35 mm from the center of the friction pairs. The sensitivity of the microphone was 45 mV/Pa, and its dynamic range was 15-122 dB. Signals were extracted and stored by a CoCo80 analyzer. Test samples and test conditions The test samples included ring and disk specimens. The upper sample was a ring made of GCr15 bearing steel with a Rockwell hardness of 61 HRC after quenching treatment. Its outside diameter was 34 mm, and its inner diameter was 24 mm. The surface of this ring was first processed by turning, and then ground and polished using sandpapers of 800#, 1200#, 1500#, and 2000# in sequence. The roughness Ra of this specimen was measured, and the mean of three measurements was taken as the final Ra of the specimen. The ring specimens showed a roughness of Ra=0.040−0.043 μm. The lower sample was a disk made of AISI 1045; its Rockwell hardness was 44 HRC without heat treatment and its diameter was 46 mm. The disk had an initial roughness of Ra=5.520−6.000 μm after turning. The equivalent radius of the friction pair was 14.5 mm, and the nominal contact area was 455.53 mm 2 . Table 1 shows the experimental conditions of the five tests. The tests were conducted under atmospheric conditions (16−22 °C; relative humidity, 48%−58%). Prior to testing, the specimens were cleaned with ethanol (97% pure) using an ultrasonic cleaner. A volume of 0.2 ml of 15W-40 lubricating oil was dropped onto the working surface of the lower sample, and good contact of the specimen surfaces was ensured during installation. The sampling frequencies of the sound level sensor and the friction torque were 12.8 kHz and 300 Hz, respectively, and the background noise was measured before the tests. To ensure the repeatability of the results, each of the five tests was carried out at least three times to account for the randomness of the friction noise. Experimental signal processing The voltage signals measured in the experiments were converted to sound pressure levels in decibels. The sound pressure is first obtained from the measured voltage as P = U/sty and P_0 = U_0/sty, where U is the voltage of the test data, U_0 is the voltage of the background noise, P is the sound pressure of the test data after conversion, P_0 is the sound pressure of the background noise after conversion, and sty is the sensitivity of the microphone. The corresponding sound pressure levels are Lp = 20·log10(P/P_c) and Lp_0 = 20·log10(P_0/P_c), where P_c is the reference value of sound pressure, P_c = 2 × 10^−5 Pa, Lp is the sound pressure level of the test data, and Lp_0 is the sound pressure level of the background noise. The equation used to calculate the sound pressure level after removal of the background noise is Lp_1 = 10·log10(abs(10^(Lp/10) − 10^(Lp_0/10))), where Lp_1 is the sound pressure level of the friction noise without the background noise. Generally speaking, the friction noise of the friction system includes other extraneous noise that may influence its dynamic characteristics. Therefore, filtering and denoising of the friction noise signals were conducted by empirical mode decomposition (EMD). EMD was put forward by Huang et al. [18] to analyze nonlinear and non-stationary time series. Using EMD, a time series can be decomposed into a finite set of intrinsic mode functions (imf) with different components. Then, specific imf components selected from the original components are recombined into a new time series for the subsequent analysis and calculations [19,20].
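As a minimal sketch of the voltage-to-SPL conversion and the energetic background subtraction just described, the following code uses the quantities named in the text (U, sty, P_c); the example voltages are placeholders, not measured values.

```python
import numpy as np

P_C = 2e-5      # reference sound pressure, Pa
STY = 0.045     # microphone sensitivity, V/Pa (45 mV/Pa as stated above)

def spl_db(u_volts, sensitivity=STY, p_ref=P_C):
    """Convert a measured voltage to a sound pressure level in dB."""
    p = np.abs(u_volts) / sensitivity           # sound pressure, Pa
    return 20.0 * np.log10(p / p_ref)

def remove_background(lp, lp0):
    """Energetically subtract the background level Lp0 from the measured level Lp."""
    return 10.0 * np.log10(np.abs(10.0 ** (lp / 10.0) - 10.0 ** (lp0 / 10.0)))

# Placeholder values: a 0.02 V signal against a 0.002 V background.
lp = spl_db(0.02)       # ~87 dB
lp0 = spl_db(0.002)     # ~67 dB
print(round(remove_background(lp, lp0), 1))
```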
Details of the EMD algorithm are provided as follows. The maximum and minimum values of a time series x(t) are determined, and the upper envelope curve x_max(t) and lower envelope curve x_min(t) of the original time series are calculated by cubic spline functions. The mean of the upper and lower envelope curves is denoted as m(t), i.e., m(t) = (x_max(t) + x_min(t))/2 (Eq. (4)). The original time series x(t) minus the mean m(t) yields a new series h(t) without low-frequency components, h(t) = x(t) − m(t) (Eq. (5)). In general, the first h(t) is not necessarily a stationary data series. Therefore, it should be recalculated using Eqs. (4) and (5) repeatedly until it meets the stopping criterion of Eq. (6), defined through the standard deviation SD between successive sifting results. The resulting component is taken as imf_1, and the residue r_1 = x(t) − imf_1 is treated as the new x(t); in this way, imf_1, imf_2, …, and imf_n are calculated successively until the last time series r_n can no longer be decomposed. Thus, the original time series can be presented in terms of the imf components and a residual component r, as shown in Eq. (8): x(t) = imf_1 + imf_2 + … + imf_n + r_n, where r_n is the residual component representing the trend or mean of the original time series x(t). Each imf component is an oscillating function with different amplitudes and frequencies, and presents two characteristics [21]. In the data domain of each imf component, the number of extrema must be equal to the number of zero crossings or differ from it by one at most. The average value of the upper and lower envelope curves must always be equal to zero. Taking the friction noise signal in test 4 as an example, the original signal is decomposed into 11 different imf components and a single r. Figure 4 shows the time- and frequency-domain representations of the original signal, imf_1, imf_2, imf_5, r, and the reconstructed signal. Several peaks in the power spectra of the original signal and the imf_1 and imf_2 components may be observed. By contrast, the power spectrum of imf_5 is relatively smooth. Therefore, the new signal is reconstructed from the components imf_5 to imf_11 and r. The power spectrum of the reconstructed signal is smoother than that of the original signal. The friction signal increases in the running-in process, remains relatively stable in the steady-state process, and then increases sharply in the increasing friction process. At this point, the specimens were damaged and the tests were ended. Phase-space reconstruction The phase space is a geometric space that reveals the states of a system. In general, a nonlinear dynamic system presents a very high phase-space dimension. However, in practice, the data from the tests are a single-variable time series obtained from the interaction of the different parameters of the system. Thus, the test data should be reconstructed into a high-dimensional space to gain more dynamic information. Takens [22] presented a method wherein a 1D chaotic time series is extended into a 3D or higher-dimensional phase space by phase-space reconstruction. The aim of this method is to expose more information on the system hidden in the time series. The time-delay method is usually utilized to reconstruct the phase space of a 1D time series x_1, x_2, x_3, …, x_n. Samples separated by the delay τ are taken as the components of each vector, i.e., X_i = [x_i, x_{i+τ}, x_{i+2τ}, …, x_{i+(m−1)τ}], i = 1, 2, …, N, with N = n − (m−1)τ (Eq. (9)), where m is the embedding dimension (the dimension of the reconstructed phase space), X_i is a vector of the reconstructed phase space, τ is the delay time, n is the length of the original time series, and N is the number of vectors in the reconstructed phase space.
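A compact sketch of the delay-coordinate embedding of Eq. (9) is given below; the random series merely stands in for a denoised friction noise segment, and the parameters reuse the τ = 22, m = 25 values reported for test 4.

```python
import numpy as np

def delay_embed(x, m, tau):
    """Build the N x m matrix of delay vectors X_i = [x_i, x_{i+tau}, ..., x_{i+(m-1)tau}]."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    n_vectors = n - (m - 1) * tau            # N in the notation above
    if n_vectors <= 0:
        raise ValueError("time series too short for this m and tau")
    idx = np.arange(n_vectors)[:, None] + tau * np.arange(m)[None, :]
    return x[idx]

# Example with the parameters reported for test 4 (tau = 22, m = 25):
x = np.random.default_rng(0).standard_normal(20_000)   # stand-in for a denoised signal
X = delay_embed(x, m=25, tau=22)
print(X.shape)   # (19472, 25), i.e. N = n - (m-1)*tau vectors
```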
Selecting an appropriate τ and m is very important for reconstructing the phase space. When τ is too small, x(t) and x(t+τ) cannot be independent of each other because their values are close to each other. When τ is too large, the relationship between x(t) and x(t+τ) becomes essentially random, and the chaotic attractor cannot be accurately determined. In this regard, the autocorrelation function method is used to determine an optimal τ [23]. For a single-variable time series {x(i)}, the autocorrelation function is C(τ) = (1/(n−τ)) Σ_{i=1}^{n−τ} x(i)·x(i+τ) (Eq. (10)), and the optimal delay time is the τ at which C(τ) falls to the value of (1−1/e)·C(0). Taking the friction noise of test 4 as an example, Fig. 6 illustrates the relationship between τ and C(τ). Here, C(τ) falls with increasing τ. The coordinates of the first point below this threshold are (22, 0.6184), so the delay time τ of this time series is 22. The precondition for choosing m is m ≥ 2d + 1 (d is the dimension of the dynamic system). For an infinite time series without noise, m only needs to be the smallest integer larger than D. In a finite time series with noise, however, m should be much larger than D. If m is too small, the attractor could undergo self-intersection because of folding. Thus, selecting a relatively large m is necessary in theory. Unfortunately, increases in m also increase the calculation burden for the geometric invariants (e.g., D and the Lyapunov exponent) in practical applications. Moreover, the influences of noise and rounding errors significantly increase. Therefore, an optimal m is selected by the saturated correlation dimension method [13]. The advantage of this method is that both D and m can be obtained at the same time. The dimension is calculated by means of the G-P algorithm, which was proposed by Grassberger and Procaccia [24]. The correlation sum of the G-P algorithm is C(r) = (2/(N(N−1))) Σ_{i<j} H(r − |X_i − X_j|), where H(·) is the Heaviside step function. In the limit of small r, D is defined as D = lim_{r→0} ln C(r)/ln r. Reconstructing the phase space of the 1D time series is necessary prior to this calculation. The τ can be obtained by the autocorrelation function method described above, and the lnC(r)−lnr curves are plotted in double-logarithmic coordinates for each m. Nearly linear intervals are then selected from the curves and fitted by the least-squares method [25]; their slopes give D. Then, m is plotted as the horizontal coordinate and D as the vertical coordinate, so that each fitted slope is the D corresponding to that m in the resulting curve. Finally, the optimal m and D are obtained when D becomes stable. Taking the friction noise in test 4 as an example, D and m are calculated by the saturated correlation dimension method. Figure 7(a) shows the double-logarithmic curves obtained for integer values of m from 11 to 30. Figure 7(b) displays the relationship between D and m obtained by fitting through the least-squares method. When m is less than 25, D increases with increasing m. When m is greater than or equal to 25, D remains stable over a small range. The optimal m and D of the friction noise in test 4 are 25 and 0.8977, respectively. Table 2 shows τ and the optimal m of the friction signals from the five tests. Phase-space trajectory and attractor evolution The τ and optimal m of the friction noise signals were calculated by Takens' theorem [22] in Section 4.1. However, observing the evolution of the phase-space trajectory visually is impossible because the reconstructed space has a high dimension.
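The correlation sum and the slope fit of the G-P procedure can be sketched as follows; this brute-force version is only meant to illustrate the calculation (the pairwise-distance step is O(N^2), and the scaling range must be chosen from the linear part of the ln-ln plot, as described above).

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_sum(X, radii):
    """G-P correlation sum C(r) over the delay vectors X (one vector per row)."""
    d = pdist(X)                                   # all pairwise distances
    return np.array([np.mean(d < r) for r in radii])

def correlation_dimension(X, r_lo, r_hi, n_r=20):
    """Slope of ln C(r) vs ln r over a chosen (approximately linear) scaling range."""
    radii = np.logspace(np.log10(r_lo), np.log10(r_hi), n_r)
    c = correlation_sum(X, radii)
    mask = c > 0                                   # keep radii with non-empty counts
    slope, _ = np.polyfit(np.log(radii[mask]), np.log(c[mask]), 1)
    return slope

# Usage sketch (X from delay_embed above, subsampled to keep the O(N^2) cost down);
# the range [r_lo, r_hi] must be picked from the linear region of the ln-ln curve.
# D = correlation_dimension(X[::10], r_lo=0.5, r_hi=5.0)
```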
Therefore, the high-dimensional space is projected onto a 3D space by principal component analysis (PCA; Appendix A) [17,26]. Three main directions selected from the reconstructed high-dimensional space are used to draw the 3D graph so that the evolution of the phase-space trajectory can be observed conveniently. The reconstructed vector matrix X is given in Eq. (9). The inner product matrix Y of the vector matrix X is Y = X^T X. All of the eigenvalues of matrix Y are calculated and arranged in descending order (λ_1 ≥ λ_2 ≥ λ_3 ≥ … ≥ 0), and the three largest eigenvalues λ_1, λ_2, and λ_3 are chosen. Then, a new N × 3 vector matrix α is obtained by projecting the reconstructed m-dimensional vector matrix X onto the three eigenvector directions V_1, V_2 and V_3 corresponding to λ_1, λ_2 and λ_3, respectively: α = X·[V_1, V_2, V_3]. Each row of matrix α represents the coordinates of a point. A total of N points are thus obtained and drawn in 3D coordinates to present the 3D phase-space trajectory. Taking the friction noise signals in test 4 as an example, the τ and m of the friction noise are 22 and 25, respectively. The friction noise signal is divided into sections of 20,000 data points each, and the points are drawn in 3D space by phase-space reconstruction. All of the phase trajectories are computed from continuous and non-overlapping segments of the signal. Because of space limitations, only some figures of the trajectories are selected in this work to present the evolution law of the friction noise signal. The phase trajectories of the friction noise are given in Fig. 8. During the running-in process, the trajectory of the friction noise begins to converge, although the volume of the phase trajectory is still very large. The radius of the trajectory is very large in the initial friction stage of 0-15 min, as shown in Fig. 8(a). In the 60-75 min stage (Fig. 8(b)), the curvature radius of the trajectory gradually converges towards a central point as the friction process continues. The attractor of the friction noise forms in this stage. In the 75-315 min stage, the trajectory of the friction noise converges to a smaller region, and the trajectory circles reciprocally. The curvature radius remains steady within a small range, as shown in Figs. 8(c)-8(g), and the attractor of the friction noise is stable. In the final process, the phase trajectory of the friction noise begins to diverge, and the curvature radius increases. Thus, the phase trajectory escapes from the space presented in Fig. 8(h), and the attractor of the friction noise disappears. The evolution of the phase-space trajectory of the friction noise can be described as "convergence-stability-divergence", which corresponds to the "forming-keeping-disappearing" pattern of the evolution of the friction noise attractor. This pattern also corresponds to the complete friction process of the system. The friction noise first increases, and then declines to a stable value in the 0-75 min stage; this stage is considered the running-in process. The friction noise remains steady in the 75-315 min stage, which is also known as the steady-state process. Finally, the friction noise increases rapidly in the final process. During the complete friction process, the friction noise attractor gradually forms, then remains stable for a long period of time, and finally disappears. The evolution of the attractor can be determined from the evolution of the phase trajectories.
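A sketch of the PCA projection used to draw the 3D trajectories of Fig. 8: the three leading eigenvectors of Y = X^T X define the projection directions, as described above.

```python
import numpy as np

def project_to_3d(X):
    """Project N x m delay vectors onto the three leading eigenvectors of Y = X^T X."""
    Y = X.T @ X                                   # m x m inner-product matrix
    eigvals, eigvecs = np.linalg.eigh(Y)          # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:3]         # indices of the three largest
    V = eigvecs[:, order]                         # m x 3 matrix [V1, V2, V3]
    return X @ V                                  # N x 3 coordinate matrix alpha

# Each row of the returned array is one point of the 3D phase trajectory;
# plotting consecutive rows reproduces trajectories like those in Fig. 8.
# alpha = project_to_3d(delay_embed(segment, m=25, tau=22))
```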
In the running-in process, the phase trajectory becomes convergent, and the attractor begins to form. During the steady-state process, the trajectory is maintained in a specific region of the space, corresponding to the stable stage of the attractor. Finally, the phase trajectory diverges and escapes from this region, which corresponds to the disappearance of the attractor. Thus, the phase trajectory evolution of the friction noise is highly consistent with the evolution of the attractor. The attractor may be considered a running-in attractor because it forms in the running-in process. Evolution of the correlation dimension The correlation dimension D is a type of fractal dimension and is sensitive to the time behavior of the system. To reflect the characteristics of a chaotic dynamic system, D is usually used to quantitatively describe the complexity of chaotic signals on a small scale [17]. In the present case, D can be utilized to characterize the complexity of the friction noise signals [17,27]. Coinciding with the time stages of the phase trajectory described in Section 4.2, the time series of the friction noise signal is divided into continuous and non-overlapping windows of 15 min (20,000 data points) each. Figure 9 illustrates the evolution of D of the friction noise in the five tests as calculated by the saturated correlation dimension method discussed in Section 4.1. In the attractor-forming process, D increases from a low value. In the attractor-keeping process, D remains stable. In the attractor-disappearing process, D declines. The evolution curve of D in the friction process corresponds to an "inverted bathtub curve", and can be generalized as "increasing-steadying-declining", consistent with the "forming-keeping-disappearing" formation process of the friction noise attractor and the "running-in, steady process, increasing friction" pattern of the complete wear process. Evolution of the Lyapunov exponent The basic characteristic of chaotic motion is that the system is extremely sensitive to the initial value. The trajectories generated by two initial values that are close to each other separate exponentially with increasing time. This phenomenon can be quantitatively described by the Lyapunov exponent. The Lyapunov exponent represents the average exponential rate of convergence or divergence between adjacent orbits in a phase space. When a Lyapunov exponent of a system is less than zero, the phase volume of the system is contractive in the corresponding direction. When the Lyapunov exponent is greater than zero, the phase volume is expansive and folding in that direction; the long-term behavior of such a system is unpredictable, and the system is therefore chaotic [28]. If a system has a chaotic attractor, it presents three features: (1) there is at least one positive Lyapunov exponent, (2) at least one of the exponents is zero, and (3) the sum of the exponent spectrum is negative. In practice, the computational burden of obtaining all Lyapunov exponents is very large when the dimension of the system is very high. Thus, determining the chaotic characteristics through the largest Lyapunov exponent is appropriate [28]. The exact Lyapunov exponents of a general time series cannot be obtained from a dynamic equation. The Wolf reconstruction method [28] is thus used to calculate the largest Lyapunov exponent of such a time series. The calculation process of this measure is as follows: The τ and m are respectively obtained by the autocorrelation function and the saturated correlation dimension methods.
According to Takens' theorem, the new time series Y(t_i) = (x(t_i), x(t_{i+τ}), …, x(t_{i+(m−1)τ})), (i = 1, 2, …, N) is obtained by reconstructing the phase space of the original time series. Assuming that Y(t_0) is the initial point in the phase space, Y_0(t_0) is the point nearest to Y(t_0), and L_0 is the distance between these two points, the time evolution of these two points is tracked from t_0 until the distance becomes larger than ε at t_1. Then, a point Y_1(t_1) near the point Y(t_1) is determined such that the distance L_1 between the two points is again smaller than ε. To ensure that the angle between L_1 and L′_1 is as small as possible, the process described above is repeated until the end of the time series Y(t). Assuming M is the iteration number, the largest Lyapunov exponent [29] is given as Eq. (19), λ_1 = (1/(t_M − t_0)) Σ_{k=1}^{M} ln(L′_k/L_{k−1}). Coinciding with the time stages of the phase trajectory in Section 4.2, the time series of the friction noise signal is divided into continuous and non-overlapping windows of 15 min (20,000 data points) each. The largest Lyapunov exponent of every stage is calculated by Eq. (19). Figure 10 illustrates the changes in λ_1 of the friction noise over the five tests. Because the λ_1 of each test is greater than zero, the friction noise signals are chaotic and the attractors of the friction noise signals can be considered chaotic attractors. Conclusions Experiments with different rotating speeds and loads were performed on a ring-on-disk tribometer, and the friction noise signals produced during the friction process were extracted by a PCB microphone. The phase trajectories were projected onto a 3D space by PCA, and D and the largest Lyapunov exponents were calculated based on phase-space reconstruction. The following conclusions were confirmed. (1) The chaotic attractor is an infinite set of points in phase space. The system state of chaotic motion always converges to a certain attractor in that phase space. Thus, the evolution of the phase trajectory can reflect the evolution of the chaotic attractor. The processes of convergence, stabilization and divergence characterize the evolution of the phase trajectories of the friction noise signals, and the evolution of these phase trajectories corresponds to the evolution of the attractors via a pattern called "forming-keeping-disappearing". The formation process of the attractor can characterize the chaotic behavior of the friction system. (2) D increases during the running-in process, remains relatively steady during the stable process, and then decreases during the friction increasing process. The change process of D conforms to the pattern of an "inverted bathtub curve". Changes in D are consistent with the evolution of the chaotic attractors and the phase trajectories. (3) The attractors of the friction noise signals are also called running-in attractors because they form during the running-in process. In addition, the attractors are chaotic, since positive largest Lyapunov exponents are obtained during the friction process. (4) Friction noise includes information that can be utilized to characterize the dynamic behavior of a friction system. Therefore, the largest Lyapunov exponent can describe the chaotic characteristics of the friction process, and the phase trajectory and D of the friction noise can describe the friction process and the evolution of the attractors. The results of this study help reveal changes in wear states.
As the evolutionary consistency of the chaotic characteristics of friction force and friction noise was not examined in this work, future research on the friction process will include this topic. PCA is a statistical process that utilizes an orthogonal transformation to transform a set of observations of possibly correlated variables into a set of values of linearly uncorrelated variables. It is one of the simplest methods based on true eigenvector multivariate analyses and can reveal the inner structure of data. For a multivariate data set in a high-dimensional space, PCA can provide a lower-dimensional picture or a projection of this object when observed from its most informative viewpoint by applying only the first few principal components so that the dimension of the transformed data is decreased. Mathematically, the definition of this transformation is a set of p-dimensional vectors of w (k) = (w 1 , ... , w p ) (k) that map each row vector x (i) of X to a new vector of principal component scores t (i) = (t 1 , ... , t m ) (i) , shown by where Σ represents the singular values of X, U represents the left singular vectors of X, and W represents the right singular vectors of X. In terms of this factorization, the matrix X T X is Truncation of a matrix M or T by SVD produces a truncated matrix that is the nearest possible matrix of rank L. Therefore, PCA can concentrate most of the signals into the first few principal components, which can be obtained by dimension reduction; later principal components may be affected by noise and disposed of without great loss of information. Open Access: The articles published in this journal are distributed under the terms of the Creative Commons Attribution 4.0 International License | https://mc03.manuscriptcentral.com/friction (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
2023-01-31T15:03:59.505Z
2017-05-29T00:00:00.000
{ "year": 2017, "sha1": "6b3b787786cdca023dda4e441a7e90529cbcd468", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40544-017-0161-y.pdf", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "6b3b787786cdca023dda4e441a7e90529cbcd468", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [] }
17874647
pes2o/s2orc
v3-fos-license
The Spectral Properties of (-)-Epigallocatechin 3-O-Gallate (EGCG) Fluorescence in Different Solvents: Dependence on Solvent Polarity (-)-Epigallocatechin 3-O-gallate (EGCG) a molecule found in green tea and known for a plethora of bioactive properties is an inhibitor of heat shock protein 90 (HSP90), a protein of interest as a target for cancer and neuroprotection. Determination of the spectral properties of EGCG fluorescence in environments similar to those of binding sites found in proteins provides an important tool to directly study protein-EGCG interactions. The goal of this study is to examine the spectral properties of EGCG fluorescence in an aqueous buffer (AB) at pH=7.0, acetonitrile (AN) (a polar aprotic solvent), dimethylsulfoxide (DMSO) (a polar aprotic solvent), and ethanol (EtOH) (a polar protic solvent). We demonstrate that EGCG is a highly fluorescent molecule when excited at approximately 275 nm with emission maxima between 350 and 400 nm depending on solvent. Another smaller excitation peak was found when EGCG is excited at approximately 235 nm with maximum emission between 340 and 400 nm. We found that the fluorescence intensity (FI) of EGCG in AB at pH=7.0 is significantly quenched, and that it is about 85 times higher in an aprotic solvent DMSO. The Stokes shifts of EGCG fluorescence were determined by solvent polarity. In addition, while the emission maxima of EGCG fluorescence in AB, DMSO, and EtOH follow the Lippert-Mataga equation, its fluorescence in AN points to non-specific solvent effects on EGCG fluorescence. We conclude that significant solvent-dependent changes in both fluorescence intensity and fluorescence emission shifts can be effectively used to distinguish EGCG in aqueous solutions from EGCG in environments of different polarity, and, thus, can be used to study specific EGCG binding to protein binding sites where the environment is often different from aqueous in terms of polarity. Introduction EGCG ( Figure 1A), a major catechin in green tea, exhibits antioxidant [1,2], antimutagenic [3], anticancer [4][5][6], antiallergic [7,8], and antiatherosclerotic [9,10] properties. EGCG is carried by serum albumin [11] and has been identified as a novel inhibitor of heat shock protein 90 (HSP90) [12], a cytoplasmic chaperone protein, which has recently received much attention as a drug target for treatment of cancer [13,14]. As a chaperone protein, it stabilizes and maintains many client proteins and assists with normal protein folding and trafficking. These functions are essential in cell division and are being widely studied as a target for treatment of cancer [15]. To facilitate studies of the interaction of HSP90 with EGCG and analogs a direct binding assay would be useful and a frequently used very sensitive approach involves fluorescence spectroscopy. It is significantly more efficient to study binding of a ligand to a protein if the ligand is fluorescent using fluorescence polarization [16]. When excited at λ Ex =280 nm, catechin ( Figure 1B), one portion of EGCG, has two fluorescence emission maxima, one peak at 314 nm and another peak ranging from 446 nm to 470 nm [17]. Another fragment of EGCG, gallic acid ( Figure 1C), when excited at λ Ex =280 nm, has one fluorescence emission maximum ranging from 335 nm to 362 nm depending on the solvent [18]. Since these two fragments of EGCG are fluorescent, it is reasonable to hypothesize that EGCG is also fluorescent, and its fluorescence depends on the solvent. 
However, this hypothesis requires experimental verification since the two fragments when combined can quench each other's fluorescence. We hypothesized that EGCG is fluorescent when excited at approximately λ Ex =280 nm and that its emission maxima are dependent on solvent. The EGCG fluorescence at the maximum of fluorescence excitation Ex max =331 nm/maximum of fluorescence emission Em max =455 nm or 550 nm was previously reported in a mixture of AN and aqueous solution significantly different from the cytoplasmic environment but solvent effects were not characterized [19]. In our study, however, it was found that EGCG fluoresces when excited at much shorter wavelengths. Here we report EGCG fluorescence in four solvents, 1) EtOH, a protic solvent, 2) AB at pH=7.0, as a model for the aqueous cytoplasmic environment, 3) DMSO, an aprotic polar solvent widely used for solubilization of waterinsoluble organic compounds in biomedical research, and 4) AN, an aprotic solvent widely used for liquid chromatography characterization of organic molecules. The rationale for this choice of solvents is that binding EGCG to a protein such as HSP90 [12] or to serum albumin [11] is likely to result in transition of EGCG from a predominantly aqueous environment to a less polar milieu which may result in dramatic changes in fluorescence [20]. Being able to distinguish EGCG in these environments would provide an important tool for studying EGCG binding to proteins and offer the possibility of a direct binding assay using a target protein. Fluorescence of EGCG was measured at 10 µM with Hitachi F-7000 spectrofluorometer (Hitachi High-Technologies Co.) in a 1 cm quartz cell thermostated at 20°C. The slit width was 5 nm, PMT voltage 700 V, scan speed 240 nm/min. The absorbance and fluorescence spectra were exported to and plotted with Origin 9 software (OriginLab Co). EGCG was prepared as a 30 mM stock in DMSO, aliquoted with an Eppendorf Repeater® plus at 2 µL to avoid decomposition due to the freeze-thaw cycling and stored at -20°C. Before experiments, 4 µL of DMSO were added to a vial to bring EGCG concentration to 10 mM before 1/1000 dilution to the experimental concentration of 10 µM. Spectral characteristics were measured at 10 µM in AN, an aprotic solvent, and in AB containing KCl (150 mM), HEPES (10 mM) intended to mimic the aqueous cellular environment and at pH=7.0. We chose to measure fluorescence in a pH-buffered AB at a neutral pH=7.0 rather than distilled deionized water because ambient CO 2 makes pH uncertain that may affect fluorescence and EGCG stability. We started initial measurements in DMSO, an aprotic solvent and included AN since DMSO has a cut-off at 265 nm (determined as absorbance of 1.00 in a 1 cm cell vs. water) rendering measurements of absorbance at shorter wavelengths impossible. Spectral measurements in DMSO were done in 100% DMSO, while solutions in AN and AB contained 0.1% DMSO. Therefore, 0.1% DMSO was added to the AN and AB blanks. It is noteworthy that even 0.1% DMSO can present a problem for UV absorbance and fluorescence measurements because the cut-off of 0.1% DMSO in EtOH is 229 nm, and in AB is 223 nm (not shown). All chemicals were purchased from Sigma-Aldrich Co. The purity and structure of EGCG were confirmed by NMR and LCMS. 18.2 MΩ water (Milli-Q, Millipore) was used for all experiments. 
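A small check of the stock-dilution arithmetic described above, using the simple dilution relation C1·V1 = C2·V2; the values are those stated in the Methods.

```python
# Verifies the EGCG working-solution preparation: 2 µL of 30 mM stock plus
# 4 µL DMSO gives 10 mM, and a 1/1000 dilution gives the 10 µM experimental
# concentration.

def diluted_conc(c_stock_mM, v_stock_uL, v_added_uL):
    """Concentration (mM) after adding pure solvent to a stock aliquot."""
    return c_stock_mM * v_stock_uL / (v_stock_uL + v_added_uL)

working_mM = diluted_conc(c_stock_mM=30.0, v_stock_uL=2.0, v_added_uL=4.0)
final_uM = (working_mM / 1000.0) * 1000.0   # 1/1000 dilution, then mM -> µM
print(working_mM, final_uM)                 # 10.0 mM working solution, 10.0 µM final
```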
DMSO The absorbance of 10 µM EGCG in DMSO at UV_max = 280 nm was 0.092±0.008 (n=6) and 0.121±0.009 (n=4), and at UV_max = 279 nm it was 0.095±0.008 (n=4), in three independent experiments (Figure 2A, dotted line; Table 1). When excited at UV_max = 280 nm, the intensity of fluorescence was 5,137 au at Em_max = 353 nm (Figure 2B; Table 1). An additional 3D fluorescence scan with excitation from λ=220 to 300 nm and emission from λ=320 to 400 nm revealed a peak at Ex_max = 276 nm/Em_max = 351 nm with FI = 5,599 au. A smaller peak at λ_Ex ≈ 235-237 nm was detected with the 3D scans in both EtOH and AB, but this smaller peak, if it existed, could not be detected in DMSO due to the cut-off at 265 nm (see Methods). AN The inability to detect EGCG fluorescence in DMSO at λ_Ex ≈ 235-237 nm due to the cut-off effects of DMSO (an aprotic solvent), and the need for an aprotic solvent to contrast with protic solvents such as AB and EtOH, made us use AN, a non-hydrogen-bonding polar solvent with a cut-off of 190 nm, similar to water (191 nm). The absorbance of 10 µM EGCG in AN at UV_max = 231 nm was 1.712±0.004 (n=4) (Figure 2A, Table 1). For comparison with the other solvents used, the emission spectrum of EGCG in AN was measured at λ_Ex = 275 nm (Figure 2B), and two distinct emission peaks were detected, the larger peak being at λ ≈ 346 nm and the smaller and wider peak at approximately 400 nm. A 3D fluorescence scan with excitation from λ=200 to 290 nm and emission from λ=300 to 350 nm detected a peak at Ex_max = 272 nm/Em_max = 343 nm with FI = 3,021 au. Interestingly, this peak dominated over another small peak with Ex_max ≈ 231 nm and Em_max ≈ 344 nm with FI ≈ 400 au, which could be separated only computationally (Table 1). Two excitation maxima of EGCG Two excitation maxima of EGCG fluorescence were found in AB, EtOH, and AN. One smaller peak can be observed when EGCG is excited at approximately 235 nm, with Em_max at 396 nm in AB, ~344 nm in AN, and 373 nm in EtOH (Table 1). In DMSO, the smaller peak cannot be distinguished due to the high absorbance of DMSO at wavelengths shorter than 265 nm. Another single peak of much higher emission intensity compared to the smaller peak for each given solvent was found when EGCG is excited between 275 and 280 nm, with Em_max between 350 and 390 nm in AB, EtOH, and DMSO (Table 1, Figure 2B). The distinct maxima of the fluorescence excitation in all solvents tested point to two distinct dipoles in EGCG and are important for further characterization of the UV spectra of EGCG and its derivatives. In the following discussion, we pay more attention to the larger peaks because 1) their higher fluorescence intensities are more practical for EGCG-protein binding studies, and 2) the shorter excitation wavelengths of the smaller peaks are impractical due to the cut-off properties of many organic solvents, as DMSO shows in our case. Emission maxima of EGCG depend on solvent polarity The emission peak in AB is shifted to a longer wavelength compared to EtOH (Table 1, Figure 2B). This corresponds with the increased polarity of water compared to EtOH (orientation polarizability ∆f = 0.320 and 0.298, respectively [20,21], Table 2). The intensity of the fluorescence in AB was significantly quenched compared to EtOH (Table 1, Figure 2B). We did not elucidate the exact mechanism of this quenching. DMSO has a smaller orientation polarizability, ∆f = 0.263 [20,21], compared to AB and EtOH (Table 2) and a smaller Stokes shift (Table 1, Figure 2B). This is in good agreement with the relation between solvent polarity and the fluorescence emission shift [20].
The fact that the fluorescence intensity in EtOH, a protic solvent, is only approximately one fourth of that in DMSO, an aprotic solvent, may argue in favor of an H+-dependent quenching of the fluorescence in AB. In AN, excitation at 275 nm resulted in two distinct emission maxima (Table 1, Figure 2B), suggesting non-specific solvent effects on EGCG fluorescence. Together with the fact that EGCG fluorescence in AN does not follow the Lippert equation (Figure 3), this indicates that at least two electronically distinct species may be formed due to the interaction of EGCG and AN. Figure 3 (caption): The Stokes shift (ν_A − ν_F)×10^-3 (cm^-1) was plotted against the orientation polarizability Δf for the different solvents from Table 1 according to the following calculation: (ν_A − ν_F)×10^-3 (cm^-1) = 10^4/λ_Ex(nm) − 10^4/λ_Em,max(nm); see Table 2 and Discussion for explanations. Stokes shifts of the larger fluorescence peaks in AB, EtOH, and DMSO follow the Lippert-Mataga equation We found that the Stokes shifts of EGCG fluorescence depend on solvent polarity (Table 2, Figure 3). The Stokes shifts of the larger fluorescence peaks in AB, EtOH, and DMSO (but not AN) follow the Lippert-Mataga relation, since they fall on an almost perfectly straight line with R^2 = 0.95 (Figure 3, open circles). If EGCG fluorescence in AN followed the Lippert-Mataga equation [20], a single emission maximum would have been found between 365 nm (Em_max in EtOH) and 388 nm (Em_max in AB), because the orientation polarizability ∆f of AN (0.304) is between those of EtOH (0.298) and AB (0.320) [20,21] (Table 1). Interestingly, the Stokes shifts for the two larger peaks in AN lie above and below the best linear fit for AB, EtOH, and DMSO at approximately the same distance. Additional theoretical and experimental investigations are necessary to determine whether this observation is coincidental or reflects an underlying regularity. The anomalous EGCG fluorescence in AN, taken together with the fact that the UV_max of EGCG in AN follows a different pattern than the UV_max of EGCG in AB, EtOH, and DMSO (Table 1, Figure 1A), points to non-specific AN effects on EGCG fluorescence. Thus, the previously reported EGCG fluorescence at Ex_max = 331 nm/Em_max = 455 and 550 nm in a mixture of AN and an aqueous solution of uncertain pH [19] is difficult to interpret. We demonstrated that EGCG is a fluorescent molecule and, importantly, that its fluorescence is significantly dependent on the polarity of the solvent. Interaction of EGCG with a binding pocket of a protein is likely to transfer EGCG from an aqueous environment to one of different polarity, which is expected to significantly change the fluorescence intensity and shift the emission maxima. We suggest that both the changes in fluorescence intensity and the fluorescence emission shifts can be used to study the interaction of EGCG with HSP90 or other proteins. In addition, the high EGCG fluorescence is useful for studies of binding to proteins with the fluorescence anisotropy approach [16].
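As a hedged illustration of the Figure 3 analysis, the sketch below converts excitation/emission maxima to Stokes shifts with the wavenumber relation from the caption and fits the AB/EtOH/DMSO points linearly against the Δf values quoted in the text; the excitation/emission pairs are illustrative stand-ins consistent with the maxima discussed above, not a reproduction of Table 1.

```python
import numpy as np

def stokes_shift_kcm(ex_nm, em_nm):
    """Stokes shift (nu_A - nu_F) in 10^3 cm^-1 from excitation/emission maxima in nm."""
    return 1e4 / ex_nm - 1e4 / em_nm

# Orientation polarizabilities as quoted in the text; Ex/Em pairs are assumed
# stand-ins for the tabulated larger-peak maxima.
solvents = {
    #        delta_f, Ex (nm), Em (nm)
    "DMSO": (0.263,   276,     351),
    "EtOH": (0.298,   275,     365),
    "AB":   (0.320,   275,     388),
}

df = np.array([v[0] for v in solvents.values()])
shift = np.array([stokes_shift_kcm(v[1], v[2]) for v in solvents.values()])
slope, intercept = np.polyfit(df, shift, 1)   # Lippert-Mataga line for AB, EtOH, DMSO
print(slope, intercept)
```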
2016-05-12T22:15:10.714Z
2013-11-22T00:00:00.000
{ "year": 2013, "sha1": "f5d8476fa8d4b8f52d1b84da92535b3e1cbcb013", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0079834&type=printable", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f5d8476fa8d4b8f52d1b84da92535b3e1cbcb013", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
242061881
pes2o/s2orc
v3-fos-license
Carbon Particle-Doped Polymer Layers on Metals as Chemically and Mechanically Resistant Composite Electrodes for Hot Electron Electrochemistry This paper presents a simple and inexpensive method to fabricate chemically and mechanically resistant hot electron-emit-ting composite electrodes on reusable substrates. In this study, the hot electron emitting composite electrodes were manufactured by doping a polymer, nylon 6,6, with few different brands of carbon particles (graphite, carbon black) and by coating metal substrates with the aforementioned composite ink layers with different carbon-polymer mass fractions. The optimal mass fractions in these composite layers allowed to fabricate composite electrodes that can inject hot electrons into aqueous electrolyte solutions and clearly generate hot electron-induced electrochemiluminescence (HECL). An aromatic terbium (III) chelate was used as a probe that is known not to be excited on the basis of traditional electrochemistry but to be efficiently electrically excited in the presence of hydrated electrons and during injection of hot electrons into aqueous solution. Thus, the presence of hot, pre-hydrated or hydrated electrons at the close vicinity of the composite electrode surface were monitored by HECL. The study shows that the extreme pH conditions could not damage the present composite electrodes. These low-cost, simplified and robust composite electrodes thus demonstrate that they can be used in HECL bioaffinity assays and other applications of hot electron electrochemistry. Introduction Hot electron electrochemistry, which provides tools to work beyond the electrochemical window of water restricting the traditional electrochemistry aqueous solutions, has been explored already for quite a long time.The principles of utilizing hot and hydrated electrons in analytical applications have been earlier studied by using thin insulating filmcoated electrodes [1][2][3][4][5].Semiconductor electrodes and thin-insulating oxide-coated electrodes are particularly attractive in this field because the electrons and holes can remain separated in energy in the conduction and valence band, respectively.The unusual ability to directly inject/emit electrons into solutions forming solvated electrons is the key point in the properties of these relatively new electrode materials.Solvated electrons, and hydrated electrons in water, are the chemist's perfect reducing agent in many ways.Recently, we made efforts to develop low-cost replacements [6,7] for chemically quite non-resistant oxide-coated aluminium electrodes [8][9][10] and typically a bit too expensive oxide-coated silicon electrodes [7,8] for hot electron injection into fully aqueous electrolyte solutions [1][2][3][4][5]. We have earlier shown that by using thin-film manufacturing technologies quite sophisticated electrode chips can be made on glass chips e.g. 
by utilizing aluminium sputtering and atomic layer deposition of alumina, or from silicon chips [11,13,14]. Such electrodes can be used in a disposable manner, e.g. for bioaffinity assays that are important in real-world point-of-care testing [12,15,16]. In these assays, the lowest determination limits are typically obtained by using aromatic Tb(III) chelates as labels; however, many organic luminophores [5,[17][18][19] or Tris(bipyridine)ruthenium(II) [Ru(bpy)3 2+]-type labels [20,21] can also be used when lower assay sensitivity is sufficient. These labels are typically excited with sequential one-electron reduction and oxidation steps, either by red-ox or ox-red routes, depending on (1) the redox properties of the luminophores or the ligands of the complexes, and (2) the stability of the luminophore or ligand radicals in the aqueous solution [4,18,19,22,23].

We have previously shown that hydrated electrons can be obtained by hot electron injection into aqueous solutions. These conclusions were based on measurements using various hydrated electron scavengers with known reaction rate constants obtained from pulse radiolysis studies. The hot and hydrated electrons allow difficult one-electron reduction reactions to be carried out in aqueous solutions that are usually not obtainable using traditional electrochemistry and active electrodes. These highly reducing intermediates also enable efficient production of strongly oxidizing radicals by one-electron reduction from precursors such as hydrogen peroxide (hydroxyl radical), peroxydiphosphate (phosphate radicals) and peroxydisulfate (sulfate radicals) [1,17,23]. Thus, strongly reducing but simultaneously also strongly oxidizing conditions can be created by the hot and hydrated electrons.

Our group has utilized hot electron injection into aqueous solution only in generating hot electron-induced electrochemiluminescence (HECL) of our labels for bioaffinity assays, but there are many other application areas for hot electron electrochemistry. Solvated electrons can be utilized in organic chemistry [24] and in inorganic chemistry [25,26], and e.g. in disinfection of potable water and treatment of waste waters [27][28][29]. The most common methods for generation of hydrated electrons have so far been (1) high-energy irradiation of water (either high-energy electrons or photons) [25], (2) photoemission of electrons from electrodes [30], (3) photoionization and photodetachment of solutes [31][32][33], and (4) dissolution of wide-band-gap inorganic crystals containing trapped electrons [34].
Our recent developments, composite electrodes, have so far been made by using polymer materials such as polystyrene [6,7] and ethyl cellulose [35] as a matrix that can easily be dissolved in common organic solvents and can then immediately be doped with suitable conducting particles by simply mixing and sonicating with ultrasound. The doped polymer is finally spin-coated upon a conductive substrate such as a metal or a strongly doped semiconductor disc. Thus, the final structure of a C/CPDP (conductor/conducting particle-doped polymer) composite electrode has been created [6,7,35]. In these composite electrodes, the final electron injection into solution typically occurs through an ultrathin polymer layer naturally formed on top of the conducting particles during the manufacturing process [7]. However, it is possible that those conducting particles in direct contact with the electrolyte on the surface may in addition inject hot electrons by field emission into the electrolyte solution, either as such or through a hydrogen gas barrier generated by hydrogen evolution [36].

This time we made efforts to manufacture composite electrodes in a very simple way that could produce composite electrodes usable also in many organic solvents, and in mixtures of aqueous electrolyte solutions and water-miscible solvents, to inject hot electrons into these solvents or solvent mixture solutions. Nylon 6,6 was chosen as the polymer since it is known to have high mechanical strength, rigidity, good stability under heat and high chemical resistance [37]. Nylon 6,6 is one of the most commonly used polyamides in industry; in 1938 it was first used in toothbrush filament production [38]. It is made from two monomers, hexamethylenediamine and adipic acid. As the melting point is related to the degree of hydrogen bonding between the chains, nylon 6,6 has a sharp melting point of 264 °C due to the density of amide groups leading to hydrogen bond formation. In addition, the symmetrical structure of the even-even monomers of nylon 6,6 and its amide groups allow the hydrogen bonds to be formed in any direction that the chains are facing, piling up on top of each other. This phenomenon leads to a faster crystallization rate and a favorable processing window. Though nylon characteristically has the ability to absorb a significant amount of water in general [37], the higher crystallinity of nylon 6,6 also lowers its moisture absorption, and thus affects its modulus and tensile strength.

Nylon 6,6 also shows much higher fatigue resistance, advantageous abrasion resistance and coefficient of friction. Usually, the volume resistivities of dry nylon are between 10^14 and 10^15 Ω·cm. Dry nylon 6,6 has a dielectric strength of 24 kV/mm (short time) and 11 kV/mm (step-by-step) [37]. In terms of chemical resistance, nylons have proven to be an excellent material, as polyamides are known to be particularly resistant towards nonpolar materials, e.g. hydrocarbons. However, strong acids and phenols can disrupt the hydrogen bonding and can even dissolve nylons; nylon 6,6 can be dissolved e.g. in ethylene glycol above 160 °C. Nylon 6,6 is industrially synthesized, and it is currently an easily obtainable low-cost material for the present applications.
In the present study, nylon 6,6 was first dissolved in hot formic acid and then doped either with graphite particles or carbon black particles. We used two types of graphite particles and two types of carbon black particles and first examined the optimal composition of each of the DP (doped polymer) layers of the C/DP electrodes. The DP mixture was then dispensed on a round metal substrate and very simply cured on a hot plate at a suitable temperature. Finally, we studied the analytical performance of the present composite electrodes for detecting a Tb(III) chelate and the Ru(bpy)3 2+ chelate on the basis of their cathodic HECL. Our aromatic Tb(III) chelates cannot be excited on the basis of traditional electrochemistry at active metal electrodes due to the insufficient electrochemical window available because of the hydrogen and oxygen evolution reactions, but they show strong chemiluminescence in the presence of hydrated electrons and highly oxidizing radicals [3,4]. The excitation of the Ru(bpy)3 2+ chelate by hydrated electrons and oxidizing radicals [39], as well as by hot electron injection into aqueous solution, has been studied in detail earlier [23].

Chemicals and reagents

Nylon 6,6 pellets from Sigma-Aldrich were used as the matrix polymer of the composite ink layers, and four types of individual conducting particles were studied (Table 1). The solvent for making the composite material, i.e. the ink, was 100% formic acid (Fisher Scientific). The composite material mixtures were made by using a Cole-Parmer ultrasonic homogenizer.

Fabrication of electrodes

Round brass substrates with a diameter of 12 mm were made from technical brass sheet (soft brass, 0.002 thick, 44 gauge, 12" × 30") purchased from K&S Precision Metals. Then masks made of Teflon adhesives (Irpola Oy, Turku) were added on top of the substrates, revealing a round area of 0.64 cm² in the middle of the substrate for dispensing the composite ink. Each of the composite inks was then dispensed in the wells surrounded by Teflon, and the electrode was dried on a hot plate at a temperature of 90 °C for two hours (Fig. 1a). The electrodes were allowed to cool before use. After use, these composite inks can be easily washed off or removed using a common organic solvent, and the substrates can then be reused for fabricating new composite electrodes. In this study, oxide-covered aluminium discs cut from 99.9% pure aluminium (Merck art. 1057, batch 721 k4164557) were used as electrodes for comparison with the composite electrodes.
The most significant part of the electrode fabrication process is the preparation of the composite ink. In order to prepare the composite inks, carbon particles and nylon 6,6 were weighed into vials and kept at 90 °C for one week together with concentrated formic acid. Prior to dispensing, each composite ink variant, containing different mass ratios of dry matter and solvent, was sonicated for 10 minutes (amplitude 40%, 45 s on-off cycle; Cole-Parmer ultrasonic homogenizer). The dry matter consisted of nylon 6,6 and carbon particles, where the weight of nylon 6,6 was always 80% of the total dry matter. Different mass ratios of dry matter (50, 75, 100 and 150 mg/mL of solvent) were tested in order to find the optimum consistency of the ink, based on the formulated ink's viscosity and its ability to inject hot electrons. Finally, the total dry material concentration was selected to be 75 mg/mL in the study experiments. Interestingly, the replacement of the carbon particles by silver nanopowder with a particle size of ~100 nm (Sigma-Aldrich) produced composite electrodes that could not produce a measurable amount of HECL.

The measuring cell was composed of two parts that could be screwed together. The lower part provided electrical contact to the brass substrate discs, and the upper part provided a Teflon vial for dispensing the electrolyte solution onto the working/composite electrode area, and a platinum wire as a counter electrode (Fig. 1b). The electrolyte solution volume in the measurements was always 150 μL.

Measuring instruments and measurement procedure

The instrument setup consisted of an in-laboratory built coulostatic pulse generator, a photomultiplier tube module (Perkin Elmer MH1993 1364-H-064) for optical detection, a photon counter (Stanford Research Systems SR-400), a preamplifier (Stanford Research Systems SR-445), a Nucleus Inc MCS-II multiscaler card and two PC units. For DC excitation, a laboratory DC voltage source was used instead of the coulostatic pulse generator. The sheet resistivity of the different composite electrodes was measured with a Jandel RM3000 test unit equipped with a cylindrical four-point probe head to avoid contact resistance.

As mentioned in section 2.1, all the measurements were conducted in 0.20 M borate buffer containing 0.10 M sodium sulfate as the supporting electrolyte, using a solution volume of 150 μL. The HECL measurements began with measuring a reference value using 99.9% pure aluminium discs, where the cathodic excitation pulses were generated using the pulse generator with constant charge voltage pulses of −41.8 V delivered with a pulse charge of 31.5 μC and a pulse frequency of 50 Hz. Simultaneously, optical detection was performed using the aforementioned photomultiplier, assisted by the amplifier, through an interference filter (i.e. for the electrochemiluminescent labels Tb(III), fluorescein, ...).

Results and Discussion

3.1 The effect of the weight fraction of conducting particles

When the weight fraction of conductive particles of the total solids of the composite coatings was studied, it was observed that coatings with a weight fraction higher than 40% became too brittle to be usable in the case of carbon black particles, but graphite particles allowed the experiments up to 70% of carbon particles (Fig. 3).
It was observed that when the starting point was 10% conductive particles of the total mass of the compositions, all the electrodes worked for the purpose, but an HECL intensity about two orders of magnitude higher could be obtained with graphite particles at a much higher weight fraction of the conductive particles. In the case of graphite 1, the optimum was at about 60% of graphite, and with graphite 2, which consists of a smaller particle size, the optimum weight fraction was already at about 45% of graphite of the total solids. However, with smaller carbon particle sizes an HECL intensity a couple of orders of magnitude higher was obtained. Carbon black 2, having the smallest particle size, i.e. 20 nm (Table 1), appeared to be the best conductive particle source for these composite films upon the metal electrodes as a whole. However, both carbon black 1 and carbon black 2 showed the optimal weight fraction of carbon particles to be at about 30%, where they gave the same intensity (photon count) as the best result of all.

Measurement of sheet resistance in composite films

Sheet resistance measurements of the composite films were carried out using a 4-point probe head on composite films fabricated on top of a non-conductive polyester film. The sheet resistance decreased as a function of increasing weight fraction of carbon, as expected. The best performance in electrical conductivity was obtained with compositions having a sheet resistivity of about 100 Ω/□ (Fig. 4). The composites containing more than 50% of carbon black cracked upon drying and could not be used in these measurements.

There are three important areas for current transport in the composite film: first, charge transfer from the metal substrate to the carbon particles at the metal-composite interface; secondly, electron transport through the composite film via resonance tunneling between the carbon particles in the film; and finally, electron transfer or emission into the electrolyte solution at the composite film-electrolyte interface. From Fig. 4, in terms of sheet resistance, the overall performance of the carbon black 1 composite film seems to be the best compared to the other three composite films.

Characterization of the composite electrodes

The composite electrodes were characterized by different techniques such as scanning electron microscopy (SEM, Zeiss Supra 40) for surface topology and composition imaging, 2D stylus profilometry (Bruker Dektak XTL) and 3D optical profiling (Filmetrics Profilm 3D / Model: 205-0792) for analyzing the surface textures, i.e. surface roughness and height variations. In Fig. 5, the surface characterization results of the carbon black 1 composite electrode are presented, where the composite ink contains 70% nylon 6,6 and 30% carbon black 1 particles. The 2D step height of the nylon electrodes from 6 measurements is on average 23.66 ± 1.20 μm. As shown in Fig. 5 and Fig. 6, the surface of the electrode in different locations and at different magnifications seems to be quite homogeneous, with very small variation. Based on the characterization analysis of the nylon composite electrodes, it can be said that despite a slightly uneven area at the micro level, the electrode is well suited for the HECL experiments, as proven in Fig. 3 and Fig. 4 and also in the following sections.
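The sheet resistances quoted above were read directly from the Jandel four-point-probe unit; as a rough consistency check one can use the standard thin-film relation for a collinear four-point probe, R_s = (π/ln 2)·V/I ≈ 4.532·V/I, which holds when the film is much thinner than the probe spacing and edge effects are negligible. The current and voltage numbers below are made-up illustrations, not measured values from this study.

```python
import math

def sheet_resistance(voltage_v, current_a):
    """Sheet resistance (ohm per square) for a collinear 4-point probe on a thin film."""
    return (math.pi / math.log(2)) * voltage_v / current_a

# e.g. 1 mA forced through the outer probes and 22 mV measured across the inner pair
print(f"R_s = {sheet_resistance(0.022, 1e-3):.0f} ohm/sq")  # ~100 ohm/sq, the order reported in Fig. 4
```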
Blank emission from composite electrodes and the effect of peroxydisulfate as a co-reactant

When hot electron injection into aqueous electrolyte solution is used to excite luminescent labels in bioaffinity assays, it is often beneficial to add peroxydisulfate (S2O8 2−) as a co-reactant [1][2][3][4][5][23]. Peroxydisulfate reacts at a near diffusion-controlled rate with hydrated electrons, producing the highly oxidizing sulfate radical [25]:

S2O8 2− + e−(aq) → SO4 2− + SO4 •−

The reduction potential of the sulfate radical has been reported to be as high as 3.4 V vs. the standard hydrogen electrode (SHE) [25]. Thus, when both hydrated electrons and sulfate radicals are present, highly reducing and oxidizing conditions are simultaneously created.

The blank emission in the buffer solution was measured using the composite electrodes with maximum emission with Tb(III)-L from Fig. 3. Ten replicates were measured for each of the compositions, without any wavelength discrimination and by using a 545-nm interference filter (Fig. 7). The blank emission is based either on high-field solid-state electrochemiluminescence in the insulating nylon 6,6 layer or on electrogenerated chemiluminescence at the solid-electrolyte interface. Carbon black 2 as the conducting dopant particle produced less blank emission than carbon black 1 when no wavelength discrimination was utilized. However, in the case of the composite electrode containing 30 weight % carbon black 1 and 70% nylon 6,6, the blank emission seemed to differ from the presently interesting 545 nm more than in the case of carbon black 2, which allowed the most efficient HECL generation from our Tb(III) chelate beacon. For the sake of comparison, oxide-coated aluminium (0.3 mm thickness; 30 mm width) electrodes were also used in the measurements. Fig. 8 displays that oxide-covered 99.9% aluminium (Merck) shows much stronger background emission during cathodic pulses than any of the presently fabricated composite ink-coated electrodes. Thus, if the present electrodes were used for the detection of organic luminophores displaying short-lived emission, the present aluminium brand electrodes would have much poorer performance than composites containing the optimal amount of carbon black particles.

The durability of the electrodes

3.5.1 Stability of composite electrodes during pulse polarization experiments

The TR-HECL intensity was followed for 10000 excitation cycles with all the composite electrode types; during this time no destruction of the electrodes was observed and the TR-HECL remained at a practically constant level. Fig. 9 displays TR-HECL as a function of only the first 4000 excitation pulses for visualization purposes. From Fig. 9, it can be seen that the performance of the carbon black 1 composite electrode in terms of HECL intensity (photon counts) is the best during cathodic pulse polarization experiments. The durability of the graphite 1 and graphite 2 composite electrodes is quite similar. On the other hand, the carbon black 2 composite electrode started to crack.
The effect of pH on TR-HECL in composite electrodes

The pH of the buffer solutions was adjusted either with sulfuric acid or sodium hydroxide, and the HECL intensity was measured as a function of the pH of the electrolyte solution. All our multidentate aromatic Tb(III) chelates are decomposed at low pH due to the protonation of the chelating side arms (oxygen and nitrogen in the side arms). Thus, the decrease of TR-HECL at low pH is due to two reasons: first, decomposition of the chelate, and secondly, conversion of hydrated electrons to atomic hydrogen, which is not a sufficiently good one-electron reductant for the excitation reactions [4]. On the other hand, at high pH the formation of hydroxo complexes starts to decompose the chelate [4]. When the same electrodes used at a certain pH were measured again in Tb(III)-L solution at pH 9.2 (borate buffer), each electrode exhibited the same TR-HECL intensity, within the reproducibility margins, as the electrodes not exposed to the more acidic/alkaline conditions. This indicates that the electrodes tolerated the use of both very low and very high pH solutions without destruction of the composite ink coating (Fig. 10).

In addition, the effect of pH was also studied by using fluorescein and Ru(bpy)3 2+ luminophores, as depicted in Fig. 11 and Fig. 12, respectively. In principle, both red-ox and ox-red excitation routes are possible for fluorescein on thermodynamic grounds, but we have earlier found evidence that the ox-red route is again the predominant excitation pathway for this luminophore [19]. The results with fluorescein clearly showed that the present composite electrodes work well in highly basic solutions with luminophores able to emit HECL under highly alkaline conditions.

In the case of Ru(bpy)3 2+, the measurement with the same electrode was again repeated at pH 9.2 (borate buffer) after measurement at a certain specific pH (Fig. 12). Ru(bpy)3 2+ is excited almost solely by the ox-red mechanism since its one-electron reduced form is very unstable in aqueous solution and disintegrates. On the other hand, the one-electron oxidized form of the chelate has a long lifetime in aqueous solution and therefore provides a good steppingstone for the excitation reaction by hot/hydrated electrons [23]. Thus, all the molecular probes/beacons presently applied in the absence of peroxydisulfate ions are first one-electron oxidized by hydroxyl radicals generated from dissolved molecular oxygen, followed by the excitation step by a hot/hydrated electron [5].

The metal-to-ligand excited triplet state is always the emitting species in the case of Ru(bpy)3 2+, regardless of whether the excited state first formed in the electron transfer excitation step is the metal-to-ligand excited singlet state (1MLCT*) or the metal-to-ligand charge transfer excited triplet state (3MLCT*). If the 1MLCT* is initially formed, intersystem crossing occurs and 3MLCT* is obtained, which finally emits light [23,39]. Unfortunately, the luminescence lifetime of the 3MLCT* emission is too short to be utilized for TR-HECL measurements due to the time constants of the present instrumentation and the present electrolytic cells.

The Ru(bpy)3 2+ measurements also showed that the present composite electrodes tolerated use in both highly acidic and basic conditions. The repeated measurements in normal buffer solution (pH 9.2) following the pH treatment/exposure showed that the composite electrodes were still usable after use at extreme conditions.
Analytical applicability of the nylon 6,6 composite electrodes

The composite film containing 30% of carbon black 2 exhibited the highest TR-HECL intensity (Fig. 3). Thus, the analytical applicability of nylon 6,6 composite electrodes doped with carbon black 2 particles was studied by measuring TR-HECL calibration curves of Tb(III)-L, both in the absence and in the presence of peroxydisulfate ions. Linear calibration plots spanning several orders of magnitude of Tb(III)-L concentration were obtained (Fig. 13). Thus, the electrodes can be used e.g. in bioaffinity assays utilizing Tb(III) chelate labels.

Electron transfer through the composite film

We assume that current transport through the composite layers has three steps: (1) current injection from the metal to the composite layer, (2) current transport in the composite layer by resonance tunneling, and (3) current injection into the electrolyte solution (Fig. 14). The electron injection into the composite film can be based on the direct contact of carbon particles with the base metal, but part of the current is in all likelihood produced by tunneling mechanism variants to the particles inside the polymer matrix but in close proximity to the metal interface. Inside the composite film the electrons are transported via carbon particles mainly by resonance tunneling [40,41] and field-assisted direct tunneling [42]; finally, electrons are injected into the aqueous electrolyte solution by field-assisted direct tunneling or field emission [43] from the carbon particles located at the surface of the composite films. This step is still under investigation in our lab, but for many practical purposes it is not important whether the hot electron injection is based on direct field-assisted tunnel emission or field emission from the composite electrode.

A usable cell for HECL measurements can be achieved simply by dispensing a suitable volume of the studied solution inside the hydrophobic ring (Fig. 14) and putting e.g. an ITO-glass anode, or alternatively a plastic sheet coated with carbon nanotubes as an anode, on top of the hydrophobic ring. The ring acts as a spacer that defines the volume of the cell by its thickness together with the circular area inside the hydrophobic ring. In this way the light emission can be measured through optically transparent anodes.
Conclusions

The novel field of hot electron electrochemistry is presently largely unexplored. The most significant differences with respect to traditional aqueous electrochemistry at active electrodes are: (1) the stability limits of water can be exceeded, (2) one-electron reductions can be made in situations where traditional electrochemistry leads to concerted two-electron transfers, and (3) the reductions can be made at distances of at least several tens of nanometers away from the electrode, thus allowing e.g. efficient excitation of labels in immunometric immunoassays. The present composite material, withstanding a very wide pH range and many types of solvents, allows the production of robust composite electrodes for the injection of hot electrons (i) into aqueous solutions, and (ii) also into many other solvents or solvent mixtures, due to the good chemical durability of nylon 6,6. Hot and hydrated/solvated electrons can be easily obtained with these electrodes in any laboratory to induce various one-electron reductions in aqueous solutions not obtainable at active metal electrodes on the basis of traditional electrochemistry. For instance, metal ions at unusual oxidation states not obtainable at active metal electrodes can be created at the present composite electrodes, as in pulse radiolysis [24], and many types of luminophores can be excited by reaction cycles involving one-electron redox steps [1][2][3][4][5]. Different types of organic pollutants can be disintegrated by hot and hydrated electrons [5,32,[44][45][46]; also some specific organic syntheses can be carried out with hydrated electrons [47]. Moreover, bacteria can be exterminated by introducing hot and hydrated electrons into aqueous media needing disinfection [27,29,48]. All in all, several promising application fields seem to exist where the present electrodes could be utilized.

Fig. 1. (a) Fabrication of a composite electrode using nylon-carbon black composite ink. (b) Measuring cell during an experiment, containing both the working electrode (composite ink on a brass substrate) and the counter electrode (platinum wire); the presence of the luminophore in this experimental electrolyte solution causes green light emission.

Fig. 2. A schematic of two possible reaction pathways (ox-red and red-ox) during HECL measurement using an electrochemiluminescent label at composite electrodes.

Fig. 4. Sheet resistances of composite films on top of a polyester film, measured by using a 4-point probe.

Fig. 14. Electron transport through the composite film: (1) current injection from the metal to the composite layer, (2) current transport in the composite layer and (3) current injection to the electrolyte solution. Pulse voltage is typically −20 to −50 V.

Table 1. Characteristics of carbon-based conducting particles.
Cyclotron orbit knot and tunable-field quantum Hall effect From a semiclassical perspective, the Bohr-Sommerfeld quantization of the closed cyclotron orbits for charged particles such as electrons in an external magnetic field gives rise to discrete Landau levels and a series of fascinating quantum Hall phenomena. Here, we consider topologically nontrivial physics from a distinct origin, where the cyclotron orbits take nontrivial knotting structures such as a trefoil knot. We present a scenario of a Weyl semimetal with a slab geometry, where the Fermi arcs on the opposing surfaces can cross without interfering with each other and form a knot together with the bulk Weyl nodes, and in an external magnetic field, the resulting chiral Landau levels. We provide a microscopic lattice model to realize a cyclotron orbit with an unconventional geometry of a trefoil knot and study the corresponding quantum oscillations. Interestingly, unlike the conventional ring-shaped cyclotron orbit, a trefoil knot is self-threading, allowing the magnetic field line along the cyclotron orbit to contribute to the overall Berry phase and therefore altering the external magnetic field for each quantization level. The cyclotron orbit knot offers an arena of the nontrivial knot theory in three spatial dimensions and its subsequent physical consequences. Integer quantum Hall effect 1 and the subsequentlydiscovered zoo of miscellaneous topological phases 2-10 represent an active research area in condensed matter physics. The physics idea, however, may trace back further to the insightful semiclassical quantization of the cyclotron orbits 11,12 . Following the semiclassical equations of motion, charged particles such as electrons cycle in the plane normal to the magnetic field around the constant energy contours, which then become quantized according to the Bohr-Sommerfeld quantization condition. In two dimensions, all electron degrees of freedom are quenched into these cyclotron orbits with discrete and degenerate energy values -the Landau levels -in a magnetic field. The spacing and degeneracy of the Landau levels are proportional to the magnetic field strength, giving rise to the oscillations in material electronic properties as a function of the applied magnetic field such as De Haasvan Alphen effect, Shubnikovde Haas effect and other fascinating facets of the quantum Hall effect. A relatively young sibling in the topological material family is the topological Weyl and Dirac semimetals in three dimensions 9,10 . Around the Weyl nodesselected points in their Brillouin zone, the low-energy electronic excitations disperse linearly, resembling the Weyl fermions in models for high energy physics. Also, the surface electronic states consist of exotic open Fermi arcs 9,10 . In the presence of a magnetic field, the Weyl fermions become quantized as the chiral Landau levels that disperse along or against the magnetic field depending on the Weyl fermions' chirality, exhibiting the chiral anomaly phenomenon 13 . These chiral Landau levels, together with the Fermi arcs on the top and bottom surface in a slab geometry, assemble a novel type of cyclotron orbits in the Dirac and Weyl semimetals, dubbed as the Weyl orbit 14,15 , see Fig. 1(a) for illustration. The Weyl orbit extends in and promotes the Landau quantization to three spatial dimensions, and the corresponding quan-tum oscillations 16 and quantum Hall transports [17][18][19] have been established experimentally. 
The search for topological phenomena in condensed matter physics 8,20 has received much inspiration from the mathematical studies on topology such as knot theory, which investigates the nonequivalent classes of closed loops and the corresponding invariants, e.g., the Jones polynomial, in higher dimensional spaces. Indeed, a series of two-dimensional topological orders are characterized by topological quantum field theory on nontrivial quasi-particle world-line knots in 2+1-dimensional spacetime 20 . Also, the topologically distinct fermionic excitations in the form of connected links and knots in momentum space are discovered in nodal link [21][22][23][24][25] and nodal knot 24,26 metals. In this work, we present our discovery of a new topology origin of quantum materials and phenomena, where the cyclotron orbit employs a nontrivial knot topology, such as a trefoil knot ( Fig. 1(c)), in three spatial dimensions. Commonly, a cyclotron orbit is self-evading, since the couplings between nearby Fermi surfaces generally widen the gap and move them further apart, especially in a magnetic field, making crossings -a key ingredient of knots -difficult to realize. The Weyl orbits in the Dirac and Weyl semimetal slabs offer a solution to this difficulty, as the Fermi arcs on the top and bottom surfaces are spatially separated and can form crossings without interfering with each other. We provide a microscopic lattice model example where we realize a Weyl orbit with a trefoil knot geometry, see Fig. 1(b) and (d) 27 . In three spatial dimensions, such a cyclotron orbit knot is not adiabatically connected to a conventional ring-shaped cyclotron orbit including the conventional Weyl orbit in Fig. 1(a). The nontrivial topology of the cyclotron orbit knot also has profound physical consequences. For instance, unlike a ring-shaped loop, a knot is self-threading. Therefore a magnetic field line along a cyclotron orbit knot con-tributes nontrivially to the overall Berry phase around the orbit. Tuning this flux allows the contribution from other sources such as the magnetic field to differ in order to reach a specific Landau quantization. Using our microscopic lattice model, we study the behavior of the quantum oscillations associated with the cyclotron orbit knot. In particular, we introduce a perturbation that creates an effective magnetic field that aligns with the electronic velocity and study its impact on the subsequent quantum Hall effect. Without loss of generality, we consider an electronic tight-binding model on a three-dimensional hexagonal lattice for concreteness: where i, j, k are the coordinates in the xy plane and σ are the Pauli matrices defined on the two pseudo-spins s, s =↑, ↓. The first two terms are hoppings in the xy plane between the nearest neighbors and the next nearest neighbors, t ik = ±it , and the next two terms are nearneighbor hoppings between the nearest layers, t izkz = ±it z , where the ±i = exp (3iφ ik ) phases depend on the azimuthal angle φ ik from i to k 28 . The last two terms are a coupling between the two pseudo-spins s, s =↑, ↓ and a chemical potential. In the rest of the paper, we set t = −1.0, t = −0.5, t z = −1.0, t z = −0.5, ∆ = 0.4, µ ↑ = −5.1, and µ ↓ = −3.5 unless noted otherwise. All lattice constants are set to 1. First of all, this model system is a Weyl semimetal. To see this, we transform Eq. 1 into the momentum space where τ are the Pauli matrices on the even-odd layers. 
k_z ∈ [0, π) is the momentum along the ẑ direction, and the remaining coefficients are the Fourier transforms of the t, t′ and µ_s terms. There are six Weyl nodes on the k_z = π/2 plane, at (k_x, k_y) = (0, −1.96) with σ_z = ↑ and (k_x, k_y) = (0, −0.834) with σ_z = ↓, and their counterparts after the C_3 rotations 29. The band gap closes at these Weyl nodes, and the low-energy dispersion around them is linear. Interestingly, for a model system with a finite thickness L_z along the ẑ direction, the surface Fermi arcs consist of three Fermi arcs on the top surface and three on the bottom surface. The (k_x, k_y, z) locations of the Fermi arcs and of the bulk states at the Weyl nodes are shown in Fig. 1 for L_z = 29, thick enough to separate the Fermi arc states on the opposing surfaces. Therefore, the constant energy contour forms a trefoil knot [Fig. 1(c)], which engages all six Weyl nodes and six Fermi arcs at once. In the presence of an applied magnetic field along ẑ, the electrons cycle around the cyclotron orbit, whose shape in three spatial dimensions is related to this contour by a 90-degree rotation in the xy plane. Therefore, we have established a cyclotron orbit with a trefoil knot geometry that is topologically distinctive and not adiabatically connected with the conventional ring-shaped cyclotron orbits. We note the important role the Weyl orbit physics plays in attaining cyclotron orbit knots: the top and bottom Fermi arcs are spatially separated and can safely traverse each other without being gapped out, even in the presence of a variable magnetic field. Together with the bulk chiral Landau levels that descend from the Weyl nodes and weave the two surfaces back together, we can form crossings that are the cornerstones for knots and links. The construction of the trefoil-knot-shaped cyclotron orbit can be straightforwardly generalized to cyclotron orbits with more complex knots and links.

Next, we study the Landau quantization of the cyclotron orbit knot and consider the Hamiltonian in Eq. 1 with a slab thickness of L_z = 29 in the presence of a magnetic vector potential A = (−By, 0). Physical quantities such as the density of states (DOS) at the energy of the Weyl nodes can be obtained using the recursive Green's function method for sufficiently large system sizes. The results on the DOS versus the inverse magnetic field are summarized in Fig. 2.

(Fig. 2 caption fragment: ... is calculated using the recursive Green's function method. We focus on the chemical potential at the Weyl nodes and consider a slab with thickness L_z = 29. A small imaginary part δ = 0.001 is added to the energy to account for a finite level width. Inset: a linear fit to the peak positions within the 200 < 1/B < 500 range reveals a quantum oscillation period of ∆(1/B) ≈ 4.87.)

We observe a single quantum oscillation period of ∆(1/B) = 1/B_{n+1} − 1/B_n ≈ 4.87. Since the bulk chiral Landau levels are parallel to the magnetic field, the quantum oscillations of a Weyl orbit are determined by the area S_k of the combined Fermi arcs from the top and bottom surfaces 15. Interestingly, after projection and combination of the Fermi arcs [Fig. 1(d)], the area within the inner contour is S_k2 ≈ 4.78% of the surface Brillouin zone area S_BZ, and the area within the outer contour (including S_k2) is S_k1 ≈ 15.2% × S_BZ. That ∆(1/B) ≈ (S_k1 + S_k2)^{-1} = 5.0 indicates that the magnetic flux enclosed in the inner contour contributes to the overall Berry phase twice. Indeed, straightforward counting suggests that the cyclotron orbit encloses the inner region twice upon each cycle.
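As a quick numerical check of the last argument, the quoted contour areas reproduce the observed oscillation period once the inner region is counted twice. The snippet below does that arithmetic with the areas taken as fractions of the surface Brillouin zone, as in the text; using the inverse of the enclosed area directly as the period assumes the same normalized units as the paper, which is a convention assumption on my part.

```python
s_k1 = 0.152    # outer contour area (includes the inner region), fraction of S_BZ
s_k2 = 0.0478   # inner contour area, fraction of S_BZ

period_knot = 1.0 / (s_k1 + s_k2)   # inner region enclosed twice per cycle
period_single = 1.0 / s_k1          # for contrast: outer contour traversed only once
print(f"knot: Delta(1/B) ~ {period_knot:.2f}  (measured ~4.87)")
print(f"single loop would give ~ {period_single:.2f}")
```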
More rigorously, the contributions to the overall Berry phase, Φ_total = Σ_{i=1}^{4} α_i Φ_i, obey α_1 = α_2 = α_3 = α_4/2, since the adiabatic change, as illustrated in Fig. 3(c), of a magnetic field loop with flux Φ indicates the equivalence. Since a trefoil knot winds around itself, the magnetic field line along the cyclotron orbit knot now contributes a non-zero Berry phase. On that account, our numerical results on quantum oscillations are fully consistent with the trefoil knot geometry of the cyclotron orbit.

Then, we discuss an interesting physical consequence of the nontrivial knot geometry of a cyclotron orbit - the tunable-field quantum Hall effect. Conventionally, the magnetic field line along a ring-shaped cyclotron orbit does not contribute to the overall Berry phase [Fig. 3(b)]. In contrast, a trefoil knot is self-threading, and thus the magnetic field line along the cyclotron orbit knot contributes nontrivially to the overall Berry phase, see Fig. 3(d). By controlling the contribution from such flux, we can modify the Landau quantization condition for the other sources of the Berry phase, such as the applied magnetic field. The conventional spin-orbit interaction introduces a v-dependent effective magnetic field that couples to the electron spin; here, we need to introduce a v-dependent effective magnetic field B_eff that couples to the electron orbital angular momentum. One of the electron semiclassical equations of motion, dr/dt = v_k, indicates that the cyclotron orbit is parallel to the electron velocity at every instance and guarantees that an effective magnetic field B_eff ∥ v, long overlooked because only the B × v term appears in the equations of motion 11,12,30, is along the cyclotron orbit irrespective of the shape details and contributes to the tunable-field quantum Hall effect.

In the case of a knotting Weyl orbit, since the Fermi arcs locally responsible for the crossings are on the surfaces, we limit our attention to effective magnetic fields in the xy plane. For instance, along the ŷ direction, we may introduce a v-dependent effective magnetic field with a perturbation H′ ∝ v_y(k + ηz x̂) − v_y(k − ηz x̂) ≈ 2iηz [x, v_y] for small η, where v_y = i[y, H] is the electron velocity in the ŷ direction. However, it leads to a sin(k_3) − sin(k_2) azimuthal angle dependence and vanishes after taking into account the −(1/2)ŷ ± (√3/2)x̂ directions. Therefore, we introduce an additional factor of cos(k_1), which is ∼1 and a good approximation near the Weyl nodes and the crossing point, where v_y is the most important. After similar treatment of the −(1/2)ŷ ± (√3/2)x̂ directions, we obtain in total: For slowly varying ηz, H′ = Σ_k H′_k takes a momentum-space form that only has a τ_y component and thus does not interfere with the quantum oscillation period, which is determined by the combined surface Fermi arcs 15 and the τ_z terms in the original H_k in Eq. 3. However, H′ does have an interesting impact on the model physics. As we include the perturbation H′ into the original Hamiltonian H in Eq. 1, the magnetic field B corresponding to each Landau level changes.
We track the shift of the DOS quantum oscillation peak ∆ 1/B peak = 1/B peak (η) − 1/B peak (η = 0) as a function of η, and the results are summarized in In fact, the tunable-field quantum Hall effect is not fully unheard of -the overall Berry phase of the Weyl orbit receives a contribution from the bulk chiral Landau levels that depends on the Fermi energy, the tilting direction of the applied magnetic field, as well as the thickness of the system 15,19 . The extra Berry phase in this work, however, is consequential to the nontrivial knotting topology of the cyclotron orbit and comes from a completely different origin: with or without the H perturbations, our model systems are still C 3 rotation symmetric, the Fermi energy is at the Weyl nodes, and the magnetic field is alongẑ without tilting. Indeed, this contribution is unique to the cyclotron orbit knot model. In contrast, such an in-plane effective magnetic field incurs no orbital effect for conventional cyclotron orbits in two-dimensional electron systems. Further, we repeat the calculations for the σ z =↑ sector of the original Hamiltonian H in Eq. 1, which describes a conventional Weyl semimetal with a topologically-ring-shaped Weyl orbit. Its Weyl orbit also consists of three pairs of chiral Landau levels and three pieces of Fermi arcs on each of the top and bottom surfaces. As we change the amplitude η of the perturbation, given by the σ z =↑ sector of H in Eq. 3, the DOS peaks in the quantum oscillations hardly shift, see the black diamond symbols in Fig. 4. The small deviations from ideal theory expectation may be due to the approximation in H . In conclusion, we have shown a new topological prospect where the cyclotron orbit takes a nontrivial knot geometry in three spatial dimensions and proposed the Weyl semimetal slab as an example of realization. We have also provided a microscopic lattice model, investigated its Landau quantization, and demonstrated that the unusual knotting topology of the cyclotron orbit allows the continuous tuning of the magnetic field condition for quantum Hall effect. We note that the Weyl orbits depend on the surface states, thus the cyclotron orbit knot can be realized on existing Weyl semimetals, such as the RhSi with large Fermi arcs 31 , via proper surface design 32 . Since the knots are topologically protected and invariant in three spatial dimensions, a cyclotron orbit knot is not adiabatically connected with the conventional ring-shaped cyclotron orbits 33 and allowed to have characteristic topological properties. In topological quantum field theory, the nontrivial anyonic phases of quasiparticle braiding and statistics in two spatial dimensions are consequential to the nontrivial Jones polynomial of the knots between the quasi-particle world-lines in 2+1dimensional space time 20 . The consequences of cyclotron orbit knots in three spatial dimensions on characteristic properties, such as intrinsic Berry phase contributions 30 , the scenarios beyond U (1) gauge field, etc. and their connections to the knot invariants, are interesting further directions for future exploration. 27 In comparison, the nodal link and nodal knot metals rest their topological distinctions on the nodal line topology in three-dimensional momentum space. They remain gapless in the presence of a magnetic field and disperse in the momentum space along the magnetic field. Therefore, their nodal links and knots do not translate to their cyclotron orbits in three spatial dimensions. 
28 See the illustrative plot and model construction guidelines in Supplemental Materials. 29 The C3 rotation symmetry is for simplicity and not a necessary ingredient of the cyclotron orbit knotting geometry. I. CYCLOTRON ORBIT KNOT MODEL CONSTRUCTION In the main text, we present a tight-binding model of a Weyl semimetal that hosts trefoil-knot-shaped Weyl orbit, see Fig. 5 for an illustration of the model hoppings. In this section, we discuss the general guidelines for this and other cyclotron orbit models alike, via a layered construction as illustrated in Fig. 6. We assume the same slab geometry as in the main text and define theẑ direc-tion to be the surface normal. We also use the (k x , k y , z) coordinates, so that the knotting structure of the constant energy contour translates into that of the cyclotron orbit in three spatial dimensions in the presence of an external magnetic field in theẑ direction. First, we project the target knot or link structure in its simplest form -a one-dimensional loop embedded in three dimensions -onto a two-dimensional plane. The projected diagram of a nontrivial loop necessarily involves a number of crossings, see Fig. 6(a). These crossings are the obstacles towards the conventional cyclotron orbit realization. Next, we switch the partners at each crossing as in Fig. 6(b), so that the projected diagram becomes separate contours (Fig. 6(c)). These contours are realizable as the constant energy contours of 2d systems. Then, we resolve the crossing issues via the Fermi arcs on the opposing surfaces of a Weyl semimetal slab. Fig. 6(d) is the projection of the Fermi arcs onto the (k x , k y ) plane, and the red and blue portions of the Fermi surfaces are localized on the top and the bottom surfaces, respectively. This top and bottom assignments should be consistent with that of the original crossings ( Fig. 6(a)). Such a Weyl semimetal can be realized with a slab with L z = 2L + 1 layers of the 2d system from the previous step and an inter-layer coupling that is stronger between the 2n layer and 2n + 1 layer in the regions containing the blue portions, and stronger between the 2n − 1 layer and 2n layer in the regions containing the red portions, respectively, see Fig. 6(e) and (f) for illustrations, n ∈ Z. Weyl nodes emerge at the junctions between the red and blue portions. Finally, we introduce coupling between the different contours, which gaps out the nearby Weyl nodes with opposite chirality, whose projected locations are denoted in Fig. 6(d) as blue and red dots, respectively. In particular, the pair of Weyl nodes circled out annihilates, and the connections at the crossings re-establish as the original contour of the targeted knot geometry. It is also straightforward to see that for a slab with L z = 2L layers, the Fermi surfaces on the top and bottom surfaces coincide, and the corresponding Weyl orbits are topologically equivalent to the conventional ring shape. Therefore, like the conventional Weyl orbits, the cyclotron orbit knot is sensitive to surface details and can be induced from the existing Weyl semimetal materials and models with proper surface design. On the other hand, the cyclotron orbit knot and the conventional cyclotron orbits are separated by Lifshitz transitions and their distinctions should remain relatively stable against small perturbations. We will further discuss these aspects in the next section. II. 
CYCLOTRON ORBIT KNOT TRANSITIONS In this section, we study the stability of and transition between the cyclotron orbit knot and the conven- topmost layer is projected out leaving an odd number of layers, and the Fermi arcs on the top surface and the corresponding Weyl orbit resume the form of cyclotron orbit knots as displayed in the main text. The results for various values of are summarized in Fig. 7. This is an example that by proper alteration of the surface property, we successfully change the top surface Fermi arcs connectivity and transform the conventional cyclotron orbits ( = 0) into nontrivial cyclotron orbit knots ( = 10). In addition, we see that both types of cyclotron orbits are stable within a finite range of parameters and separated by Lifshitz transitions at intermediate values of . As another example, we start with the model system in Eq. (1) in the main text on another slab of L z = 2L + 1 layers and impose an additional potential H top = i,z=Lz,s c † izs c izs on the topmost layer. The resulting cyclotron orbit knot is clearly stable against finite perturbations, see Fig. 8. At the same time, the bulk is a topological Weyl semimetal whose chiral Landau levels are also protected against perturbations. Finally, we discuss the realization of cyclotron orbit knot. One potential scheme is to change the top surface Fermi arc connectivity of an existing Weyl semimetal via proper surface design, such as depositing an adlayer with designated 2D dispersion that couples to the original Fermi arcs, see Fig. 9. We note that the reasoning can be generalized to materials with more than three pair of Weyl nodes and surface Fermi arcs, where similar transformation can be adopted for any three chosen Fermi arcs.
Measuring the Information Content of Financial News Measuring the information content of news text is useful for decision makers in their investments since news information can influence the intrinsic values of companies. We propose a model to automatically measure the information content given news text, trained using news and corresponding cumulative abnormal returns of listed companies. Existing methods in finance literature exploit sentiment signal features, which are limited by not considering factors such as events. We address this issue by leveraging deep neural models to extract rich semantic features from news text. In particular, a novel tree-structured LSTM is used to find target-specific representations of news text given syntax structures. Empirical results show that the neural models can outperform sentiment-based models, demonstrating the effectiveness of recent NLP technology advances for computational finance. Introduction Information has economic value because it allows individuals to make choices that yield higher expected payoffs than they would obtain from choices made in the absence of information. A major source of information is text from the Internet, which embodies news events, analyst reports and public sentiments, and can serve as a basis for investment decisions. Measuring the information content of text is hence a highly important task in computational finance. For investors such as venture capitals, information content should reflect a firm's intrinsic value, or potential of future growth. However, measuring the information content of text can be challenging due to uncertainties and subjectivity. Fortunately, for public companies, the stock price can be used as a metric. Our goal is thus to leverage such data on public companies to train a model for measuring the information content of news on arbitrary companies. Finance theory suggests that stock prices reflect all available information and expectations about the future prospects of firms (Fama, 1970). Based on this, empirical studies in economics and finance literature have exploited statistical methods to investigate how stock returns react to a particular news or event, which is called event studies or information content effect (Ball and Brown, 1968). A standard analysis is to measure the cumulative abnormal return (CAR) of a firm's price over a period of time centered around the event date (termed the event window) (MacKinlay, 1997). Conceptually, a daily abnormal return represents the performance of a stock that varies from the expectation, normally triggered by an event, and can be positive or negative depending on whether the stock outperforms or underperforms the expected return. A CAR is the sum of all abnormal returns in an event window, formally described in Section 2. We study how news affects a public firm's CAR for training a model to measure the information content of arbitrary financial news. It is worth noting that this is different from predicting a firm's stock price movements, which aims at maximizing trading profits. Rather, we investigate whether NLP techniques can assist the understanding of an event's economic value. In finance literature sentiment signals have been used as a standard linguistic feature for representing information content (Tetlock, 2007). We build a baseline using frequency-based features derived from sentiment words to represent news text. However, sentiment polarities are subjective and may not fully represent the message conveyed in news text. 
For example, event information is also influential in determining a firm's stock price. To address this, we propose a deep neural model to better represent news. A typical way of modeling a sentence is to treat it as a sequence and input the sequence to a long short-term memory (LSTM; Hochreiter and Schmidhuber, 1997) model, which is capable of learning semantic features automatically. However, information may have a different impact on individual firms, and therefore we need a way to represent the information conveyed in a news sentence depending on a specific firm. We propose a novel Tree-LSTM which incorporates contextual information with target-dependent grammatical relations to embed a sentence. Because of the target-dependent nature of the proposed embedding model, the representation of a sentence varies from firm to firm. We train our information content measurement model with news text collected from Reuters Business & Financial News. Results show that the proposed model yields significant improvements over a baseline sentiment-based model. Different from existing event studies, which focus on predefined events or firms (Davis et al., 2012), our model is general to various news and firms; one can measure the effect of the information content of news on any company, including private companies. The contribution of this paper is two-fold:

• First, we show that the information content of news text can be measured automatically. Our results demonstrate the effectiveness of state-of-the-art NLP techniques in computational finance, and introduce the information content effect in finance to the ACL community. Given the ubiquitous and instantaneous nature of electronic text, text analytics is an obvious approach for information content analysis.

• Second, we design a novel target-dependent Tree-LSTM-based model for representing news sentences. To our knowledge, this is the first open-domain information content effect prediction system using machine learning and NLP technologies.

Cumulative Abnormal Return

Formally, the abnormal return of a firm j on date t is the difference between the actual return R_jt and the expected return R̂_jt. R̂_jt can be an estimated return based on an asset pricing model, using a long-run historical average, or it can be the return on an index, such as the Dow Jones or the S&P 500, during the same period. For example, suppose that a firm's stock price rose by 3%, and the market index increased by 5% over the same period. If the stock is expected to perform equally to what the market does, namely 5%, the stock yielded an abnormal return of −2% even though the firm's actual return is positive. The cumulative abnormal return (CAR) of a firm j in an n-day event window is defined as the sum of the abnormal returns of each day:

CAR_j = Σ_{t=1}^{n} (R_jt − R̂_jt)    (1)

The event window is normally centered at the event date, and only trading days are considered in a window. The CARs before and after the event date mimic possible information leaks and delayed response to the information, respectively. Depending on the span of an event window, CARs provide analysts with short- and long-term information about the impact of an event on a stock's price. The most common event window found in research is a three-day window (−1, 1) where the event is centered at day 0 (Tetlock, 2007; Davis et al., 2012), and the corresponding CAR is denoted as CAR_3. In this paper we model the effect of a news release on a firm's CAR_3.
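To make the definition concrete, the short sketch below computes a three-day CAR exactly as defined above, using the market index return as the expected return (the proxy adopted later in the text). The daily return numbers are invented purely for illustration.

```python
def car(stock_returns, market_returns):
    """Cumulative abnormal return: sum of (actual - expected) daily returns."""
    return sum(r - m for r, m in zip(stock_returns, market_returns))

# event window (-1, 0, +1), day 0 = news release date (or the next trading period)
stock  = [0.010, 0.034, -0.002]   # firm's actual daily returns
market = [0.004, 0.006,  0.001]   # equally-weighted market index returns
print(f"CAR_3 = {car(stock, market):.3%}")   # 3.100% for these made-up numbers
```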
If a news release occurs during trading hours, day 0 is the news release date; otherwise, day 0 is the next trading period. As prior research demonstrates that there is no significant difference between a modeled expected return and the market return for a short-term event window (Kothari and Warner, 2004), we compute the expected return R̂_jt as the return of the equally-weighted market index including all the stocks on NYSE, Amex and NASDAQ.

Information Content Prediction

Our goal is to estimate the polarity of the information content y ∈ {0: negative effect, 1: positive effect} given a sentence s in which a target firm is mentioned. Assume that there is an embedding model that maps a sentence to a feature vector g ∈ R^{d_g}. The probability of a positive information content effect is defined as:

p(y = 1|s) = softmax(W_p g + b_p)    (2)

W_p and b_p are parameters. We build two methods to obtain g from text, one being a traditional method in the finance literature (Section 4) and the other being a novel neural network (Section 5). It is possible that there is more than one sentence, s_1, s_2, ..., s_m, mentioning a firm of interest in the same event window. In this case, a neural attention mechanism adapted from prior work is utilized to synthesize the corresponding information embeddings g_1, g_2, ..., g_m. To compute the attention vector, we define: where v and W_u are learnable parameters. u^(i) is the score of how much attention should be put on g_i, and a^(i) is the normalized score. The final synthetic feature vector ḡ substitutes for g in Equation 2 to predict the effect.

Sentiment-based Representation

A growing body of finance research literature examines the correlation between financial variables, such as stock returns and earnings surprises, and the sentiment of corporate reports, press releases, and investor message boards (Li, 2010; Davis et al., 2012), most of which are based on purpose-built sentiment lexicons. A commonly used source is the list compiled by Loughran and McDonald (2011), which consists of 353 positive words and 2,337 negative words. As a baseline we follow prior literature (Mayew and Venkatachalam, 2012) and represent the information content of a sentence based on counts derived from the lexicon of Loughran and McDonald. The feature vector consists of raw frequency counts and sentence-length-normalized values of positive words, negative words, and the difference of positives and negatives. In addition, the sentence length is also considered.

Deep Neural Representation

As mentioned in the introduction, it is useful to model news information content beyond sentiment signals. Deep learning has been shown to be effective in automatically inducing features that capture semantic information over natural language sentences. To verify the relative effectiveness of deep neural models, we first build a baseline neural network model using a popular LSTM structure (Section 5.1) and then develop a novel syntactic tree-structured LSTM that is sensitive to specific target entities (Section 5.2).
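The attention equations themselves did not survive extraction, so the sketch below should be read as an assumption: it implements one common scoring form (a tanh projection followed by a dot product with v and a softmax) that matches the description of u^(i), a^(i) and the weighted synthesis of ḡ, but the paper's exact parameterization may differ.

```python
import numpy as np

def attention_pool(G, W_u, v):
    """G: (m, d) sentence embeddings; returns the synthetic vector g_bar of shape (d,)."""
    u = np.tanh(G @ W_u) @ v        # u_i: unnormalized attention score per sentence
    a = np.exp(u - u.max())
    a /= a.sum()                    # a_i: normalized attention weights (softmax)
    return a @ G                    # weighted sum of the sentence embeddings

rng = np.random.default_rng(0)
G = rng.normal(size=(3, 8))         # three sentences mentioning the target firm
g_bar = attention_pool(G, rng.normal(size=(8, 8)), rng.normal(size=8))
print(g_bar.shape)                  # (8,) -- fed into Equation 2 in place of g
```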
We take a variation of the LSTM with peephole connections (Gers and Schmidhuber, 2000), which uses an input gate i_t, a forget gate f_t and an output gate o_t in the same memory block to learn from the current cell state. In addition, to simplify model complexity, we adopt coupled i_t and f_t. Given input x_t at time step t, the LSTM cell state c_t and the output of the memory block h_t are updated by the corresponding peephole LSTM equations with the two gates coupled. The W terms are the weight matrices (W_3 and W_6 are diagonal weight matrices for peephole connections); the b terms denote bias vectors; σ is the logistic sigmoid function; and ⊗ computes element-wise multiplication of two vectors. A deep LSTM is built by stacking multiple LSTM layers, with the output memory block sequence of one layer forming the input sequence for the next. At each time step the input goes through multiple non-linear layers, which progressively build up higher-level representations from the current level. In our information embedding models, we use a deep LSTM architecture with 2 layers. One of our baseline information embedding models is a bidirectional LSTM model (Graves et al., 2013), called BI-LSTM, consisting of two 2-layer LSTMs running on the input sequence in the forward and backward directions, yielding forward and backward output vectors, respectively. We exclude stopwords and punctuation from each sentence. The final model outputs the information embedding g by concatenating the final outputs of the two LSTMs, namely →h_{|s|} and ←h_{|s|}.

Dependency Tree-LSTM
A syntactic approach for modeling a sentence s is to use a tree-structured LSTM (Tree-LSTM), embedding the parse tree of the sentence (Le and Zuidema, 2015; Tai et al., 2015; Zhu et al., 2015; Miwa and Bansal, 2016). Our hypothesis is that dependency relations between words convey a certain level of information content. For example, a dependency parser can tell that Facebook is the subject which performed the action acquired on the object Whatsapp in the sentence Facebook acquired Whatsapp. We parse sentences with ZPar (Zhang and Clark, 2011) and adopt the N-ary Tree-LSTM of Tai et al. (2015) with peephole connections to run on a binarized dependency-based parse tree. For the specific task, we leverage the structure of a binary Tree-LSTM, and develop a novel way to represent a dependency relation between two words using this structure (Section 5.2.1). We propose an algorithm to transform a dependency parse tree into a binary tree, where leaf nodes are words and internal nodes are dependency relations, so that the transformed tree can be embedded using a binary Tree-LSTM (Section 5.2.2). Finally we explain how the task of information content effect measurement can benefit from the target-dependent feature of the proposed binarization algorithm (Section 5.2.3).

Binary Tree-LSTM for Dependency Arcs
Similar to the LSTM memory block described in Section 5.1, a binary Tree-LSTM unit (Tai et al., 2015) takes input x_t at time step t and updates its cell state c_t and the output of the memory block h_t, controlled by an input gate i_t, a forget gate f_t and an output gate o_t. However, instead of depending on only one previous memory block as in a sequential LSTM model, a binary Tree-LSTM unit takes two child units, namely left (l) and right (r), into consideration. In this case, there are two forget gates f^l_t and f^r_t for the left and right children, respectively, so that information from each child can be selectively incorporated.
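The binary Tree-LSTM unit just described, with one input gate, one output gate and separate forget gates for the left and right children, can be sketched as follows. The exact equations of the paper, including where the peephole terms enter and how the gates are coupled, are not reproduced; the weight layout below follows the standard N-ary Tree-LSTM of Tai et al. (2015) and should be read as an assumption.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class BinaryTreeLSTMCell:
    """Sketch of a binary Tree-LSTM unit with two child-specific forget gates.

    Follows the N-ary Tree-LSTM layout of Tai et al. (2015); the peephole
    terms and the coupled-gate simplification used in the paper are omitted.
    """

    def __init__(self, d_in, d_hid, seed=0):
        rng = np.random.default_rng(seed)
        def mat(r, c):
            return rng.normal(scale=0.01, size=(r, c))
        gates = "i fl fr o u".split()  # input, left/right forget, output, candidate
        self.W = {k: mat(d_hid, d_in) for k in gates}      # input weights
        self.U_l = {k: mat(d_hid, d_hid) for k in gates}   # left-child weights
        self.U_r = {k: mat(d_hid, d_hid) for k in gates}   # right-child weights
        self.b = {k: np.zeros(d_hid) for k in gates}

    def gate(self, k, x, h_l, h_r, act):
        return act(self.W[k] @ x + self.U_l[k] @ h_l + self.U_r[k] @ h_r + self.b[k])

    def __call__(self, x, left, right):
        """x: input vector; left/right: (h, c) pairs of the two child units."""
        (h_l, c_l), (h_r, c_r) = left, right
        i = self.gate("i", x, h_l, h_r, sigmoid)
        f_l = self.gate("fl", x, h_l, h_r, sigmoid)
        f_r = self.gate("fr", x, h_l, h_r, sigmoid)
        o = self.gate("o", x, h_l, h_r, sigmoid)
        u = self.gate("u", x, h_l, h_r, np.tanh)
        c = i * u + f_l * c_l + f_r * c_r   # each child's memory gated separately
        h = o * np.tanh(c)
        return h, c

# Usage: for a dependency arc such as sub(acquired, Facebook), the two children
# would be the units for the dependent and the head, and x the relation embedding.
cell = BinaryTreeLSTMCell(d_in=100, d_hid=200)
zero = (np.zeros(200), np.zeros(200))
h, c = cell(np.random.default_rng(1).normal(size=100), zero, zero)
print(h.shape)  # (200,)
```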
The unit activations follow the N-ary Tree-LSTM equations of Tai et al. (2015), extended with peephole connections and with a separate forget gate for each child. This binary Tree-LSTM model was originally proposed to represent binary-branching constituents (Tai et al., 2015). In this paper, however, we show that it can be used to represent dependency arcs. For example, given the subject dependency arc sub between acquired (head) and Facebook (dependent), Figure 1a illustrates the bottom-up information propagation in a binary Tree-LSTM model, where x_sub, x_acquired and x_Facebook are vector representations of sub, acquired and Facebook, respectively. The output of the last unit, h_sub(Facebook, acquired), is the dependency embedding of sub(acquired, Facebook). We call this model LEX-TLSTM. Miwa and Bansal (2016) incorporate bidirectional LSTMs into the input of a Tree-LSTM unit by concatenating x_i, →h_i and ←h_i so that contextual information of individual words can be considered. Instead of inputting the three vectors as a whole into a Tree-LSTM unit, we treat forward and backward 2-layer LSTM units as the left and right children, respectively, while inputting a word vector, as shown in Figure 1b; we refer to this model as CTX-TLSTM.

Binarized Dependency Tree
Figure 2a shows the dependency parse tree of the tokenized sentence Facebook acquired Whatsapp for $ 19 billion, where the root word is acquired and head words are at the upper ends of dependency branches. To adapt a dependency parse tree to a binary Tree-LSTM model, Algorithm 1 presents a recursive algorithm to binarize a dependency parse tree given a target word (Algorithm 1. Input: target node n, dependency parse tree T_d. Output: binarized dependency tree.). The algorithm starts from a given target word and creates a binary tree node to represent a dependency relation between the target word and another word, with the dependent and head placed at the left and right children, respectively. The rule for selecting the dependency relation is that the dependency with the target's head is considered first, followed by those of the target's dependents. In addition, we sort the dependents of a target word in such a way that left context words are always in front of right context words, and both the left and right context words are ordered by their distance to the target word in descending order. For instance, the sorted dependent list of acquired in the example is ⟨Facebook, for, Whatsapp⟩. After deciding a dependency relation to binarize, the head and dependent words become targets of the algorithm to expand the binary tree recursively until all the dependency relations are transformed. After the transformation, a binarized tree has words at its leaf nodes, and each internal node represents a dependency relation, as shown in Figure 2b.

Target-Dependent Tree-LSTM
Recall that our objective is to model the information content effect on a target firm mentioned in a sentence. In the previous example, a bidirectional model would output only one representation for the sentence no matter what the target firm is. In contrast, the proposed tree transformation algorithm outputs different binarized trees when different targets are given. Figure 2b and Figure 2c show the binarized trees using Facebook and Whatsapp as targets, respectively. As in BI-LSTM, we remove stopwords and punctuation from a tree. Figure 2d demonstrates the result after removing for and $ from Figure 2c. When one child is ignored, the current node is replaced by the other child.
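A possible rendering of the binarization procedure of Algorithm 1 is sketched below. The data structures, helper names and tie-breaking details are assumptions; only the selection rule stated above (the arc to the target's head first, then the arcs to its dependents, with left-context words first and both sides ordered by descending distance) follows the text.

```python
# Sketch of the target-dependent binarization of a dependency parse tree.
# Assumed input format: head[i] = index of the head of token i (-1 for the
# root) and label[i] = the dependency relation of that arc. Leaf nodes of
# the output are token indices; internal nodes are tuples
# (relation, left_subtree, right_subtree) with the dependent on the left.

def sorted_dependents(target, head):
    deps = [i for i, h in enumerate(head) if h == target]
    left = sorted((i for i in deps if i < target), key=lambda i: target - i, reverse=True)
    right = sorted((i for i in deps if i > target), key=lambda i: i - target, reverse=True)
    return left + right          # left-context words first, farthest first

def binarize(target, head, label, done=None):
    """Recursively expand the binary tree starting from the target word."""
    done = set() if done is None else done
    node = target
    # 1) the arc to the target's own head is considered first
    h = head[target]
    if h >= 0 and (target, h) not in done:
        done.add((target, h))
        node = (label[target], node, binarize(h, head, label, done))
    # 2) then the arcs to the target's dependents, in the sorted order
    for d in sorted_dependents(target, head):
        if (d, target) not in done:
            done.add((d, target))
            node = (label[d], binarize(d, head, label, done), node)
    return node

# "Facebook acquired Whatsapp": tokens 0, 1, 2
head = [1, -1, 1]
label = ["sub", "root", "obj"]
print(binarize(0, head, label))   # with target Facebook, the sub arc ends up on top
```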
When applying a binary Tree-LSTM model to a binarized dependency tree, information is propagated from the bottommost leaf nodes to the topmost dependency node (e.g. sub in Figure 2b), and the final output h is treated as the information embedding g. As the proposed binarization algorithm tends to leave the target at the top of the binary tree, information effectively flows from the context to the target firm. For a binary Tree-LSTM model running on a target-dependent tree, we add the prefix TGT- to the model identification; otherwise the target is the root word defined by the parser, and RT- is prefixed to the model identification.

Training
We pre-train skip-gram embeddings (Mikolov et al., 2013) of size 100 on a collection of Bloomberg financial news from October 2006 to November 2013; the size of the trained vocabulary is 320,618. In addition, firm names and an UNK token representing any words out of the vocabulary are added to the vocabulary, with an initial embedding set to the average of the pre-trained word vectors. The word embeddings are fine-tuned during model training, with dropout (Srivastava et al., 2014) using a probability of 0.5 to avoid overfitting. The other hyperparameters of our models, along with the dependency type representations, are initialized according to the method of Glorot et al. (2010). For sequential LSTMs we use a 2-layer structure with inputs of size 100 and outputs of size 200. For Tree-LSTM models, only one layer is used, and the input and output dimensions are the same as those of the sequential LSTMs. Training is done by maximizing the conditional log-likelihood of the target effect category for 15 iterations. The parameters are optimized by stochastic gradient descent with momentum (Rumelhart et al., 1988) using an initial learning rate of 0.005, with L2 regularization at strength 10^-6. Every 1,000 training examples, the parameters are evaluated on the development set by the macro-averaged F-score, and those achieving the highest value are kept.

Experiments
Settings
Data: We collect publicly available financial news from Reuters from October 2006 to December 2015. Instead of taking a whole news article or simply a news title into consideration, we target the section of text marked with the HTML class attribute 'focus paragraph' in Reuters news articles. This is invariably the first paragraph of such articles, which provides additional detail to the information contained in the article's title. For example, the focus paragraph of the news titled Exclusive: Target gets tough with vendors to speed up supply chain (4 May 2016, 12:22pm EDT) is: Discount retailer Target Corp (TGT.N) is cracking down on suppliers as part of a multi-billion dollar overhaul to speed up its supply chain and better compete with rivals including Wal-Mart Stores Inc (WMT.N) and Amazon.com Inc (AMZN.O). As Target Corp, Wal-Mart Stores Inc and Amazon.com Inc are mentioned in the paragraph, we assume that the information content of the paragraph affects the CAR_3 of these three firms. We ignore focus paragraphs that do not contain any names of U.S.-based, publicly listed firms. Finally, texts are grouped per firm per event date, and for each group a CAR_3 is computed accordingly. This yields 22,317 instances, 5,848 of which have information gathered from more than one news article. A total of 1,330 firms are covered in our data.

Table 1: Numbers of positive and negative CAR_3 instances in the training, development and test sets.
        +CAR_3   -CAR_3
Train    9,674    9,643
Dev        493      507
Test       995    1,005
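The grouping step described above can be sketched as follows; the record layout, the field names and the lookup function are assumptions made for illustration, and the alignment of a release to day 0 is simplified.

```python
from collections import defaultdict

# Sketch of building (firm, event date) instances from focus paragraphs.
# Each raw item is assumed to be (firm, event_date, sentence); car3_lookup
# is assumed to return the firm's three-day cumulative abnormal return.

def build_instances(raw_items, car3_lookup):
    groups = defaultdict(list)
    for firm, event_date, sentence in raw_items:
        groups[(firm, event_date)].append(sentence)
    instances = []
    for (firm, event_date), sentences in groups.items():
        car3 = car3_lookup(firm, event_date)
        label = 1 if car3 > 0 else 0          # positive vs. negative effect
        instances.append({"firm": firm, "date": event_date,
                          "sentences": sentences, "label": label})
    return instances
```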
We randomly select 1,000 and 2,000 instances as development and test sets, respectively, and the rest are used for training. The numbers of positive and negative CAR_3 examples in the training, development and test sets are fairly balanced, as shown in Table 1.
Evaluation Metric: Although the task of information content effect prediction is a binary classification problem, we do not evaluate our models using the accuracy metric, because the data are automatically aligned and some CAR_3 values may not reflect the information correctly. Instead, we evaluate the performance of the models by the area under the precision-recall curve (AUC), where precision is the fraction of retrieved positive/negative effect instances that really have positive/negative impact, and recall is the fraction of positive/negative effect instances that are retrieved.
Table 2 summarizes the AUCs of both positive and negative effect predictions on the development set for models using different embedding methods. First, the deep neural embedding approaches outperform the conventional sentiment-based representation widely exploited in finance research. This shows that deep neural models are stronger in capturing news information on and beyond sentiment signals. Second, compared with the sequential embedding strategy, namely BI-LSTM, the Tree-LSTM-based (TLSTM) dependency embeddings perform better, demonstrating the benefit of syntactic information. Finally, the information content prediction benefits from the target-dependent tree transformation (TGT) compared with using the root word (RT). In addition, the performance of target-dependent models can be improved by incorporating an input word embedding (LEX) with its contextual information (CTX). It is worth noting that the main improvement comes from using the neural feature representation instead of the sentiment word statistics. Table 3 gives the AUCs for the baseline sentiment-based representation, the sequential embedding BI-LSTM, and the targeted dependency tree method TGT-CTX-TLSTM evaluated on the test set. TGT-CTX-TLSTM outperforms the other baselines, and the improvements between models are statistically significant (p ≤ 0.05).

Final Results
One possible application of the proposed model is use as a security recommender in the financial domain. Thus we apply the model to instances with |CAR_3| > 2%, namely information with high impact. A total of 1,021 instances in the test set pass this threshold. As shown in Table 3, the proposed model achieves AUCs of 0.7 and 0.68 on +CAR_3 and -CAR_3, respectively. The results not only show the robustness of our model compared to the baselines but also demonstrate its applicability. To illustrate the attention mechanism for weighing individual news items, Table 4 shows three sets of news, each of which consists of news from the same event window mentioning a specific firm. Both the CAR_3 and the predicted effect probability for each event window are given, and the modeled weight is shown in front of each news item; the weights meet human expectations. For example, one would expect the news that Wal-Mart Stores Inc may miss its profit expectations to be more influential than the news that shoppers will be able to pay with smartphones at Wal-Mart, as shown in the first news group.

• Target: Wal-Mart Stores Inc; CAR_3: -4.7%; TGT-CTX-TLSTM: 0.21
  0.95: Wal-Mart Stores Inc's (WMT.N) full-year profit may miss analysts' expectations as growth slows in its international markets, pressuring the company even as its U.S. discount stores continue to prosper.
  0.05: A group of big retailers that includes Wal-Mart Stores Inc, Target Corp and Japan's 7-Eleven is developing a mobile payment network, adding to the proliferation of options that let consumers pay with smartphones.
• Target: Oshkosh Corp; CAR_3: 9.8%; TGT-CTX-TLSTM: 0.81
  0.66: Activist investor Carl Icahn offered to buy all the outstanding shares of Oshkosh Corp (OSK.N) Thursday for a 21 percent premium to the U.S. truckmaker's closing price on Wednesday, sending its shares to their highest in more than a year.
  0.34: Truck maker Oshkosh Corp (OSK.N) advised its shareholders on Thursday to take no action related to activist investor Carl Icahn's offer to buy all outstanding shares in the company for $32.50 each.
• Target: Sony Corp; CAR_3: -2.3%; TGT-CTX-TLSTM: 0.27
  0.86: Sony Corp stuck with its full-year profit forecast after slashing its outlook for TV sales, confident that other units will perform better than earlier anticipated to offset additional losses in the unit.
  0.14: Japan's biggest technology conglomerates reported quarterly results, with weak TV demand a common theme at both Sony Corp and Sharp Corp.

Related Work
Our work is related to research that applies NLP techniques to financial text to predict stock prices and market activities. In terms of corpora, financial news (Leinweber and Sisk, 2011; Xie et al., 2013; Luss and d'Aspremont, 2015; Ding et al., 2015), firm reports (Kogan et al., 2009; Li, 2010; Lee et al., 2014; Qiu et al., 2014) and web content, such as tweets (Bollen et al., 2011; Vu et al., 2012) and forum posts (Das and Chen, 2007; Gilbert and Karahalios, 2010), have been studied. In terms of linguistic features, existing work can be classified into three major categories: bag-of-words (Kogan et al., 2009; Lee et al., 2014; Qiu et al., 2014), sentiment-based (Das and Chen, 2007; Li, 2010; Bollen et al., 2011; Vu et al., 2012; Luss and d'Aspremont, 2015), and information-retrieval-based (Schumaker and Chen, 2009; Xie et al., 2013; Ding et al., 2015) methods. Our work falls into the category of information-retrieval-based features, by exploiting syntactic information derived from a dependency parser. However, different from the aforementioned work, our goal is not to predict stock prices but to measure the economic value of news information content. The proposed Tree-LSTM-based model for automatically representing syntactic dependencies is in line with recent research that extends the standard sequential LSTM in order to support more complex structures, such as Grid LSTM (Kalchbrenner et al., 2015), Spatial LSTM (Dyer et al., 2015), and Tree-LSTM (Le and Zuidema, 2015; Tai et al., 2015; Zhu et al., 2015; Miwa and Bansal, 2016). We consider information content prediction a semantics-heavy task and demonstrate that it can benefit significantly from a novel target-specific dependency Tree-LSTM model.

Conclusion
We showed that the impact of information in a news release can be predicted using a firm's CAR, and that the proposed target-dependent Tree-LSTM model, incorporating contextual information with syntactic dependencies, is more effective in representing the information content of news text than the classic bidirectional LSTM model and a baseline sentiment-based representation. The proposed model can serve as a security assessment tool for financial analysts, and benefit more comprehensive financial models and studies.
Calculation of local spins for correlated wave functions
Introduction
In many cases the spin properties of a molecular system can properly be characterized by the spin density. It vanishes identically for a singlet system in every point of the space. This is in accord with the fact that the ground state of most (especially organic) molecules is a singlet and they do not exhibit any explicit spin (magnetic) properties. There are, however, systems for which description by using the spin density is not sufficient to characterize the physical situation: binuclear complexes, diradicals or antiferromagnets. In such systems one postulates the existence of some local spins, although the overall system is a singlet and there is no spin density.
In order to distinguish between a covalent molecule (crystal) and a system of the antiferromagnetic type in which the spins are coupled into a singlet, one should consider the decomposition of the expectation value of the total spin-square operator, ⟨Ŝ²⟩, into atomic and diatomic contributions, and the atomic ones will give the local spins (spin squares).
That problem was first approached by Clark and Davidson,1-4 who decomposed the operator of total spin-square into a sum of atomic and diatomic contributions and computed the expectation value of each. Such a procedure permits one to spot every component of the wave function in which a given orbital appears singly occupied, and assign a contribution to the local spin given by the particular component. This approach is appropriate to identify the covalent (non-ionic) structures and may be used in constructing an effective Heisenberg Hamiltonian, but is not appropriate when the experimentally observable spin (i.e., magnetic) properties are needed: it attributes the value of 3/8 for the expectation value of the atomic spin-square for both hydrogen atoms in the H2 molecule treated at the RHF level, in obvious contradiction with the non-magnetic character of this molecule. For other molecules, likewise, the scheme of Clark and Davidson attributes a local spin to each atom, equal to 3/8 of its covalent valence. In such a situation we have concluded that, instead of computing the expectation values of the atomic and diatomic components of the total Ŝ² operator, one has to decompose the resulting expectation value ⟨Ŝ²⟩ into a physically reasonable sum of atomic and diatomic contributions. As the partitioning of a single physical quantity into several components is usually not unique, we have introduced the additional requirements that (i) one should get no spins whatever for the covalent systems described by a closed-shell RHF wave function using doubly filled orbitals,5-7 and (ii) if the wave function is properly dissociating, then the asymptotic values of the atomic spins obtained for the atoms at large distances should coincide with the values pertinent to the respective free atoms.
That project has been fulfilled first for the single determinant (UHF) wave functions,7 the resulting formula of which has been used with success by Reiher et al. in ref. 8 and has also been realized in our free program.9 Most recently10 that approach has been extended to correlated wave functions, as well.
In the single determinant case we have presented the hS ˆ2i expectation value in terms of the spin density and overlap matrices P s and S, respectively, as thus requirement (i) above is fulfilled automatically.If an atomcentered basis is used, the different terms of eqn (1) can be naturally assigned to the individual atoms or pairs of atoms.In there exists a genuine UHF solution differing from RHF, then different parts of the molecule (e.g., the dissociating atoms) are assigned spin densities of opposite sign, providing a qualitatively correct dissociation pattern and the overall hS ˆzi = 0 simultaneously; the UHF scheme, however, suffers from the shortcoming that the overall wave function does not correspond to any pure spin state. In order to describe situations in which the overall wave function is a singlet but there is a need to speak about local spins, one requires correlated (multideterminant) wave functions.The respective formula has been obtained 10 by deriving the expectation value hS ˆ2i in a ''mixed'' second quantized framework and separating out the values which the different terms have in the single determinant case.Thus one obtains hS ˆ2i as the sum of the right-hand side of eqn (1) and two types of terms which vanish if the wave function is a single determinant.One group of these terms is connected with the deviation of the first-order density matrix from the idempotency characteristic for the single determinants, i.e., with the differences P s S À (P s S) 2 for the LCAO ''density matrices'' P s (s = a or b).Other terms are connected with the so-called ''cumulant'' describing the deviation of the second order density matrix from the expression which it would have in the single determinant case. The authors of the recent papers 11,12 performed a decomposition of hS ˆ2i for correlated wave functions, which is applicable for open-shell systems only.In fact, in their scheme ''zero value is obtained for all one-center and two-center contributions in singlet state systems''; 11 in accord with that, they expressed 12 the components of hS ˆ2i in terms of the spin-density matrix-which identically vanishes for singlets.We consider that a physically inadequate approach: when, for instance, the H 2 molecule dissociates, then it dissociates into free hydrogen atoms, each being in doublet state, hS ˆ2i A = 3/4, even if these doublets may be coupled into an overall singlet, leading to the absence of a definite value for the atomic S z components and zero spin density.In contrast to that approach, we use the requirement that in the asymptotic regime the decomposition should recover the free atomic values. In the present paper we are going to present the formula obtained in ref. 10 in a form more convenient for programming, describe briefly its numerical realization and discuss the results of the first exploratory calculations. The working formula The formula for the decomposition of hS ˆ2i in the correlated case had been derived in ref. 
10 in terms of the atomic basis orbitals.Owing to the fact that in the actual calculations the first and second order density matrices are available in terms of the molecular orbitals, we have rewritten that expression to the compact form Here C s is the coefficient matrix of the MO-s (natural orbitals) of spin s, and the elements of the cumulant D of the second order density matrix C are defined as In practice one does not use natural spin-orbitals but applies a common set of one-electron MO-s (the spatial natural orbitals) to express the density matrices, so C a = C b = C.The elements of the first-and second-order density matrices can be defined through the expectation values of the strings of creation and annihilation operators ĉs+ i and ĉsÀ j , respectively, as and Their values should be extracted from the results of the actual quantum chemical calculations. The different terms of expression ( 2) can be obviously assigned to the atoms on which basis orbitals w m , w n are centered, thus it can trivially be rewritten as a sum of atomic and diatomic contributions: A quantity, closely related to the problems of local spins is the free valence index F A of an atom; 13,14 its advantage is that it can be calculated by using the first-order density matrix only.It gives the difference between the actual valence of the atom and the sum of the bond orders formed by it, so it can be considered as the effective number of the unpaired electrons on the atom.For correlated wave functions the free valence index can be written as In the previous papers 7,10 we have presented the atomic contributions to hS ˆ2i in the form explicitly containing the free valence F A in order to discuss the similarities and differences with the results of Clark and Davidson. 1 Similarly to hS ˆ2i, the free valence index depends on the spin density (if any) and reflects the deviation of the first-order density matrix from the idempotency taking place if the wave function is not a single determinant; it, however, does not contain terms related to the cumulant.For a singlet system, in which P s = 0, the sum of the atomic free valence indices is equal to the ''number of effectively unpaired electrons'' as defined by Staroverov and Davidson. 15he analysis discussed in ref. 10 showed that our formula can describe that a dissociated H 2 molecule exhibits two local spins with hS ˆ2i A = 3/4 on the individual atoms, while the overall system is singlet, hS ˆ2i = 0. Also, it was established that it describes correctly the dissociation of a singlet oxygen molecule into two triplet oxygen atoms-or that of a singlet ethylene into two triplet methylene moieties.Analytical considerations of more complex systems would be cumbersome and one would also be interested in following how exactly these spins emerge during the dissociation and how large spins can be detected in different interesting model systems, not necessarily undergoing dissociation. Results of calculations We have implemented the above equations and accomplished the calculation of first-and second-order density matrices for the wave functions obtained by the full CI (FCI) program of Knowles and Handy 16 linked to an old HONDO version 17 (the same suite that has already been used in ref. 18) as well as by a suitably modified CAS-SCF part of Gaussian-03. 
19tandard basis sets have been used throughout, except that for the H 2 FCI calculations the cc-pVTZ basis set has been used with the 6 Cartesian d-orbitals, instead of the 5 pure ones, owing to the limitations of the HONDO version applied. For H 2 we have performed CAS-SCF(2,2) calculations by using the standard cc-pVTZ basis set with 5 d-orbitals and FCI-ones with 6 d-orbitals.The additional s orbital which is effectively present in the latter case does not, however, cause any appreciable effects.Fig. 1 displays the atomic components hS ˆ2i A and free valences F A for these two types of wave function.(Each interatomic component hS ˆ2i AB is simply equal to the atomic one but with sign minus, thus ensuring the overall hS ˆ2i = 0.) We can see that the CAS and FCI curves are very close to each other, although at the equilibrium distance the FCI method accounts for B91% of the total energetic error remaining at the CAS-SCF level.(The lowest CAS-SCF energy is À1.151550 a.u., while the FCI one is À1.172456 a.u. and the exact Born-Oppenheimer minimum energy is À1.174476. 20,21) That observation is important because it indicates that the ''full valence CAS-SCF'' calculations should be appropriate for our purposes. Inspection of Fig. 1 indicates that the quantities hS ˆ2i A and F A as functions of the internuclear distance behave as could be expected.Also, they basically change parallel to each other.However, the F A curve is more smooth: the hS ˆ2i A curve exhibits an additional inflection point roughly at the same internuclear distance (ca.1.22 A ˚) where the genuine UHF solution departs from the RHF one.At the present this is a curious detail only, because no analysis of the possible relations between the detailed behaviour of the hS ˆ2i A curve obtained at the FCI level and the existence of the distinct UHF solution has been performed as yet.The fact that the interrelations between these two quantities are not fully trivial is also illustrated by Fig. 2, indicating a change of behaviour at the values corresponding to the internuclear distance of ca. 1 A ˚. Fig. 3 displays the hS ˆ2i A and F A curves for the dissociation of the singlet N 2 molecule calculated by the 6 electrons in 6 orbitals CAS-SCF method and cc-pVTZ basis set.The free nitrogen atom (state 4 P 3/2 ) has three electrons outside the closed shells, i.e., for a free nitrogen atom S = 3/2 and hS 2 i = 15/4 = 3.75.One can see that the atomic spin-square curve approaches 3.75 upon the dissociation, and the free valence F A tends to 3, indicating that our definitions permit to correctly recognize the two atomic quartet states within the global singlet wave function. Fig. 4 displays the hS ˆ2i A and F A curves for the dissociation of the triplet ground state and the lowest singlet state of the O 2 molecule calculated by the 8 electrons in 6 orbitals CAS-SCF method and cc-pVTZ basis set.In both cases the correct dissociation into the triplet oxygen atoms (state 3 P 2 ) is observed.This is in accord with the theoretical discussion given in ref. 
10.An interesting feature is that the free valence of the oxygen atom in the dissociated triplet state is 2.5, and not 2 as in the singlet case.This observation can be put in a direct correspondence with the result of Staroverov and Davidson 15 who found and discussed in detail that the number of ''effectively unpaired electrons'' for the triplet O 2 is equal 5-in the dissociation limit the free valences of the atoms can be shown to sum to the number of ''effectively unpaired electrons''.(That counterintuitive result was attributed to a degeneracy of orbitals occupations present in the triplet case. 15) It is also remarkable, that the hS ˆ2i A and F A curves for the singlet state are nearly indistinguishable. ) Both quantities behave as expected, i.e., increase monotonically with the C-C distance.Similarly to the H 2 case discussed above, their interrelation becomes nearly linear over some C-C distance (B2 A ˚).It may be noted that both the free valences and the local spins are quite insignificant on the hydrogens in this system, as the small active space applied is sufficient to describe the dissociation of the C-C bond but not to take into account any correlation elsewhere. Fig. 6 shows the local spin-square curves for the dissociation of ethylene CQC bond, by using 4 electrons in 4 orbitals CAS-SCF method and cc-pVTZ basis set.(The geometry of the CH 2 moieties was kept fixed.)Contrary to the ethane case, the contributions of the hydrogens are small but not negligible.Therefore, both the hS ˆ2i A value for the carbon atoms and the sum of all atomic and diatomic components for a methylene moiety are shown-the latter tends exactly to the value of 2 characteristic for a pure triplet state.That curve is quite analogous to the singlet O 2 curve; also the free valence curve (not shown) goes very close to the local spin-square, although not so close as observed in the O 2 case. It is known that conjugated (p-electron) systems usually have genuine UHF solutions with energies lower than the respective RHF ones even at the equilibrium geometries (see e.g., ref. 22).These solutions break the spin-symmetry of the system, and for symmetric molecules they do not have strict spatial symmetry either.However, if the point group of a singlet molecule has a so called ''halving subgroup'' (subgroup in which the number of elements equals half of that in the whole group) then some symmetry operations only interchange the spins a and b in the wave function.For such systems the projection of the wave function on the singlet subspace restores the complete symmetry of the wave function 23,24 and even at the unprojected UHF level a number of physical quantities have the proper symmetry.As the UHF method is the simplest one in which some correlation is accounted for and there appear local spins in the approximations to the singlet ground states, we present the UHF results for these molecules alongside with the CAS-SCF ones.(The UHF results were calculated by using the program ref. 9, realizing the formulae 7 pertinent to the single determinant case.)As we shall see, the UHF results agree qualitatively with the (much more expensive) CAS-SCF ones, so the UHF method may be useful for a quick orientation. 
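Since the UHF numbers quoted alongside the CAS-SCF ones are obtained from single-determinant density matrices, a minimal sketch of the underlying total ⟨Ŝ²⟩ evaluation is given below. It reproduces only the well-known single-determinant expression in terms of P^α, P^β and S; the atomic and diatomic partition of eqn (1), and the cumulant terms of the correlated formula, are not reproduced here, so the Mulliken-style assignment to atoms is left out on purpose.

```python
import numpy as np

def total_s2_uhf(P_alpha, P_beta, S):
    """Total <S^2> of a single (UHF) determinant from AO-basis matrices.

    P_alpha, P_beta: alpha and beta LCAO density matrices; S: AO overlap.
    Uses the standard single-determinant expression
        <S^2> = Sz*(Sz + 1) + N_beta - Tr(P_alpha S P_beta S),
    with N_sigma = Tr(P_sigma S) and Sz = (N_alpha - N_beta)/2.
    The decomposition into atomic/diatomic terms (eqn (1)) is not done here.
    """
    n_a = np.trace(P_alpha @ S)
    n_b = np.trace(P_beta @ S)
    sz = 0.5 * (n_a - n_b)
    return sz * (sz + 1.0) + n_b - np.trace(P_alpha @ S @ P_beta @ S)

# For an RHF closed shell, P_alpha = P_beta and P_alpha S is idempotent,
# so Tr(P_alpha S P_beta S) = N_beta and <S^2> = 0, in line with
# requirement (i) of the Introduction.
```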
Table 1 displays the free valence indices of carbon atoms and some of their hS ˆ2i components of trans-butadiene, cyclobutadiene and benzene, calculated by the ''p-electron full-valence''-i.e., (4,4), (4,4) and (6,6)-CAS-SCF level of theory and by the UHF method, by using the cc-pVTZ basis set.All results are pertinent to the respective energy minima, except the cyclobutadiene UHF solution of the D 2h symmetry, which was calculated in the RHF minimum, because the only minimum at the UHF level has the symmetry of a square (D 4h ). The relatively large values of the free valences and local spins obtained for trans-butadiene indicate a significant importance of correlation for such a conjugated chain.At the CAS-SCF level, the largest off-diagonal hS ˆ2i element is within the formal ''double bond'', which agrees with our qualitative picture on that molecule.(The negative sign of the hS ˆ2i 12 component indicates that correlation does not basically destroy spin pairing in that bond.)The further offdiagonal elements are small and exhibit an antiferromagnetic type oscillation.The UHF results seem somewhat exaggerated, and the antiferromagnetic nature of the single UHF determinant is prominent. Cyclobutadiene, this classical antiaromatic system, has a D 2h ground state, which means that the rectangular structure with two double and two single bonds, corresponding to only one of the two possible ''Kekule´-structures'', has lower energy than a square permitting the resonance of the latter.This experimental finding is reproduced both at the RHF level and at the ''p-electron full-valence'' CAS-SCF level, but not for UHF. At the square geometry the RHF solution either corresponds to only one of the ''Kekule´-structures'', and does not have the full symmetry of the system, or is symmetry-adapted but has a higher energy (Musher's point-like discontinuity 25 ).Here the exact p-electron solution qualitatively differs from any closed-shell wave functions, because the p-electrons form a small ''molecular antiferromagnet''. 22,26In that point (and practically only in that point) the p-electron wave function is almost exactly described by a spin-projected Slater determinantthe extended (or projected) Hartree-Fock (EHF) wave function. 22,27This EHF wave function can be best described by considering four singly occupied equivalent-but not strictly orthogonal-localized orbitals, each of which is basically (but not completely) localized on one corner of the square, putting on them alternatively a and b spins, 26 then first coupling the identical spins along the diagonals into two triplets (the two ''antiferromagnetic sub-lattices'') and then coupling them into a resulting singlet. 28In accord with that, each corner of the square cyclobutadiene carries a local spin with an hS 2 i A value exceeding half of the value 3/4 characteristic for a free spin; the large value of the free valence index F A is in accord with that.(The overlap of the four localized orbitals reduces the effectively unpaired character of the electron sitting at each corner.)The off-diagonal hS 2 i components also reflect well the antiferromagnetic character of the wave function. 
In the D 2h conformation having the minimum energy, the local spins are much reduced, but still exceed the values of trans-butadiene.Similarly to the latter molecule, the off-diagonal hS 2 i elements again emphasize the spin-correlation within the bonds and exhibit an antiferromagnetic behaviour.The antiferromagnetic character of the wave function is significantly exaggerated by the UHF method, which can explain why there is no D 2h minimum on the UHF potential curve.Fig. 7 displays some CAS-SCF results obtained for cyclobutadiene by changing the ratio between the sides of the rectangle (the 6-31G** basis has been used).It can be seen that the energy minimum corresponds to a D 2h geometry, while at the square conformation the energy has a shoulder.Both the free valence and the local spin-square exhibit well-defined maxima at the square geometry.Among the p-electron systems studied, benzene shows the smallest deviation from the closed-shell RHF structure characterized by zero values of all spin-components and of the free valences.The CAS-SCF and UHF values show an overall similarity; however, UHF can describe only the oscillating antiferromagnetic behaviour but not the finer details.For instance, at the CAS-SCF level the hS 2 i 14 component has a larger absolute value than the hS 2 i 13 one, which may be put into correspondence with the importance of spin-pairings in a ''Dewar-benzene'', the latter being also of significance for aromaticity of benzene. 29inally we shall discuss very briefly a simple scheme imitating at the ab initio level of calculations the classical ''three-center four-electron'' model of superexchange, used to explain antiferromagnetism in systems like oxides or fluorides of transition metals.In this model one considers explicitly only two magnetic ions and one ligand atom, and each center is represented by only one orbital.These usually are the appropriate 3d orbitals of the metals and a 2p orbital of the ligand.(For a detailed analysis of this model we refer to ref. 30 and references therein.)For that reason we selected the system Sc-OH 2 -Sc with the linear arrangement of atoms Sc-O-Sc, in which the hydrogens were added to fix the otherwise ''dangling'' electrons of the oxygen atom.By performing (4,3) CAS-SCF calculations by using the cc-pVTZ basis set, we got the results which could be expected: the singlet is tangentially (by some 0.12 mH) lower in energy than the triplet, and in both cases there are sensible (exceeding 0.007) spin-square values only connected with the Sc atoms: the atomic spin-squares are in both cases close to that of a free spin (0.7248 for the singlet and 0.7205 for the triplet), and the interatomic offdiagonal ones (À0.7177 and 0.2411)-together with the minor components-provide the resulting hS 2 i to be exactly 0 and 2, respectively.The free valence indices are in full agreement with this picture: we got the values 0.976 and 0.960 on the Sc atoms in the singlet and triplet cases, respectively, and all the other ones are essentially negligible. By extending the active space by 4 electrons and 4 orbitals, and doing (8,7) CAS-SCF calculations, we got a significantly lower energy but only slightly different qualitative results as far as the hS 2 i components are concerned.The increased flexibility resulted in a larger stabilization of the singlet with respect the triplet (ca. 
2 mH) but the local spins remained essentially unchanged (0.7357 and 0.7365 in the singlet and triplet cases; the hS 2 i components not connected with the Sc atoms do not reach the value 0.02).There is, however, a significant difference in the free valences: the values 1.361 and 1.354 were obtained for the Sc atoms in the singlet and triplet cases, respectively, the other values again being very small.That difference indicates that in systems which are of interest from the point of view of magnetic properties, the information contained in the (much easier to calculate) freevalence index may be insufficient and one has to calculate explicitly the values of the respective local spins. Conclusions In this paper we have rewritten the formula proposed in ref. 10 for decomposing the expectation value hS 2 i of the total spin operator into atomic and diatomic components in the case of general (correlated) wave functions in terms of the cumulant and realized it numerically for the first time.The results confirm its conformity of this formula with the physical expectations; one may suppose that this is the proper decomposition which corresponds to the-physically rather obviousrequirements (i) and (ii), and perhaps no further freedom remained in choosing the scheme of that decomposition.zThe atomic hS 2 i components in most cases change in parallel Fig. 7 Total energy of cyclobutadiene (a.u.), the atomic spin square hS ˆ2i A and the free valence index F A of the carbon atoms as functions of the ratio a/b of the sides of the rectangle, calculated at the (4,4) CAS-SCF level by using the 6-31G** basis set.(One of the sides of the rectangle has been kept fixed at the value optimized for the square conformation.)z This is the case, even if for some exotic problems this decomposition may produce somewhat strange results.Thus, for the excited singlet state of H 2 one gets negative atomic hS 2 i components at shorter distances, while for the S z = 0 triplet state the atomic components can exceed the value 3/4 characteristic for a single electron.This phenomenon can perhaps be put in parallel with the negative spin densities which are observed for some atoms in many free radicals.However, we do not think that analogous results could be encountered for any ground state system.(We are grateful to a referee calling our attention to these excited states.) Fig. 1 Fig. 1 Change of the atomic spin square hS ˆ2i A (upper part) and of the free valence index F A (lower part) during the dissociation of the H 2 molecule calculated at the full CI (solid lines) and valence CAS-SCF levels (dashed lines) by using the cc-pVTZ basis set. Fig. 2 Fig. 2 Interrelation between the free valence index F A and the atomic square hS ˆ2i A for the dissociation of the H 2 molecule calculated at the full CI (solid line) and valence CAS-SCF levels (dashed line) by using the cc-pVTZ basis set. Fig. 
5 Change of the atomic spin square ⟨Ŝ²⟩_A and of the free valence index F_A of the carbon atoms with the increase of the C-C distance in the ethane molecule at the (2,2) CAS-SCF level by using the cc-pVTZ basis set. (The geometry of the CH3 moieties is optimized for each C-C distance.)
Fig. 3 Change of the atomic spin square ⟨Ŝ²⟩_A and of the free valence index F_A during the dissociation of the N2 molecule calculated at the (6,6) CAS-SCF level by using the cc-pVTZ basis set.
Fig. 4 Change of the atomic spin square ⟨Ŝ²⟩_A and of the free valence index F_A during the dissociation of the triplet ground state (upper part) and of the lowest singlet state (lower part) of the O2 molecule calculated at the (8,6) CAS-SCF level by using the cc-pVTZ basis set.
Fig. 6 Change of the atomic spin square ⟨Ŝ²⟩_A of the carbon and of the sum of the ⟨Ŝ²⟩ components for a methylene moiety with the increase of the C-C distance in the ethylene molecule at the (4,4) CAS-SCF level by using the cc-pVTZ basis set.
Table 1 Free valence values and ⟨Ŝ²⟩ components for butadiene, cyclobutadiene and benzene treated at the ''p-electron full-valence'' CAS-SCF and UHF levels of theory with the cc-pVTZ basis set. (a) At the RHF minimum.
The schemes analysis of profile milling of long and non-technological workpieces . The article considers alternative schemes of profile milling of long non-technological workpieces– of the peripheral segments, formed during longitudinal opening of logs. Peripheral segments are a large waste of manufacture and contain the wood fibres with high physics-mechanical and operational properties. Therefore manufacturing qualitative wood products from peripheral segments is especially actual problem. However, the named workpieces are characterised by low adaptability to manufacture, therefore working out and the analysis of the machining schemes, expelling a resonance in technological system are necessary. The non-resonance machining scheme is used in a design of the milling machine tool PFP-100 for profile milling of a large wood waste. Sophisticated complex machine systems [7] that provide a comprehensive cutting of logs (at the same time peripheral segments into small elements) are used abroad. After such a cutting of the peripheral segments, its small elements are glued, dried and machined, which leads to a deterioration of the ecological situation (due to the use of adhesives), an extension of the technological chain and a significant increase in the cost of finished products. Prospective is not the use of expensive machine tools, but the development of much cheaper equipment and environmentally friendly processes of profile processing of large sawmill wastes, on the basis of which the release of quality profile products becomes real. The solution of these issues is possible on the basis of deep comprehensive research, development and design work, and, first of all, the development of a vibration-resistant technological profile milling scheme. The purpose of this study is to develop a profile milling scheme for non-technological large sawmilling wastes characterized by relatively small elastic movements of the workpiece under the action of the workload, a slight vibration speed of the elastic recovery of the wood, and less severe impacts of the cutting teeth of the cutter on the workpiece surface. To achieve the goal, it is necessary to solve the following tasks: development of alternative profile milling schemes for peripheral segments; analysis and choice of the scheme for practical implementation; experimental verification of the selected scheme; development of recommendations on the further use of research results. In the logging process, logs are cut down, branches are cut, transported along the ground, as a result of which the outer surface of the log contains nonmetallic inclusions (sand, clay, etc.), which adversely affects the resistance of the cutting tool during profile milling. The oncoming milling scheme, unlike the passing one, is characterized by less dynamic impact of the mill on the workpiece, and its teeth are not encountered when processing foreign abrasive inclusions, which provides favorable cutting conditions. For these reasons, the material presented hereafter refers to the on-coming profile milling of segments. Due to the large extent of the machined surfaces of the peripheral segments, the use of a tool for rigid fixing of the workpiece is not practical due to the considerable complexity of the technological equipment. Therefore, alternative profile milling schemes were developed, in which: -the peripheral segment (workpiece) is supported by several rollers, between which there is no intermediate support ( Fig. 
2) (first diagram); -in the middle between the rollers a plate 4 is fixed, forming a gap Sp with the technological base of the workpiece (Fig. 3) (second scheme); -a pulley 8 is installed between adjacent rollers, which is in contact with the workpiece setting base of the workpiece (Fig. 4) (the third scheme). In the process of profile milling of the unpeeled curvilinear surface of the peripheral segment 1 (Fig. 2), an allowance of thickness t is removed. The main (PZ+ΔPZ) and radial (Py+ΔPy) components of the cutting force act on the processed segment, which are characterized by the increments ΔPZ and ΔPy due to the variability of the allowance, hardness and knotty. After a single cutting with the tooth of the cutter, the workpiece has a trace in the form of a curve of line 2. The centrifugal force Qsin(ωτ+φ1) also acts on the workpiece (ω is the angular velocity of the mill, τ is the current machining time, φ1 is the initial angular position of the force Q), caused by the principal vector of imbalances Dst, and the bending moment Msin (ωτ+φ2) (φ2 is the angular reference of the moment M), due to the main moment of the instrument MD imbalances. The force Q and the moment M change during processing by sinusoidal law. The maximum load on the spindle unit with the tool occurs when the main vector Dst is perpendicular to the MD main imbalance moment. The peripheral segment 1, mounted on the technological base surface 4, moves with the speed of the working feed when the bending rolls 3 and 5 rotate with angular velocity ω1. When the segment 1 is pressed by rollers 6 and 7 with the forces Qr6 and Qr7, the teeth of the rolls are cut into the surface 4 and the drive forces FV3 and FV5 are transmitted to the segment. The roller 6 is pointed, therefore, under the action of the force Qr6, it cuts into the workpiece, ensuring its direction when entering the processing zone. There is a normal reaction NV3 of the roll 3. The profiled roller 7 directs the segment at the exit from the treatment zone without damaging its treated surface. Changes in the elastic displacements of the segment under the influence of a variable workload lead to a change in the frictional forces of the segment with the bending rolls 3, 5 and the rollers 6, 7. In the absence of support between adjacent rollers (the first scheme), there is a danger of resonance due to the coincidence of the natural frequency of the transverse bending vibrations of the workpiece with the frequency of external dynamic action, therefore, before processing it is necessary to perform calculations for the absence of resonance. The frequency of natural oscillations of the workpiece as a function of its dimensions, modulus of elasticity and other parameters is determined by the formula: where hn, ho, Bn, Bo, respectively, is the height and width of the untreated and processed part of the peripheral segment; E -modulus of elasticity of wood; lv-interaxial distance of adjacent rollers; Mk is the mass of the workpiece that participates in the oscillatory process. From (1) it follows that the value of the natural oscillation frequency fc of the workpiece can be controlled, and, consequently, also the vibration resistance of the profile milling process of the peripheral segment. The frequency of external dynamic impact during profile milling is determined by the formula: where n is the rotational speed of the mill; z -number of teeth of the cutter. 
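The forcing-frequency expression, eqn (2), did not survive in the text above; the sketch below uses the standard tooth-passing relation f = n·z/60 for n in min^-1 (equivalently f = v·z/(π·D) for cutting speed v and cutter diameter D), which is consistent with the 330...830 Hz figures quoted below but should be read as an assumption rather than as a reproduction of the paper's formula. The cutter diameter and the safety margin in the non-resonance check are illustrative assumptions as well.

```python
import math

def forcing_frequency_from_rpm(n_rpm: float, z: int) -> float:
    """Tooth-passing (forcing) frequency in Hz, assumed form of eqn (2): f = n*z/60."""
    return n_rpm * z / 60.0

def forcing_frequency_from_speed(v: float, z: int, d: float) -> float:
    """Same frequency from cutting speed v (m/s) and cutter diameter d (m)."""
    n_rpm = 60.0 * v / (math.pi * d)
    return forcing_frequency_from_rpm(n_rpm, z)

def is_resonance_free(f_forcing: float, f_natural: float, margin: float = 0.2) -> bool:
    """Crude check that the forcing frequency stays away from the natural
    frequency of the workpiece's transverse bending vibrations (eqn (1));
    the 20% margin is an assumption, not a value from the paper."""
    return abs(f_forcing - f_natural) > margin * f_natural

# Example: a cutting speed of 45..50 m/s, z = 4..10 teeth and an assumed cutter
# diameter of about 0.18 m give forcing frequencies of roughly the same order
# as the 330...830 Hz interval quoted in the text.
print(forcing_frequency_from_speed(47.5, 4, 0.18))   # ~336 Hz
print(forcing_frequency_from_speed(47.5, 10, 0.18))  # ~840 Hz
```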
Absence of resonance in the technological system during profile milling of the workpieces is achieved by meeting condition (3). From expressions (1)-(3), the working rotational frequency of the shaping cutter at which there is no resonance in the technological system is determined by expression (4). Given initial information on the geometric dimensions of the peripheral segments, the modulus of elasticity of the wood, the oscillating mass and the interaxial distance of the adjacent rolls, expression (4) makes it possible to determine the resonance-free operating speed of the cutting tool, in min^-1, for specific profile milling conditions. Calculations according to (2) show that, when peripheral segments are processed at a cutting speed of 45-50 m/s with a milling cutter having z = 4-10 cutting teeth, the frequency of the forcing force lies in the interval 330-830 Hz. Resonance can be avoided with a small interaxial distance between adjacent rollers, lv = 0.2-0.5 m, and by processing workpieces of relatively large dimensions hn, ho, Bn, Bo (100 mm or more); however, setting the rolls at a small distance lv complicates the workpiece feed mechanism, and increasing the workpiece size narrows the technological possibilities of the first scheme. The analysis reveals the drawbacks of the first scheme: the friction forces vary over a wide range, which increases the probability of self-oscillations in the technological system; the tool contacts the workpiece under harsh conditions, when the milling cutter and the workpiece move towards each other during elastic recovery; and resonance-free processing is possible only with a small interaxial distance lv of the rolls and a relatively large workpiece, which limits the technological possibilities of the scheme. Let us analyse the second scheme (Fig. 3). The length of the support plate 4 is lp; the rolls 3 and 5 support the workpiece 1, creating normal reactions NV3 and NV5. The plate 4 limits the elastic movements of the workpiece to the value of the static gap Sp = [y] between the plate and the locating base of the segment. Pressing the profiled roller 7 against the machined surface of the segment fixes the spatial position of the longitudinal axis of the segment and prevents its displacement when shear forces arise, for example when knots are being machined. In the absence of cutting, the normal reactions NV3 and NV5 are determined by the degree of compression of the springs and by their stiffnesses dr6 and dr7: NV3 = Qr7; NV5 = Qr6. The approximate total friction force between the rolls, the rollers and the workpiece in the absence of cutting is given by expression (5), where f1 is the coefficient of rolling friction of wood on metal. The friction force between the plate 4 and the locating plane of segment 1 is given by expression (6), where f2 is the coefficient of sliding friction of wood on metal and Np is the normal reaction of the plate. Since the support plate takes part of the external load, the friction forces and the external load on rolls 3 and 5 are smaller in the second scheme than in the first. The total friction force (7) consists of the friction forces of the workpiece against the rolls, the plate and the rollers. The friction force Ftr2 is greater than the friction force arising in the first processing scheme.
On the level of elastic movement of the workpiece it is preferably better using the second scheme than the first one, since the elastic restoring forces of the workpiece are smaller in the second case and equal to: where y1 -elastic movements of the workpiece during the first scheme of segment processing, dc -the stiffness of the segment in the vertical direction. Smaller elastic forces affect the decrease in vibration speed of the elastic restoration of wood, which leads to a less severe impact of the workpiece when it meets the cutting teeth of the mill. The components FV3 , FV5, Fr6, Fr7 can be reduced by setting the rolls 3, 5 and the rollers 6, 7 on the rolling bearings. The second scheme, in comparison with the first one, provides a more relaxed profile milling of peripheral segments, which is its advantage. Reduction of the numerical value of the frictional force is possible by using, instead of the plate 4, a roller 8 (figure 4) mounted on the rolling bearing. As a result, we get the third processing scheme. The normal reaction NV3 of the roll 3 in the absence of cutting is equal to the force Qr6.. The profiled roller 7 presses on the machined surface with a force Qr7, which facilitates the infeed of the roll 5 into the mounting surface of the peripheral segment. Since the workpiece has a support in the form of a roller mounted on the rolling bearing, the frictional force in the support is insignificant. The impact pulse of the cutter tooth is perceived by a low frictional force, which leads to a selection of gaps in the kinematic elements of the feed chain and to the elastic twisting of the shafts under the action of the cutting torque. After the cutting tooth emerges from contact with the segment, the cutting torque is zero. The process of elastic restoration of the deformed elements of the kinematic supply chain and cutting tool begins, and the elastic forces are added with the feed force (FV3 + FV5), which causes an increase in the speed of the translational motion of the workpiece towards the shaped cutter. The interaction of the next cutter tooth with wood occurs in unfavorable conditions, when the speed of the workpiece movement with respect to the tool is maximal. A situation is created that is similar to the first processing scheme, with the only difference being that in the first case resonance oscillations occur in the vertical plane, and in the third scheme -in the horizontal plane. The milling process under the third scheme becomes unstable [8]. To verify this situation, experiments were carried out. During the profile milling of the peripheral segment, according to the third scheme, resonant phenomena were observed when the workpiece, making intensive oscillations in the horizontal plane, came out of contact with the shaping cutterl in the direction opposite to the direction of the working feed, after which it moved with increasing speed to the cutting tool. There was a blow to the teeth of the cutter on the workpiece, as a result of which the workpiece again came out of contact with the cutter. After inspecting the contact points of the installation reference plane of the workpiece with the drive rolls, it was found that the traces of the cutting teeth of the drive rolls were not inserted into the wood, that is, the "teeth" of the wood formed by pressing the metal rolls into the workpiece were cut off. The described picture was due to insufficient resistance to the rapid elastic recovery of the kinematic supply chain elements. 
The process was characterized by intensive impact of the tool on the workpiece, there was a real danger of breakage of the cutting teeth of the mill, so this treatment processing was stopped. Thus, the non-resonant profile milling of the peripheral segments is provided to the greatest extent by the second scheme, in which the additional support of the workpiece is made in the form of a metal plate installed in the cutting zone opposite to the shaped cutter. Mathematical dependencies (1) - (8) were used in the development of a woodworking machine with software control of the PFP-100 model, in which the second scheme of profile milling of peripheral segments and stemwood was implemented. Technical and technological solutions, used in the machine tool, are protected by patents of the Russian Federation [9,10]. The real machine model was certified, presented at the 6th Moscow International Salon of Innovations and Investments, where it was awarded a diploma and a silver medal.
Prevalence and Antimicrobial Susceptibilities of Bacteria Isolated from Circumcised and Non-Circumcised Women with Urinary Tract Infections in Different Gynecological Clinics in Khartoum Locality, Sudan Urinary tract infections (UTIs) are one of the diseases that are widely spreading among women. A number of factors contribute to UTIs, including circumcision, which narrows the opening of the urinary system. This cross-sectional study was conducted from April 2019 to February 2021 to detect the frequency of antimicrobial-resistant bacteria isolated from circumcised women attending two Clinics, for Gyncology in Khartoum locality. Conventional methods were used for isolation, identification, and antimicrobial susceptibility testing. A total of 80 midstream urine samples (n = 80) were collected from all female eligible volunteers, of which 40 had been circumcised and 40 had not. The study investigated 80 females aged 7-70 years, with a mean of 29.3 + 13.1 SD. There were 16/40 (40%) circumcised women who were married and 23/40 (60%) single, whereas for non-circumcised women there were 7/40 (17.5%) married and 33/40 (82.5%) single. Among the circumcised patients, 34/40 (85%) had growths compared to 6/40 (15%) of the non-circumcised participants, and UTIs were significantly associated with circumcision (P=0.001). Circumcised females had a 32 times higher odd ratio (O.R) of UTIs than non-circumcised females. Escherichia coli was the most predominant isolate among circumcised and non-circumcised women (15(37.5%)). The isolated bacteria in circumcised women were moderately sensitive to Augmentin 22/34 (67.7%) and Gentamycin 20/34 (58.8%) compared to other antimicrobial agents; Ciprofloxacin 16/34(47.1%), Cefuroxime 12/34(35.3) and Amoxycillin 10/34 (29.4%) while all Gram negative rods were highly resistant to Nalidixic acid (100.0%). In contrast to non-circumised women; all isolated bacteria were highly sensitive to Gentamicin 6/6 (100.0%) and Cefuroxime 5/6(83.3%), and moderate sensitive to Augmentin 4/6 (66.7%) and Ciprofloxacin 4/6(66.7%). Also all isolated were highly resistant to Nalidixic acid (100.0%) and Amoxycillin 1/6 (16.7%). UTIs and antimicrobial-resistant bacteria were more prevalent among circumcised women than non-circumcised women. E.coli was the most prevalent bacteria among circumcised and non-circumcised women. Urinary tract infections (UTIs) can affect any part of the urinary system, and women are more likely than men to develop UTIs 1 . Generally, UTIs occur when bacteria enter the urinary tract through the urethra and begin to multiply in the bladder. Despite the urinary system's designed defenses, these can sometimes fail. When that happens, bacteria may take hold and grow into a full-blown infection in the urinary tract. Women experience more than one infection during their lifetimes 2 . The anatomy of the female urethra is of particular importance to the pathogenesis of UTIs. The female urethra is relatively short compared to the male urethra, and it is situated close to the warm, moist, per rectal region, which is highly populated by microorganisms 1 . 
Other risk factors include: sexual activity, certain type of birth control and menopause (after menopause a decline in circulating estrogen causes change in the urinary tract that make women more vulnerable to infection) 2 , urinary tract abnormalities (babies born with urinary tract abnormalities), blocking in the urinary tract; as kidney stone, catheterize (people who can't urinate on their own and use a tube to urinate), urinary procedure (which can be recent urinary surgery or an exam of the urinary tract that involves medical instrument) 3 . Female genital mutilation (FGM) also known as female genital cutting or female circumcision that don't allow the urine to leave completely or cause urine to back up in the urethra have an increased risk of UTIs and menstrual problem 4 . At least 200 million girls and women alive today have undergone FGM 7 living in 30 countries in Africa, the Middle East and Asia where FGM is practiced 8 . The most serious type of FGM is type three that create covering seal and small opening is left for urine and menstrual blood to escape 4 . FGM is thought to be a tribal tradition or an Islamic imperative, and several studies have reported the reasons that a girl might undergo FGM include providing her with an honorable social life, preserving her virginity, and allowing her to become a mature woman for a safe marriage 5,6 The overall prevalence of urological complications in women with genital mutilation is 20%. Recurrent urinary tract infections, lower urinary tract symptoms, urinary retention, urogenital fistulas, meatus stenosis, urethral stones, and megaurethra are the reported ones 9 . All those risk factors can lead us to recurrent infection and may result in antibiotic resistant bacteria. Also geographic variation in etiologic agents of UTIs and their resistance patterns in antibiotics 10,11 UTIs is the second most common infection presenting in community practice which associated with elevated antibiotic resistance in Sudan 12 . Female genital mutilation may be increasing UTIs 5 . To our knowledge, there are few studies concerning the consequences of circumcision on the urinary system and its risk for increasing antibiotic-resistant bacteria associated with UTIs. So, the ultimate goal of this study is to provide information about this problem which may highlight circumcision and its association with antibiotic-resistant bacteria causing UTIs. Methods This cross-sectional, case-control, hospital-based study was conducted in two clinics in Khartoum locality. The study was carried out between July 2019 and March 2021. The study included 80 Sudanese women (n=80) with genital mutilation (n=40 case groups) and 40 non-genital mutilated women (n=40 control groups); all of them had UTI symptoms including (pyuria, loin pain, burning micturition, frequency, urinary retention and urgency of urination); and with different ages. ethical considerations Ethical approval No. (MLS-RB-07-3-18) to conduct this study was obtained from the Scientific Research Committee, Collage of Medical Laboratory Science, Sudan University of Science and Technology and Dan and Nobatia Clinics in Khartoum Locality. Informed consent was obtained from participants before collection of the urine samples. data collection Data were collected using direct interview the patients, which provided information conceding each case examined. 
Laboratory processing: specimen collection and processing. The participants collected mid-stream urine specimens in sterile wide-mouthed containers after being informed of the collection procedure, and all specimens were processed within one hour of collection. Isolation. Aseptically, using a 0.005 calibrated loop, a loopful of well-mixed urine was inoculated onto blood agar and Cysteine Lactose Electrolyte Deficient (CLED) agar plates, which were incubated aerobically at 37°C overnight. Identification of the isolated bacteria. The isolated bacteria were identified by colonial morphology, after assessment of significant bacteriuria (>10^5 CFU/ml of urine), Gram's stain and conventional biochemical tests; biochemical tests were used for the identification of Gram-negative rods. Antimicrobial susceptibility testing. Using a sterile wire loop, 3-5 well-isolated, purified, fresh colonies of similar appearance of the tested organism were touched and emulsified in 3-4 ml of sterile physiological saline. The turbidity of the suspension was matched to the standard turbidity (0.5 McFarland standard) in good light. A sterile swab was used to inoculate a plate of Mueller-Hinton agar; excess fluid was removed by pressing and rotating the swab against the side of the tube above the level of the suspension. The swab was streaked evenly over the surface of the medium in three directions, rotating the plate by approximately 60° each time to ensure even distribution. With the Petri dish lid in place, about 3-5 minutes (no longer than 15 minutes) were allowed for the agar surface to dry. Using sterile forceps, the appropriate antimicrobial discs were placed, evenly distributed, on the inoculated plate. After overnight incubation, the plates were examined to ensure that growth was confluent, and the diameter of each zone of inhibition was measured in mm with a ruler. The zone sizes of each antimicrobial disc were interpreted using the interpretative chart in the CLSI guidelines 16. Statistical analysis. The data obtained from this study were analysed using the Statistical Package for the Social Sciences (SPSS) version 20.0. Frequencies were expressed in tables, and the Chi-square test was used to determine significant differences at a p-value ≤ 0.05. Results. A total of 80 urine specimens were collected: 40 from circumcised women and 40 from non-circumcised women, whose ages ranged from 7 to 70 years with a mean age of 28.8 ± 13.8 SD. Among circumcised women, 16/40 (40%) were married and 23/40 (57%) were single, while in the non-circumcised group 7/40 (17%) were married and 33/40 (82%) were single, as shown in Table 1. The growth rate was 34/40 (85%) among circumcised females and 6/40 (15%) among non-circumcised participants; there was a highly significant association between circumcision and UTIs (P-value = 0.001), and circumcised women had a 32 times higher odds of UTIs than non-circumcised women, as shown in Table 2. Among circumcised women, the isolated bacteria were 24/34 (70.6%) Gram-negative rods and 10/34 (29.4%) Gram-positive cocci, while among non-circumcised women all isolated bacteria were Gram-negative rods (6/6, 100%) and no Gram-positive cocci were isolated, as displayed in Table 3. Escherichia coli was the most predominant isolate among both circumcised and non-circumcised women. Circumcised women were 5, 8, 2 and 3 times more susceptible to urinary tract infection with Escherichia coli, K. pneumoniae, Proteus mirabilis and Pseudomonas aeruginosa, respectively, than non-circumcised women. There was a strong association between the isolated organisms and circumcised women (P-value = 0.001), as illustrated in Table 4.
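The association and odds ratio quoted above can be reproduced directly from the reported 2×2 table (34/40 culture-positive circumcised women versus 6/40 non-circumcised women). The original analysis used SPSS; the short scipy-based sketch below is only an illustrative re-computation of the same figures.

```python
# Illustrative re-computation of the 2x2 association reported above
# (culture growth vs. no growth by circumcision status). The original analysis
# used SPSS; this scipy-based check is only a sketch of the same calculation.
import numpy as np
from scipy.stats import chi2_contingency

# rows: circumcised, non-circumcised; columns: growth, no growth
table = np.array([[34, 6],
                  [6, 34]])

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)          # cross-product odds ratio

chi2, p, dof, expected = chi2_contingency(table)

print(f"odds ratio ~ {odds_ratio:.1f}")         # ~32, matching the reported value
print(f"chi-square = {chi2:.1f}, p = {p:.2e}")  # p well below 0.05, consistent with P = 0.001
```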
All the bacteria isolated from circumcised women were most sensitive to Gentamycin (7/12, 58.0%) compared with other antimicrobial agents, while in non-circumcised women sensitivity was 100.0% for Gentamycin and Augmentin. All isolated Gram-negative rods in both groups were totally resistant to Nalidixic acid, as displayed in Tables 5 and 6. Discussion. In this study, the growth rate was 34 (85%) among circumcised women compared with 6 (15%) among non-circumcised women. This result largely confirms the observation of Elduma (2018), who listed UTIs among the complications of circumcision in Sudan 5. In the current study, circumcised women were more likely to have and to develop UTIs (85%) than the non-circumcised group (15%). This result is similar to that obtained by Amin et al. (2013) 17 in Egypt, who reported significantly different types of UTIs in 86.6% of their study group (251 circumcised women participated in that study). Gram-negative rods were isolated more frequently than Gram-positive cocci in this study, 24/34 (70.6%) versus 10/34 (29.4%) in circumcised women; in non-circumcised women, Gram-negative rods also grew more frequently than Gram-positive cocci, in 6/40 (15%) of the cases, and no Gram-positive cocci were isolated among non-circumcised women. This is in harmony with findings reported in Sudan by Saeed, who found more Gram-negative bacteria (45, 65%) than Gram-positive bacteria (24, 35%) among 69 urine samples with significant growth 12. The present study found that the commonest isolate was E. coli (12, 35.3%), followed by En. faecalis (8, 23.5%) and K. pneumoniae (7, 20.6%) among circumcised women, while E. coli (3, 50%) was the most prevalent among non-circumcised women. This is close to the study carried out by Theodore 18 in Nigeria, who found significant growth in 141 of 181 urine specimens, with E. coli accounting for 47 (33.3%), followed by Klebsiella species with 28 (19.9%) and En. faecalis with 5 (3%). Among non-circumcised females, E. coli was also more predominant than other organisms. The bacteria isolated from circumcised women were highly resistant to a number of the antimicrobial agents used, including Nalidixic acid (100%), Amoxycillin (70.5%) and Cefuroxime (64.7%). This was in agreement with a study conducted by Ahmed and colleagues 19 in Sudan, in which E. coli showed high resistance to amoxicillin but low resistance to nalidixic acid. In non-circumcised women, 83% of isolates were resistant to Nalidixic acid and Amoxycillin, whereas none (0%) were resistant to Cefuroxime.
Six-fermion production at e+e- colliders The class of six-fermion production processes at e+e- colliders comprises very interesting particle reactions, such as the production of top-quark pairs and of Higgs bosons in the intermediate Higgs mass range, the scattering of massive gauge bosons, and triple gauge-boson production. The Monte Carlo event generator LUSIFER is designed for the analysis of such processes. A few illustrating results obtained with LUSIFER are discussed. Introduction The Monte Carlo event generator Lusifer in its first version, which is described in Ref. [1] in detail, deals with all processes e + e − → 6 fermions at tree level in the Standard Model. The predictions are based on the full set of Feynman diagrams, the number of which is typically of the order of 10 2 -10 4 . Fermions other than top quarks, which are not allowed as external fermions, are taken to be massless, and polarization is fully supported. The helicity amplitudes are generically calculated with the spinor method of Ref. [2]. The phase-space integration is based on the multi-channel Monte Carlo integration technique [3], improved by adaptive weight optimization [4]. Channels and appropriate mappings are provided for each individual diagram in a generic way. More details on the phase-space parametrizations can be found in Refs. [5,6]. Owing to the potentially large number of Feynman diagrams per final state, an efficient generic approach has been crucial, in order to gain an acceptable speed and stability of the program. Initial-state radiation is included at the leading logarithmic level employing the structure-function approach (see e.g. the appendix of Ref. [7]). In the following we collect a few illustrating results of Ref. [1] that have been obtained with Lusifer. In some cases, the tuned comparison with the combination of the Whizard [8] and Madgraph [9] packages is included in the discussion. Specifically, we focus on top-quark pair production, the production of Higgs bosons in the intermediate Higgs mass range, and the scattering of massive gauge bosons. The precise input for the used parameters and phase-space cuts, as well as much more results, can be found in Ref. [1]. We refer to the literature for further discussions of top-quark pair production [10,11,12], Higgs-boson production [12,13], vector-boson scattering [14], and triple gauge-boson production [11,12], which are also based on full e + e − → 6f matrix elements. Results On Top-Quark Pair Production In Table 1 we collect some results on cross sections that receive contributions from top-quark pair production, e + e − → tt → 6f . The difference between the cross sections with two and four quarks in the final states roughly reflects the colour factor 3 between leptonically and hadronically decaying W bosons that have been produced in t → bW + . The cross sections are all strongly dominated by the signal diagrams for resonant tt production, which are identical for all considered final states. Differences are entirely due to so-called background diagrams, the size of which is, however, very sensitive to the angular separation cut between outgoing e ± and the beams. The inclusion of gluon-exchange diagrams would not influence the integrated cross section significantly. The numbers show that ISR reduces the cross sections at the level of ∼ 4% at a centre-of mass (CM) energy of √ s = 500 GeV. Finally, the comparison of the Lusifer and Whizard & Madgraph results reveals good agreement. 
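The multi-channel Monte Carlo phase-space integration with adaptive weight optimization mentioned in the introduction above can be illustrated with a one-dimensional toy example. The integrand, the two Breit-Wigner-like channels and the weight-update rule below are illustrative assumptions in the spirit of the Kleiss-Pittau approach, not LUSIFER's actual implementation.

```python
# A minimal sketch of multi-channel Monte Carlo integration with adaptive
# channel weights. The integrand, channels and update rule are illustrative;
# the real generator works in a high-dimensional phase space with one channel
# per Feynman-diagram topology.
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    # toy "matrix element squared": two narrow peaks on [0, 1]
    return 1.0 / ((x - 0.3) ** 2 + 0.01 ** 2) + 1.0 / ((x - 0.7) ** 2 + 0.01 ** 2)

class BWChannel:
    """Breit-Wigner-like mapping peaked at m with width w, normalized on [0, 1]."""
    def __init__(self, m, w):
        self.m, self.w = m, w
        self.lo = np.arctan(-m / w)
        self.hi = np.arctan((1.0 - m) / w)

    def sample(self, n):
        u = rng.random(n)
        return self.m + self.w * np.tan(self.lo + u * (self.hi - self.lo))

    def density(self, x):
        return self.w / ((self.hi - self.lo) * ((x - self.m) ** 2 + self.w ** 2))

channels = [BWChannel(0.3, 0.01), BWChannel(0.7, 0.01)]
alpha = np.array([0.5, 0.5])          # a-priori channel weights

for iteration in range(4):            # a few adaptation steps
    n = 20000
    idx = rng.choice(len(channels), size=n, p=alpha)
    x = np.concatenate([channels[i].sample(np.sum(idx == i)) for i in range(len(channels))])
    g_i = np.array([c.density(x) for c in channels])   # per-channel densities
    g = alpha @ g_i                                    # mixture density
    w = f(x) / g                                       # Monte Carlo weights
    estimate, error = w.mean(), w.std() / np.sqrt(n)
    # adaptive weight optimization (Kleiss-Pittau style update of the alphas)
    W = np.array([(w ** 2 * g_i[i] / g).mean() for i in range(len(channels))])
    alpha = alpha * np.sqrt(W)
    alpha /= alpha.sum()
    print(f"iter {iteration}: I = {estimate:.1f} +- {error:.1f}, alpha = {np.round(alpha, 3)}")
```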
Results On Higgs-Boson Production. In Figure 1 we show the invariant-mass (M_4q) and production-angle (θ_4q) distributions for the four-quark system (including all 4q configurations of the first two generations) of the reactions e+e− → (νµν̄µ / νeν̄e) + 4q. The crucial difference between the νµν̄µ and νeν̄e channels lies in the Higgs production mechanisms: while the former receives contributions only from ZH production, e+e− → Z + (H → WW) → 6f, the latter additionally involves W fusion, e+e− → νeν̄e + (WW → H → WW) → 6f, which dominates the cross section. Therefore, the cross section of νeν̄e + 4q is an order of magnitude larger than that of νµν̄µ + 4q. In the production-angle distribution, backward production of Higgs bosons is preferred, and the M_H dependence is mainly visible in the overall scale of the distribution, but not in its shape. Table 2: Born cross sections in fb (without ISR) for e+e− → νeν̄e µ−ν̄µ u d̄ for various CM energies and schemes for introducing decay widths. Results On Vector-Boson Scattering. Finally, in Table 2 we consider the high-energy behaviour of a typical channel involving the subprocess WW → WW, using different schemes for introducing finite decay widths. This comparison is particularly important in order to control gauge-invariance-violating effects in the various schemes. In the fixed-width scheme all massive boson propagators receive a constant width Γ_B (B = H, W, Z), while in the running-width scheme Γ_B is multiplied by p^2/M_B^2 × θ(p^2), with p^2 denoting the virtuality of the propagator. Both schemes violate gauge invariance. In the complex-mass scheme [6], gauge invariance is restored by consistently using complex masses for the unstable particles in the Feynman rules, i.e. it makes use of the propagators of the fixed-width scheme together with appropriately chosen complex couplings. The example confirms the expectation from 4f(+γ) studies [6] that the fixed-width scheme, in spite of violating gauge invariance, yields practically the same results as the complex-mass scheme. In contrast, the running-width scheme breaks gauge invariance so badly that deviations from the complex-mass scheme are already visible below 1 TeV. Above 1 TeV these deviations grow rapidly, and the high-energy limit of the prediction is totally wrong. Thus, if finite decay widths are introduced at the cost of gauge invariance, the result is only reliable if it has been compared with a gauge-invariant calculation, such as that provided by the complex-mass scheme. Moreover, our numerical studies (see also Ref. [1]) show that the fixed-width scheme is in fact a good candidate for reliable results also in six-fermion production, although it does not respect gauge invariance. Whether this observation generalizes to all 6f final states (or even further) is, however, not clear.
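The three width schemes just described can be made concrete by looking at the scalar part of a gauge-boson propagator. The sketch below contrasts only the denominators; it cannot capture the complex couplings of the complex-mass scheme or the gauge cancellations in the full amplitude, and the numerical W mass and width are purely illustrative.

```python
# Sketch of the scalar part of a massive gauge-boson propagator in the three
# width schemes discussed above (fixed width, running width, complex mass).
# M_W and Gamma_W below are approximate, for illustration only.
M_W, Gamma_W = 80.4, 2.1   # GeV

def prop_fixed_width(p2):
    return 1.0 / (p2 - M_W**2 + 1j * M_W * Gamma_W)

def prop_running_width(p2):
    width_term = (p2 / M_W**2) * M_W * Gamma_W * (p2 > 0)   # Gamma multiplied by p^2/M^2 * theta(p^2)
    return 1.0 / (p2 - M_W**2 + 1j * width_term)

def prop_complex_mass(p2):
    mu2 = M_W**2 - 1j * M_W * Gamma_W   # complex mass squared: same denominator as fixed width
    return 1.0 / (p2 - mu2)

for sqrt_p2 in (80.4, 200.0, 1000.0):   # on resonance, moderate, high virtuality
    p2 = sqrt_p2**2
    print(f"sqrt(p2) = {sqrt_p2:7.1f} GeV: "
          f"|fixed| = {abs(prop_fixed_width(p2)):.2e}, "
          f"|running| = {abs(prop_running_width(p2)):.2e}, "
          f"|complex-mass| = {abs(prop_complex_mass(p2)):.2e}")
```

As expected from the definitions, the fixed-width and complex-mass denominators coincide, while the running-width denominator departs from them away from the resonance; the dangerous gauge-invariance-violating effects, however, only show up in the full amplitude, not in a single propagator.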
Highest-frequency detection of FRB 121102 at 4-8 GHz using the Breakthrough Listen Digital Backend at the Green Bank Telescope We report the first detections of the repeating fast radio burst source FRB 121102 above 5.2 GHz. Observations were performed using the 4$-$8 GHz receiver of the Robert C. Byrd Green Bank Telescope with the Breakthrough Listen digital backend. We present the spectral, temporal and polarization properties of 21 bursts detected within the first 60 minutes of a total 6-hour observations. These observations comprise the highest burst density yet reported in the literature, with 18 bursts being detected in the first 30 minutes. A few bursts clearly show temporal sub-structures with distinct spectral properties. These sub-structures superimpose to provide enhanced peak signal-to-noise ratio at higher trial dispersion measures. Broad features occur in $\sim 1$ GHz wide subbands that typically differ in peak frequency between bursts within the band. Finer-scale structures ($\sim 10-50$ MHz) within these bursts are consistent with that expected from Galactic diffractive interstellar scintillation. The bursts exhibit nearly 100% linear polarization, and a large average rotation measure of 9.359$\pm$0.012 $\times$ 10$^{\rm 4}$ rad m$^{\rm -2}$ (in the observer's frame). No circular polarization was found for any burst. We measure an approximately constant polarization position angle in the 13 brightest bursts. The peak flux densities of the reported bursts have average values (0.2$\pm$0.1 Jy), similar to those seen at lower frequencies ($<3$ GHz), while the average burst widths (0.64$\pm$0.46 ms) are relatively narrower. Introduction Fast Radio Bursts (FRBs) are a class of radio transients with inferred extragalactic origin due to their anomalously high dispersion measures (DMs) relative to the contribution expected from the Galactic electron distribution (Lorimer et al. 2007). This inference was proven for one source, FRB 121102, when repeated bursts at the DM of 557 pc cm −3 were localized by interferometry to be unambiguously associated with a dwarf galaxy at redshift z = 0.193 (Chatterjee et al. 2017;Marcote et al. 2017;Tendulkar et al. 2017). Bursts from FRB 121102 have isotropic apparent radio energies of 10 40 erg, several orders of magnitude higher than for any other radio transient on millisecond timescales . FRB 121102 is the only FRB known to repeat and for which a position is known to sub-arcsecond precision. This has facilitated extensive follow-up observational campaigns. While the burst cadence is irregular, there appear to be epochs in which the source is more active and multiple bursts are detected. For example, Scholz et al. (2016) reported 6 bursts within a 10-minute interval while several other long observing sessions resulted in non-detections (e.g. Price et al. 2018). These bursts have so far only been detected between 1 − 5.2 GHz Scholz et al. 2016;Michilli et al. 2018;Spitler et al. 2018 in prep). Additionally, reported the non-detection of bursts at 70 MHz, 4.5 GHz, and 15 GHz, during epochs in which bursts were detected at 1.4 and 3 GHz, indicating that some bursts are not likely to be broadband. Propagation effects, such as scintillation and plasma lensing, can significantly alter the observed radio emission from impulsive radio sources (Macquart & Johnston 2015;Cordes et al. 2017). Estimates of the FRB burst rate have taken into account the observed bursts (e.g. Lawrence et al. 
2017); however the observed radio emission could be affected by these propagation effects. Disentangling the intrinsic emission from propagation effects is thus an important goal in FRB science. Detection of bursts at different frequencies and over wider bandwidths is helpful for studying the frequency dependence of potential propagation effects. Measurement of burst polarization can also shed light on emission physics and the source's local environment. Recently, Michilli et al. (2018) reported a very high and variable Faraday rotation measure of ∼10 5 rad m −2 for FRB 121102, suggesting that this source is embedded in an extreme and dynamic magneto-ionic environment. Here we report the detection of 21 bursts from FRB 121102-all of which occurred within an hour-using the 4 − 8 GHz receiver on the Robert C. Byrd Green Bank Telescope (GBT) and the Breakthrough Listen backend. These are the highest-frequency detections of bursts from any FRB to date 1 . The remainder of this paper is structured as follows. In Section 2, we describe our observations. In Section 3.1 we highlight issues with finding a true DM for the bursts, followed by a discussion on the spectro-temporal structures in a subset of bursts in Section 3.2. We confirm and extend the results of Michilli et al. (2018) by performing polarimetry on baseband voltage data in Section 3.3. Our wide instantaneous bandwidth reveals these bursts to be band-limited, with dynamic structure varying on short timescales, which are further discussed in Section 3.4. A brief discussion of our findings is in Section 4 along with a summary in Section 5. Observations Observations of FRB 121102 were conducted with the GBT as part of the Breakthrough Listen (BL) project (Worden et al. 2017). BL is a comprehensive search for extraterrestrial intelligence (SETI) employing both optical and radio telescopes; the BL target list includes nearby stars and nearby galaxies, as well as other anomalous astronomical sources broadly classified as "exotica" (Isaacson et al. 2017). As a component of the latter category, BL is conducting a targeted SETI search towards known FRB positions to investigate any associated artificial signals and/or underlying modulation pattern, hypothesizing that one or more FRBs may be deliberate artificial beacons or other manifestations of technology (e.g., Lingam & Loeb 2017) 2 . Observations using the 4 − 8 GHz (C-band) receiver on the GBT were conducted on 2017 August 26 during a 6-hour BL observing block. The initial hour (with scans numbered from 0 to 10) was used for configuration of various telescope settings and calibration procedures. The calibrator 3C 161 and an off-source position were observed for one minute each, along with a calibration noise diode, and used for flux and polarization calibration using the pac tool in PSRCHIVE. A 5-minute observation of the bright pulsar PSR B0329+54 was also performed as a diagnostic to verify polarization and flux density measurements. The remaining five hours of the session were divided into ten 30-minute scans of FRB 121102, identified with scan numbers ranging from 11 to 20. 2 However, we emphasize here that it is unlikely that the bursts we detected were transmitted from an intelligent civilization. Each dual-polarization passband was Nyquist-sampled using 8-bit digitizers, polyphase channelized to 512 'coarse' frequency channels, requantized to 8 bits, and then distributed to a cluster of compute nodes that recorded these data to disk. 
Directly after the observations, the coarsely channelized voltage data were further channelized (to 366 kHz resolution), integrated (with a sampling time of 350 µs) using a custom GPU-accelerated spectroscopy code, and finally written to SIGPROC 3 filterbank files as total-intensity (Stokes I) dynamic spectra with 4096 spectral channels. These dynamic spectra were searched using the Heimdall package (Barsdell et al. 2012) for dispersed pulses within the DM range of 500 to 700 pc cm−3, using a DM interval of 0.1 pc cm−3. We detected 21 bursts above a threshold signal-to-noise ratio (S/N) of six. A section of raw voltages (1.5 seconds in total) around each detected burst was extracted for further processing and coherently dedispersed to a DM of 565.0 pc cm−3 (see Section 3.1) using the DSPSR package (van Straten & Bailes 2011). The coherently dedispersed PSRFITS data products have temporal and spectral resolutions of 10.1 µs and 183 kHz, respectively 4. We discarded the lower part of the frequency band (4 to 4.5 GHz) in further analysis due to spurious radio frequency interference. Table 1 lists the weighted mean and standard deviation of the PA across the pulse phase for each burst; we were not able to flux calibrate bursts 11L, 11P and 11R due to low S/N, and for these pulses we sub-integrated adjacent time bins and hence could only obtain upper limits on the widths. Analysis. All 21 bursts were detected within the first hour of the FRB 121102 scans, i.e. in scans 11 and 12. We assigned the bursts identifiers 11A through 11R, and 12A through 12C, according to their scan number and order of arrival. DM Optimization. The DM of FRB 121102 has previously been reported to range between 553−569 pc cm−3 (Spitler et al. 2014; Scholz et al. 2016) from lower-frequency observations. For each burst detected here, we measured the S/N as a function of DM, and found that individual bursts show slightly different S/N-maximizing DMs (Table 1). To investigate further, we coherently dedispersed raw voltages from burst 11A across a range of DMs with a DM step of 5 pc cm−3. This investigation led to the discovery of detailed spectro-temporal structures (components; see Figure 1 and Section 3.2). It is likely that the differences between the DMs originate from how these components superimpose for different trial values of DM. Figure 1 shows burst 11A dedispersed and detected at three different DMs. While it is not clear whether these components are intrinsic to the emission mechanism or due to propagation effects, their peculiar alignment provides enhanced S/N at higher DMs compared with previously reported DMs (Spitler et al. 2014; Scholz et al. 2016). Assuming these components to be intrinsic, we optimized the S/N of individual components by maximizing the average absolute rate-of-change of the total flux density across the pulse window, which we call here a structure parameter; it is computed from the differences |S_i+1 − S_i|/∆t averaged over the on-pulse bins, where n is the number of on-pulse bins, S_i is the flux in the i-th bin and ∆t is the time resolution. This structure-parameter-maximizing DM differs significantly from the S/N-maximizing DM (see Figure 1). Similar techniques have also been explored and will be reported in detail in Hessels et al. (2018, in prep) at lower frequencies. Although an analysis of this type was not possible for most of our detected pulses, due to low S/N, fewer components, or both, we assumed the structure-maximizing DM of 565.0 pc cm−3 to be consistent for all our bursts and used that value in coherent dedispersion. We note that this estimate can differ from the 'true' DM by up to our DM resolution in Figure 1 (i.e. ±5 pc cm−3).
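The structure-parameter optimization just described can be sketched as follows. Since the exact normalization of the parameter is not reproduced in the text, the sketch assumes the mean absolute bin-to-bin change of the frequency-summed profile divided by the time resolution, and it uses simple incoherent channel shifts in place of the coherent dedispersion actually employed.

```python
# Illustrative computation of the "structure parameter" described above,
# evaluated for a set of trial DMs. Normalization and the use of incoherent
# channel shifts (rather than coherent dedispersion) are assumptions.
import numpy as np

KDM = 4.148808e3  # dispersion constant in MHz^2 s / (pc cm^-3)

def dedisperse(dynspec, freqs_mhz, dm, dt):
    """Shift each frequency channel by the cold-plasma dispersion delay."""
    delays = KDM * dm * (freqs_mhz ** -2 - freqs_mhz.max() ** -2)   # seconds
    shifts = np.round(delays / dt).astype(int)
    return np.array([np.roll(chan, -s) for chan, s in zip(dynspec, shifts)])

def structure_parameter(dynspec, freqs_mhz, dm, dt, on_pulse):
    """Mean absolute time-derivative of the frequency-summed profile in the on-pulse window."""
    profile = dedisperse(dynspec, freqs_mhz, dm, dt).sum(axis=0)
    window = profile[on_pulse]
    return np.mean(np.abs(np.diff(window))) / dt

# usage sketch (dynspec: 2-D array of shape [n_channels, n_samples]):
# dms = np.arange(550.0, 580.0, 1.0)
# s = [structure_parameter(dynspec, freqs_mhz, dm, dt=10.1e-6, on_pulse=slice(400, 600))
#      for dm in dms]
# best_dm = dms[int(np.argmax(s))]
```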
At the highest DM trial of 600 pc cm−3, the alignment of burst structures produces the highest S/N, while at the trial DM of 565 pc cm−3 these structures appear to be well separated. Spectro-temporal components. We found highly variable temporal and spectral features in many of the bursts. Figure 2 shows dynamic spectra of all bursts after coherent dedispersion at a DM of 565 pc cm−3. Bursts 11A and 12B exhibit distinct components, while bursts such as 11E, 11K and 11O show some indication of unresolved components. For bursts 11D and 11F, such components are not clearly visible but likely give rise to the slanted or curved features seen in the dynamic spectra. Each burst was modeled as a sum of multiple Gaussian components using the PSRCHIVE utility paas. The model was then used to derive average widths and fluences for all bursts (see Table 1). We note that the burst widths found here are relatively narrow compared with burst widths seen at lower frequencies (<3 GHz), as also noted for the Arecibo sample of 4.5-GHz bursts presented by Michilli et al. (2018). As 11A and 12B clearly exhibit distinct components, we measured the properties of their components individually. Faraday Rotation. We determined the Faraday rotation measure (RM) for each burst, considering only PAs from phase bins with linear polarization above 3 sigma. The weighted mean PA for each burst, with weights calculated empirically from the variance in PAs across these phase bins, is listed in Table 1. The uncertainty on the weighted PA was measured from the sum of all weights for each burst. It should be noted that the PA is roughly uniform across the pulse phase; however, there are apparently significant variations in the average PA between the observed bursts at the ∼3σ level. For example, bursts 11A and 12B show slightly lower PAs compared with the rest of the reported bursts. These differences in PA could possibly arise from our uncertainties in the RM measurements, as these two quantities are covariant. No circular polarization was found for any burst, and any undetected circular polarization is less than a few percent. Burst-to-burst spectral variation. Bursts 11A to 11F exhibit a spectral peak around 7 GHz, which appears to shift to lower frequencies in later bursts. In order to assess the origin of the large- and finer-scale frequency structures seen in Figures 2 and 3, we calculated the Galactic scintillation bandwidth across the observed band. We used the NE2001 model (Cordes & Lazio 2002) and estimated a Galactic scattering timescale τs = 20 µs ν−α towards the direction of FRB 121102 (ν is the observing frequency in GHz and α is the scaling parameter); the corresponding predicted diffractive scintillation bandwidth is ∆fDISS ∼ 1/(2π τs). To compare the predicted values of ∆fDISS with the fine-scale frequency structures of the detected bursts, we obtained spectra for each burst by selecting an appropriate on-pulse window. Following the procedure of Cordes et al. (1985) and Cordes (1986), we then computed auto-correlation functions (ACFs) for these spectra (Figure 5). As the spectral extent of a few bursts was significantly broad (spanning over 3 GHz), the ACF computed over the entire spectrum is multi-featured, with multiple widths. To measure the characteristic bandwidth for a given burst, we therefore divided each spectrum into two or three parts, depending upon its spectral extent, and fitted a Gaussian function to the ACF to obtain the half width at half maximum (HWHM) as the characteristic bandwidth of each sub-spectrum.
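The ACF-based bandwidth measurement just described can be outlined as follows. Baseline subtraction, the treatment of the zero-lag noise spike and the division into sub-bands are only hinted at here; the Gaussian model and the HWHM definition are the ones named above.

```python
# Sketch of the characteristic-bandwidth estimate: compute the ACF of a
# burst's on-pulse spectrum and take the HWHM of a Gaussian fitted to it.
import numpy as np
from scipy.optimize import curve_fit

def spectral_acf(spectrum):
    """Normalized ACF of a 1-D spectrum as a function of frequency lag (in channels)."""
    s = spectrum - spectrum.mean()
    acf = np.correlate(s, s, mode="full")[s.size - 1:]
    return acf / acf[0]

def characteristic_bandwidth(spectrum, chan_bw_mhz, max_lag=200):
    """HWHM (in MHz) of a Gaussian fitted to the ACF, excluding the zero-lag bin."""
    acf = spectral_acf(spectrum)[:max_lag]
    lags = np.arange(acf.size) * chan_bw_mhz
    gauss = lambda x, a, w: a * np.exp(-0.5 * (x / w) ** 2)
    (a, w), _ = curve_fit(gauss, lags[1:], acf[1:], p0=[1.0, 10.0 * chan_bw_mhz])
    return abs(w) * np.sqrt(2.0 * np.log(2.0))   # sigma -> HWHM

# usage sketch: spectrum = on-pulse dynamic spectrum summed over time,
# chan_bw_mhz = 0.183 for the 183-kHz data products described above
# dnu = characteristic_bandwidth(spectrum, chan_bw_mhz=0.183)
```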
We found that the ACFs showed different characteristic bandwidths in different sub-spectra across the 4.5 to 8 GHz band. Table 2 lists these characteristic bandwidths for the 15 stronger bursts. The measured characteristic bandwidth follows the expected ∆fDISS across the 4.5 to 8 GHz band (see Figure 6), suggesting that the fine-scale frequency structures seen in each of these bursts are likely due to Galactic interstellar scintillation, with a few exceptions. For example, burst 11H does not appear to show a characteristic bandwidth matching ∆fDISS at higher frequencies; however, this could be a statistical fluctuation given the small number of scintles in the burst. Instantaneous Burst Rate. Our detection of 21 bursts within 60 minutes represents the highest number of bursts detected within such a short interval (i.e. an hour timescale), with 18 bursts occurring in the first 30 minutes. We did not detect any bursts during the following 4 hours of observations, which supports the idea that FRB 121102's bursting behavior is episodic. This could be due to intrinsic changes in the emission conditions or to more favorable 'plasma lensing' conditions during the first hour, potentially enhancing the observed burst energies by an order of magnitude. Our observations highlight the advantages afforded by wider instantaneous bandwidth. If our 4 GHz instantaneous bandwidth were halved, we would only have detected 10 to 12 bursts, mainly because the band-limited spectral structures they exhibit would have fallen out of the band (Figure 2). Similar limitations of narrow-bandwidth observations have been pointed out previously. The spectral-peak variation of each burst and their apparently band-limited character indicate that further bursts may be detected by searching for pulses over subbands, particularly in observations with large fractional bandwidths. The existence of spectral structure may also affect the performance of single-pulse search pipelines, which are generally designed to search for broadband pulses. Average spectral and temporal properties across 1−8 GHz. We compared the average peak flux densities of all our bursts with previously reported observations at various lower frequencies, as shown in Figure 7(a). We found that, statistically, the distribution of peak flux densities across frequency is consistent with being flat across 1−8 GHz. It should be noted that these measurements were obtained with different telescopes at very different epochs, which might affect their absolute scaling. The different sensitivities of the telescopes at various frequencies are also highlighted in Figure 7(a). We also compared the burst widths at various frequencies (Figure 7(b)) and highlight that bursts are relatively narrower at higher frequencies (>4 GHz). The apparently flat spectrum of FRB 121102 stands in contrast to the steep spectral indices observed for most neutron-star emission. Giant pulses from the Crab pulsar, for instance, typically exhibit a very steep spectrum with a power-law index α = −2.6 and a steep power-law distribution of rates at fixed fluence (Meyers et al. 2017). Ordinary pulsars also show very steep spectra, with a mean index α = −1.4 (Bates et al. 2013). An important exception is the radio-to-millimeter spectrum of magnetars: the Galactic Center magnetar SGR J1745−2900 has been observed to have a flat spectrum up to 291 GHz (Torne et al. 2017). The similarity in spectral index between FRB 121102 and radio magnetars suggests a common emission mechanism.
It is likely that further high frequency observations with better sensitivity, enabling the detection of weaker bursts, may provide better constraints on the steepness of the spectral index. If the apparently flat spectral behavior of FRB 121102 is a common property for other FRBs, we suggest that future FRB surveys could be conducted effectively at higher frequencies, utilizing either fly's eye mode or multiple beams to compensate for a smaller field of view. Higher-frequency observations may also have lower terrestrial radio interference and larger instantaneous receiver bandwidth -potentially beneficial for detecting more of these spectrally-limited bursts. Similar suggestions were also made by . Frequency structures The large-scale frequency structures are unlikely to be instrumental in nature and are likely intrinsic to the progenitor or propagation effects imparted in the source's local environment. We note the similarity of these structures to the banded structures seen for the Crab giant pulses (GPs) at similar higher radio frequencies reported by Hankins & Eilek (2007) and Jones (2010). We also found that burst-to-burst spectral properties change on the order of tens of seconds. If bursts from FRB 121102 have a physical origin similar to Crab GPs, then such changes are comparable to similar spectral feature variations between Crab GPs, although these manifest at a shorter timescale (few microseconds). High Faraday Rotation Measure The high RM found here for FRB 121102 is almost 500 times more than RMs reported for any other FRB (e.g. Masui et al. 2015) and somewhat larger than the RM of the Galactic center magnetar SGR J1745−2900 (Eatough et al. 2013). Our measured RM is about 10% lower than the RM from bursts detected at 4 − 5 GHz, from data obtained seven months prior to the observations reported here. This change in the RM is already highlighted in Michilli et al. (2018). This is a significant change in the RM and further justifies regular monitoring to clarify how the RM varies with time. SGR J1745−2900 also shows changes in the RM of similar scale over four years of regular monitoring (Desvignes et al. 2018). The high RM suggests an intense magnetic field, of order 1 mG, in the progenitor's environment. As noted in Michilli et al. (2018), plausible scenarios that could produce this RM include: the source being in vicinity of a intermediate mass or supermassive black-hole, like SGR J1745−2900 (Eatough et al. 2013); inside a powerful pulsar wind nebula, or a supernova remnant. Piro (2016) suggested that an expanding supernova shell could also cause the RM to decrease with time as reported here. However, such changes can also cause the corresponding DM to decrease with time by a similar factor, which was not observed. Other FRBs might have similar high RMs; however, measuring large RMs requires higher frequency resolution at lower frequencies (e.g. ≤ 3 GHz) to avoid intra-channel depolarization. For example, to search for RM up to 10 5 rad m −2 , the required channel resolution is of order tens of kHz at 1 GHz. This again highlights the utility of high-frequency observations of FRBs as intra-channel depolarization is inversely proportional to observational frequency to the third power. Conclusion We have reported, for the first time, that FRB 121102 is active above 5.2 GHz. The 21 bursts detected over 60 minutes represent the highest instantaneous burst-rate yet observed. 
We have confirmed that bursts from FRB 121102 are highly linearly polarized, that the source shows a large RM of 93589 ± 118 rad m−2, and that the diverse and variable spectral and temporal properties seen at lower frequencies are also exhibited above 5 GHz. As at lower frequencies (Scholz et al. 2016), we also found that the source exhibits large-scale frequency structures that could be intrinsic or imparted by its local environment. These structures vary between bursts and can bias the estimation of burst properties such as dispersion measure and pulse width, as they superimpose to provide enhanced S/N at higher trial DMs. We found fine-scale structure consistent with Galactic interstellar scintillation. Future observations of this source will help to answer some of the outstanding questions, including whether the source exhibits any periodicity, at what frequency the apparently flat spectral index transitions, and how the event rate varies as a function of frequency.
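As a closing illustration of the channel-resolution requirement quoted in the discussion above (tens of kHz at 1 GHz for RM ~ 10^5 rad m−2), the short calculation below propagates the Faraday rotation angle φ = RM λ² across a finite channel; the tolerated rotation of 1 rad per channel is an assumption made only for illustration.

```python
# Back-of-the-envelope check of the intra-channel depolarization statement.
# The rotation across a channel of width dnu at frequency nu is
# dphi ~ 2 * RM * c^2 * dnu / nu^3, which falls off as 1/nu^3.
C = 299_792_458.0  # speed of light, m/s

def max_channel_width(rm, nu_hz, max_rot_rad=1.0):
    """Largest channel width (Hz) keeping intra-channel Faraday rotation below max_rot_rad."""
    return max_rot_rad * nu_hz**3 / (2.0 * rm * C**2)

for nu_ghz in (1.0, 4.5, 8.0):
    dnu = max_channel_width(rm=1.0e5, nu_hz=nu_ghz * 1e9)
    print(f"nu = {nu_ghz:>4.1f} GHz: channel width < {dnu/1e3:.0f} kHz")
```

At 1 GHz the allowed channel width is indeed of order tens of kHz, while at 4.5−8 GHz it grows to the MHz range, so the 183-kHz data products used here are comfortably safe against intra-channel depolarization.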
Perspectives and practices of healthcare providers and caregivers on healthcare-associated infections in the neonatal intensive care units of two hospitals in Ghana Abstract Healthcare-associated infections (HAIs) remain a serious threat to patient safety worldwide, particularly in low- and middle-income countries. Reducing the burden of HAIs through the observation and enforcement of infection prevention and control (IPC) practices remains a priority. Despite growing emphasis on HAI prevention in low- and middle-income countries, limited evidence is available to improve IPC practices to reduce HAIs. This study examined the perspectives of healthcare providers (HPs) and mothers in the neonatal intensive care unit on HAIs and determined the major barriers and facilitators to promoting standard IPC practices. This study draws on data from an ethnographic study using 38 in-depth interviews, four focus group discussions and participant observation conducted among HPs and mothers in neonatal intensive care units of a secondary- and tertiary-level hospital in Ghana. The qualitative data were analysed using a grounded theory approach, and NVivo 12 to facilitate coding. HPs and mothers demonstrated a modest level of understanding about HAIs. Personal, interpersonal, community, organizational and policy-level factors interacted in complex ways to influence IPC practices. HPs sometimes considered HAI concerns to be secondary in the face of a heavy clinical workload, a lack of structured systems and the quest to protect professional authority. The positive attitudes of some HPs, and peer interactions promoted standard IPC practices. Mothers expressed interest in participation in IPC activities. It however requires systematic efforts by HPs to partner with mothers in IPC. Training and capacity building of HPs, provision of adequate resources and improving communication between HPs and mothers were recommended to improve standard IPC practices. We conclude that there is a need for institutionalizing IPC policies and strengthening strategies that acknowledge and value mothers’ roles as caregivers and partners in IPC. To ensure this, HPs should be better equipped to prioritize communication and collaboration with mothers to reduce the burden of HAIs. Introduction Healthcare-associated infections (HAIs) are the most frequent adverse event in healthcare delivery worldwide, and constitute a serious and preventable threat to patient safety Rothe et al., 2013). They lead to increased use of antibiotics, increased healthcare costs, longer hospital stays and higher morbidity and mortality rates (Gupta et al., 2011;Umscheid et al., 2011;Schmier et al., 2016). Increased length of stay associated with HAIs varies between 5 and 30 days in low-and middle-income countries (LMICs) (WHO, 2011). Costs associated with HAIs vary from US$865-US$13 000 as individual costs for various HAIs in LMICs (Pada et al., 2011;Ha and Ha, 2012) to overall costs of e7 billion annually in Europe (WHO, 2011). A study conducted in a tertiary hospital in Ghana reported that the HAI treatment cost an additional US$1985 per patient, and those patients with HAIs paid twice as much as those without HAIs. The estimated annual cost of an HAI to the hospital was US$700 000 and it cost the broader society almost US$900 000 (Fenny et al., 2020). The risk of developing HAIs in health facilities in LMICs is higher than in high-income countries (Rothe et al., 2013). A study by Labi et al. 
(2019) reported that the overall prevalence of HAIs among hospitalized patients in Ghana was 8.2% (range: 3.5-14.4%). Patients in the intensive care unit tend to have a higher prevalence of HAIs than those admitted to other units of the hospital (WHO, 2011). The neonatal intensive care unit (NICU) can pose a higher threat to patient safety due to its unique complexities, and factors such as understaffing, limited resources and ineffective organization of service delivery, which undermines the quality of care in LMICs Samra et al., 2011;Enweronu-Laryea et al., 2018). The burden of HAIs can be reduced through appropriate infection prevention and control (IPC) interventions and adherence to hand hygiene (HH) practices (Umscheid et al., 2011;Rothe et al., 2013). Compliance with HH guidelines among healthcare providers (HPs) is 38.7% (WHO, 2009), and ranges from 9.2% to 57% among doctors and 9.6% to 54% among nurses in Ghana (Yawson and Hesse, 2013). The Ministry of Health (MOH), Ghana, has taken steps to improve IPC by introducing the National IPC Guidelines (MOH, 2015), manuals and protocols to improve the quality of care (Escribano-Ferrer et al., 2017). The IPC guidelines provide information to HPs on standard precautions including HH, use of personal protective equipment (PPE), environmental cleanliness and waste management. These guidelines aim to promote excellence in clientcentred care and maximize protection against infections for HPs and clients (MOH, 2015). Strategies aimed at reducing HAIs are however centred mostly on HPs, with little consideration for the role of patients' relatives (caregivers). There are no clear policy guidelines on the roles of caregivers, who contribute to the basic care of hospitalized patients (Agbenyefia, 2017). Role ambiguity between HPs and caregivers could lead to frustrations in the hospital environment, where caregivers are unclear about patients' needs and do not know whether they are contributing positively to care (Ricciardelli, 2012;Agbenyefia, 2017;Alshahrani et al., 2018). The participation of caregivers in IPC promotion has been recommended as a promising strategy and there has been a call for increased caregiver engagement, which is critical to patient safety (Agbenyefia, 2017;Alshahrani et al., 2018). Meaningful caregiver involvement in the joint endeavour of preventing infections is complex, and potential barriers reported by caregivers include feelings of inadequacy, fear of the consequences, having different beliefs from HPs and a desire to avoid interrupting busy HPs (Jackson et al., 2003;Latour et al., 2010;Rainey et al., 2015;Sutton et al., 2015). Caregivers may feel unprepared for IPC roles due to lack of training and little guidance from HPs (Reinhard et al., 2008;Sapountzi-Krepia et al., 2008). It is therefore critical for HPs to support the role of mothers as their babies' primary caregivers (Nyqvist and Engvall, 2009). Healthcare systems are increasingly involving families as partners, and many NICUs now promote family-centred care (Reinhard et al., 2008;Sapountzi-Krepia et al., 2008;Skene et al., 2016;Ottosen et al., 2019). Some studies in high-income countries have explored familycentred care and the perspectives and roles of caregivers (Rainey et al., 2015;Ottosen et al., 2019;Sutton et al., 2019), but such studies are sparse in sub-Saharan Africa. 
Studies in Bangladesh, Indonesia and South Korea have shown that recognition of caregiver involvement in IPC strategies was not included in the national guidelines (Horng et al., 2016;Park et al., 2020). Some studies in Ghana have explored mothers' experiences in the maternity and NICU wards (Yevoo et al., 2018;Dugle et al., 2020;Lomotey et al., 2020) and HPs' perceptions of the quality of neonatal care (Elikplim Pomevor and Adomah-Afari, 2016). These studies, however, were not focused on the interactions between HPs and mothers in NICUs and did not explore HAIs. Other studies addressing HAIs in NICUs in Ghana were mostly focused on incidence, epidemiology and mortality (Annan and Asiedu, 2018;Labi et al., 2018, 2020). In the face of resource and financial constraints in LMICs, infection control is a cost-effective intervention that will decrease morbidity and mortality by reducing HAIs in NICUs (Srivastava and Shetty, 2007), where mothers are important stakeholders.
KEY MESSAGES
• Reducing the burden of healthcare-associated infections is a responsibility for both healthcare providers and caregivers. Thus, there is a need to improve communication and interaction between healthcare providers and carers towards achieving this goal.
• Infection prevention and control in health facilities can be more effectively observed if health facilities provide the needed resources to enable health providers to undertake such measures.
• Regular monitoring and supervision in the wards can contribute to proper infection prevention and control, which will help to reduce healthcare-associated infections, especially in the neonatal intensive care units of the various hospitals.
• Pre- and post-training of health workers in health training institutions and hospitals on infection prevention and control can contribute to health providers inculcating the positive habit of strict observation of infection control measures at work. This can help to reduce healthcare-associated infections in the wards, including in neonatal intensive care units.
Furthermore, the recent COVID-19 pandemic has prompted concerns about adherence to IPC guidelines (Houghton et al., 2020), hence a need to examine factors contributing to IPC compliance, to help identify strategies that will support caregivers and HPs in observing IPC practices at such a critical period in global healthcare. Local data are critical for developing and implementing evidence-based, context-appropriate guidelines and protocols for IPC. To provide data to guide local policy on IPC practices, this hospital ethnographic study examined the factors that influence caregiving in the NICU, how lay mothers negotiate their roles with health professionals within the hospital context and how these interactions influence the practice of IPC and the reduction of HAIs. Study setting Ghana currently has 162 district-level hospitals, 10 regional-level hospitals and five teaching hospitals (tertiary-level) in the public health sector. District hospitals form the first referral point from health centres and polyclinics, regional hospitals form the secondary-level referral point and teaching hospitals provide complex tertiary-level care (Nsiah-Asare, 2017). This study was conducted in the NICU of a tertiary-level hospital (TH) and a secondary-level hospital (SH) in Southern Ghana. The study was conducted within the context of a larger hospital-based project investigating Healthcare-Associated Infections in Ghana (HAI-Ghana project). 
In this article, we present the findings of a cross-sectional study that focuses on perspectives and practices on HAI among HPs and mothers. The study sites, TH and SH, provide similar levels of neonatal care (including intravenous infusions, parenteral medicines and neonatal resuscitation). TH and SH cater to the medical needs of babies in populations of 5 million and 3 million, respectively. TH was selected to provide insight in the context of a larger facility, while SH provided insight from the perspective of a secondary-level health facility. The NICU of TH has a nominal capacity of 60 cots, warming platforms and incubators, and that of SH has a nominal capacity of 30. Most of the babies admitted to both NICUs are preterm and critically ill babies. The average HAI prevalence at the hospitals was 10.2% (Labi et al., 2019). TH employs 12 doctors, 40 nurses and other technical staff in the NICU, while the NICU in SH has around five doctors and 20 nurses. Work is organized around a similar shift pattern in both hospitals, with three shifts running (morning, afternoon and night). Conceptualizing the study In this study, we consider the hospital an organizational cultural environment, with HPs, who have the biomedical and technical knowledge, and mothers from the wider Ghanaian cultural environment (Assimeng, 1999). Thus, differences and similarities in context, norms and rules are bound to conform and clash. HPs with medical expertize execute their roles while interacting with mothers in often stressful situations and this can create a huge cultural and communication chasm (Ruben, 2016). Medical professionals have jealously guarded their exclusive rights to medical knowledge from time immemorial, and have therefore used their professional status to exclude others who do not belong to the group (Freidson, 1988). In the clinical encounter, the major concern of HPs is to ensure clinical care. The caregiver who has a highly personal and emotional involvement in the child's illness may hold different perspectives due to the lack of technical expertize (Jones and Jones, 1975). So, whereas for HPs, infections and morbidity may just be passing clinical events, these have consequences that resonate beyond the medical realm for mothers, who are critical stakeholders. This oppositional view sets the stage for the 'clash of cultures'. The lack of trust between HPs and mothers mainly stems from these opposing views. Coe (1970) explains how the communication process, which is core to the medical encounter, is asymmetric between HPs and caregivers. This inherent asymmetry is reinforced by the medical profession's socialization practices which ground a relationship of domination, reflected in the management of information exchange and utilization of medical jargon (Filc, 2006). In the absence of clear lines of communication, anxious caregivers produce their own scripts to assuage their anxiety over the health status of their patients (Senah, 2002). Caregivers often accept these patriarchal attitudes of HPs without questioning, as a reflection of the broader society, where questioning authority is perceived as insubordination (Zaman, 2004). Communication between HPs and caregivers can lead to an improvement in several areas of health and well-being, while the lack of communication or poor communication could result in poor compliance with guidelines (Jones and Jones, 1975). 
Under a broader paradigm of sociocultural theory, Bourdieu (1991) emphasizes that when individuals interact, they do so in a specific social context, 'the field', which shapes their practices, perceptions and attitudes. Social fields in medicine include the field of health provider-client interactions (Emmerich, 2013), where positions of power are determined by medical knowledge, professional prestige, etc. Power is present in all interpersonal relationships, is relevant in healthcare and 'comes into being' when it is put into action through 'strategies' such as expressions through language and communication (Foucault, 1982). Study design We used an ethnographic approach involving qualitative interviews with mothers and HPs, participant observation, informal meetings and discussions. An ethnographic approach allows us to obtain rich details of social phenomena, and requires long periods in the field to actively study, experience and represent the lives of participants in their natural setting (van der Geest and Sarkodie, 1998;Emerson et al., 2011). In-depth interviews and focus group discussions were used to collect data. Selection of study participants Women 15 years and older, whose babies had been hospitalized in the NICU for a minimum of 48 h during the study period, were eligible to participate in the study. Purposive sampling was used to recruit mothers to share their perspectives and experiences of care in the NICU. A total of 22 mothers participated in the in-depth interviews, and 24 mothers participated in four focus group discussions with between four and eight mothers per group (TH: 15 interviews and two focus groups [n = 12]; SH: seven interviews and two focus groups [n = 12]). Sixteen HPs participated in the in-depth interviews. A cross-section of frontline HPs, health managers and IPC coordinators were purposively selected to achieve diversity in terms of staff cadre and level of experience. Literature was reviewed to develop the question guides consisting of semi-structured questions and probes to address the objectives of the study. The question guides were slightly revised for clarity, comprehensiveness and relevance, following a pilot test with three mothers and three HPs in a similar ward setting who were not included in the study. A health facility checklist was developed based on the review of the WHO ward infrastructure survey and existing literature (WHO, 2009;Yawson and Hesse, 2013) to support data collection. The checklist captured the available HH facilities on the wards. Both the qualitative interview guides and checklists were developed in English. Data collection The ethnographic study was conducted in both hospitals between January and June 2018. Interviews lasted 45 min to one hour and the focus group discussions lasted one hour to an hour and a half. Interviews were conducted face-to-face in the hospital in a calm location as per the convenience of participants, and participants were interviewed alone. Demographic information was collected, and participants spoke about their experiences of IPC practices and interactions in the ward. The first author, GSM, conducted interviews with HPs in the English medium. GSM is a medical doctor and a PhD student experienced in qualitative research. GSM understands issues related to caregiving and used her student-researcher identity to observe care and interactions in the ward. 
Two trained female research assistants who were graduate students in health-related fields and with qualitative research experience assisted with data collection. The researchers were not familiar to the participants prior to the study. All researchers were fluent in the local Akan language. Some mothers' interviews were conducted in English and others in Akan by GSM and the research assistants. GSM conducted the FGDs, while a research assistant took observational field notes. Participants were able to freely speak English or Akan, so they could express themselves comfortably. Probes were incorporated, but participants did not need much prompting to share their experiences and ideas. Participants were offered refreshments after the interviews. The interviews were audio-recorded, transcribed verbatim and those in Akan were translated into English by a research assistant. The translation was verified by a second research assistant. Data saturation was achieved. Field notes were documented, reconstructed and expanded following each ward visit, and data were incorporated into further ethnographic analyses (Emerson et al., 2011). More than 100 h of participant observation and informal interactions with mothers and HPs were also conducted, to observe activities and interactions relating to IPC and HH. Personal HH practices observed included HH upon arrival and before leaving work at the end of the day or duty shift. Quantitative data on HH compliance (WHO, 2009) were captured and are reported elsewhere. Data processing and analysis Transcripts and notes were kept confidential in password-protected files. Transcripts were read by two team members (GSM, BPT) to develop a coding structure reflecting issues arising from the data. Coding of the transcripts was done using NVivo, and commonly occurring themes and subthemes were identified. We read and reread the transcripts for the overall experiences being presented and coded to develop open, focused and theoretical codes to describe dimensions of participants' experiences and interactions. This study takes a constructivist epistemological approach, where knowledge is dependent on perception and experience, and drew on inductive thematic coding, memo writing and reflexivity (Charmaz, 2001). We used a grounded theory approach for data analysis. This enables in-depth exploration of multiple subjective experiences, provides explicit, sequential guidelines for conducting qualitative research and helps the researcher to streamline and integrate the data collection and data analysis process. Grounded theory allows the results to be 'grounded' to the data collected (Glaser et al., 1968;Chun Tie et al., 2019). The preliminary findings of the study were presented to HPs and managers in both hospitals in seminars to return the results for verification and validation of the study. Comments and suggestions received during the seminar were incorporated into the final report. The presentation of our findings was guided by the Social Ecological Model (SEM). SEM has been recognized and accepted for use broadly in the efforts of enhancing health and well-being, and is widely used to better understand the health behaviours of individuals. SEM acknowledges that an individual's behaviour is shaped through multilevel factors. 
In general, five hierarchical levels of SEM have been recommended and used in social science, psychology and health science sectors: individual, interpersonal, community, organizational and policy levels (Newes-Adeyi et al., 2000;Lounsbury and Mitchell, 2009;Rawal et al., 2020). Themes denoting factors influencing the practice of IPC to reduce HAIs were categorized under the various SEM levels, and other arising themes were also integrated ( Figure 1). SEM offers a holistic understanding of the factors influencing IPC practices of HPs and mothers. Strategies were employed to ensure the trustworthiness of the findings, such as checking the data for accuracy and completeness and using team meetings to establish coding consensus (Shenton, 2004;Houghton et al., 2013). Reflexivity was employed, such as GSM taking note of her own preconceived ideas as a medical doctor which could influence observations in the study settings. Researchers examined personal biases and the effect of the researcher on the research process and interpretation of findings. We used the COREQ approach (Tong et al., 2007) to report on the characteristics of the research team, study design, data collection, data analysis and other strategies (Supplementary File). Ethical clearance was obtained for this study (GHS-ERC 07/03/ 2017). Written informed consent was obtained from interview participants who were informed about the study objectives, and all ethical procedures were followed. For confidentiality, direct quotes from participants are identified by codes (Doctor, D; Nurse, N; Manager, MG; Mother, MT; PA, Physician Assistant). Results HPs who participated in the study included six males and 10 females, aged 21-60 years. There were eight HPs from each hospital, including two managers, two IPC coordinators, four doctors, one physician assistant and seven nurses, with 50% of HPs having worked five years or less in the NICU. Mothers were between the ages of 15 and 49 years, and >50% of them had secondary school education or higher. Table 1 presents the characteristics of mothers in this study. HPs described a clear association between observing IPC measures and reducing HAIs. HPs mentioned HH, waste segregation and disinfection as some measures to reduce HAIs. The national IPC policy document was not available in the wards, although there were clinical protocols and HH messages on posters in both NICUs. During an observation session, nurses were seen debating which detergent to use for cleaning incubators (Observation#21TH), and this was later clarified by a manager who said the information was in the ward protocol. The manager stated: 'People should familiarize themselves with and use the protocols and posters'. The manager also complained that knowledge of HAIs among staff was low. Mothers were aware of the possibility of acquiring infections in the NICU and described infections with terminologies in Twi such as 'mmuawa' (germs) and 'yareE/yadeE' (diseases). Mothers were observed washing their hands before entering the NICU, which they explained was to avoid contaminating their breastmilk or transferring germs to their babies. A mother said: 'Babies are quite delicate; their immune system is not built to term'. Mothers said they heard about 'infections' through talks at antenatal clinics, and from the television and radio. 
A mother whose baby had previously acquired an HAI mentioned that she was motivated to wash her hands to avoid expenses associated with HAIs: With my first child I didn't listen to what the nurses said, I was stubborn. So, I had to keep visiting the hospital several times. This time, I wash my hands and do exactly as I am told so that no disease will affect my child. Yes, because right now, there is no money (MT9). Attitudes of HPs and mothers towards IPC HPs did not strictly follow protocols and were seen using their discretion during some procedures (Figure 1, Individual-level factors). A nurse who was observed administering medications with only one gloved hand explained that she was 'saving gloves', as gloves were scarce on the ward (Observation#24TH). Some senior nurses felt that student nurses who used gloves while changing soiled cot sheets were wasting the limited supplies. Some HPs also used normal examination gloves and sterile gloves interchangeably, although there were guidelines for the use of each type of glove. HPs who were observed performing HH used soap and water stored in veronica buckets (improvised buckets with a tap) (Veronica Bucket, 2020) when running water was not available in the ward. In both NICUs, <50% of observed HPs performed the recommended hand washing steps correctly, according to WHO guidelines. However, HPs in TH were more compliant with HH and use of the alcohol hand rubs than HPs in SH (Table 2). HPs referred with concern to instances of babies acquiring HAIs from other babies especially when two or three unrelated newborns share an incubator for warmth or a cot for phototherapy. Beliefs of HPs and mothers about IPC Many HPs focused on protecting themselves from HAIs and mentioned that they did not want to take any infections from the hospital to their homes. HPs reported performing HH because of fear of becoming infected themselves. One HP said: 'I think what motivates most people, including myself, is not cross-infection. It is always about personal protection'. Some HPs perceived that mothers were not interested in IPC. However, mothers did observe and scrutinize IPC practices among HPs, and formed ideas and clues about IPC from what they saw. Mothers mentioned that they observed HPs wearing gloves to protect themselves and the babies from infections. Mothers described their own habits of handwashing before eating, after using the bathroom and before touching their babies. Skills of HPs and mothers on IPC There were more specialist paediatricians and neonatal nurses in TH, and there was frequent training on neonatal care for the HPs. TH had a full-time IPC nurse who was focused on IPC in the wards, while SH had an IPC nurse who had to fulfil other clinical roles. IPC training had been conducted for HPs who had worked in the NICUs for more than a year, but not for HPs and students who joined the wards subsequently or for short rotations. Doctors were generally responsible for invasive procedures like insertion of intravenous cannulas, but nurses also performed these procedures in SH. Seven per cent of mothers had no formal education at all, while 41% of mothers had attended school up to the primary level (Table 1), but most were not able to understand or read English well. This presented a challenge with the use of written media as a channel of IPC communication, as HH posters and instructions were mostly in English. 
Mothers expressed the desire to receive IPC information via simplified messages on social media, using visual illustrations and/or in the local languages (Figure 1, Individual-level factors). Some mothers reported that they were not instructed about HH in the NICU. One mother said: 'Nobody has taught us how to wash our hands, so we just wash them anyhow'. Interpersonal-level factors influencing IPC practices The influence of peers on IPC practices HPs mentioned that they are often inspired or reminded by colleagues to perform HH (Figure 1, Interpersonal-level factors). One HP mentioned that 'some people will take reminders in good faith and change their behaviour; some will not'. Our observations in TH showed that HPs had a practice of engagement in shared breaks and meals in a staff room in the NICU (Observation#23TH). This provided an environment where HPs interacted and shared information on a range of topics, including IPC. Some mothers said that they acquired knowledge of HH by watching other mothers. Mothers held conversations in the waiting areas and exchanged knowledge by sharing experiences. The influence of communication on IPC practices HPs argued that due to the high turnover of mothers and babies, it was not possible to provide education on IPC to every mother, although most mothers received some orientation. With reference to Figure 1, communication on the interpersonal level is very crucial to the reduction of HAIs. However, from our observations, communication between HPs and mothers was limited. Mothers craved carerelated information, and expected explanations regarding decisions taken concerning their babies. A mother was told that 'we tell you only what is important for you to know' when she approached a doctor for information on her baby. Another mother complained that the doctor was 'more interested in laboratory tests which were to be done for the baby'. Although laboratory tests are an essential component of clinical care, the mother wanted a social discussion while the HP approached it from a biomedical perspective. HPs' patriarchal style of communicating with mothers and distrust of mothers' ability to comprehend IPC Interactions between HPs and mothers were more of a one-way dialogue when they occurred, with HPs instructing the mothers on what to do, or mothers reporting back to HPs on specific issues. HPs mentioned that some of the mothers were not competent enough to process technical information about IPC. Some HPs mentioned that some mothers do not know how to use sinks and toilets, because they were brought up in villages where such facilities are mostly unavailable, and that these mothers have also not been trained on hygiene practices. One HP stated: 'some of them are villagers. . .it is the way they are trained'. Some HPs perceived mothers as potential sources of infections. A nurse stated: 'they sit on the floor on the corridor downstairs, then come here with the dirty cloth'. Some HPs mentioned that mothers or their relatives were capable of sneaking in herbs to be applied to the babies' umbilicus, contrary to the hospital protocol of using chlorhexidine, which was not known to most mothers. During FGDs, mothers discussed the use of spirit, gel or local herbs on their babies' umbilical cords as part of traditional newborn protective care. A mother complained: I came here last night and up till now they haven't given me spirit to clean the baby's navel, so this can also cause infection (MT7). 
Mothers also discussed that in caring for their babies, they sometimes received conflicting messages because 'nurses will say one thing, and grandmothers will say something else'. Community-level factors affecting IPC practices Ward rounds were regularly conducted as part of the routines in both NICUs, where HPs would give a detailed account of each baby to a supervising consultant who would then lead an academic discussion. Ward rounds represented an avenue for interaction where shared values on IPC came into play (Figure 1, Community-level factors). We observed that HPs were more likely to perform HH during ward rounds when the leading consultant paid attention to HH (Observation#22TH, #39SH). Mothers had no role in ward rounds, and HPs preferred that they would be absent during this period to avoid interfering with questions and unsolicited comments. Mothers were termed 'difficult' if they did not comply. Mothers sometimes needed more time to complete care activities. One mother said: When leaving NICU you are tired and frustrated because you did not finish feeding your baby, and you are wondering if the nurses will continue feeding them for you or not (MT22). There are restrictions on visits by family and friends in both NICUs. Staff explained that this was an IPC measure. However, the reason for this restriction was not understood by most mothers. These mothers were opposed to these restrictions, as the local culture encourages family members to celebrate the arrival of a new baby by seeing the newborn. One complained: My sister came all the way from Cape Coast (Central region) but she hasn't seen the baby yet. . . nobody else has seen the baby, which is worrying (MT14). A mother complained that when she tried to get a nurse's attention to see to her baby, she was told casually that 'as for pre-terms their condition can change at any time'. A manager later explained how some cultural beliefs reflect in HPs' attitudes towards the babies: Culturally people think the babies still belong to the spirit world before day seven. . . the way they start treating them changes afterward; less effort is needed to convince staff to go the extra mile after day seven (MG1). Organizational-level factors influencing IPC practices Human and material resource deficits that affect commitment to IPC Managers emphasized the need to have the necessary human and material resource allocations for optimal IPC practice (Figure 1, Organizational-level factors). HPs expressed the need for more HH stations and supplies in the NICU. Some HPs mentioned that due to the scarcity of PPE such as aprons, masks and boots, some PPE intended for single use are used multiple times. The NICUs lack an adequate supply of gloves, and HPs sometimes had to borrow gloves from other cubicles and wards or improvise by using a single glove at a time rather than a pair. On other occasions, HPs wore double pairs of gloves, explaining that they could not trust the quality of some types of gloves. Staff purchased and brought their own scrubs (PPE) to work and were responsible for cleaning them. HPs reported on the struggle to make IPC a priority because of clinical demands, including several new admissions daily and timeconsuming clinical responsibilities. A doctor in SH said: 'A doctor's work is clinical care, and if the clinical workload is heavy, sometimes it's only natural that you'll overlook other things'. The health facility checklist was used to assess ward infrastructure. 
The NICU in TH had three cubicles, with two sinks which were not always functional. SH had a much smaller NICU space, with two cubicles with a sink in each. Neither of the NICUs had a steady supply of soap, water or towels at the sinks, and the sinks designated for handwashing by mothers generally had the least supplies. HPs sometimes had to move out of the cubicles to the nursing station to access a sink with running water, and this was reported as a barrier to HH compliance (Table 3). One HP complained: We use kitchen liquid soap to wash our hands, and it is so diluted. Disinfectant for cleaning equipment is also so diluted that it's meaningless. So when we clean the incubators, we're only redistributing the germs, because we don't have the right disinfectants (D2). Policy-level factors affecting IPC practices Although a national IPC policy exists, it is not applied optimally in the NICUs. Only a few HPs said they had seen it previously or attempted to read it (Figure 1, Policy-level factors). HPs wanted soft copies or summaries of essential practical portions of the bulky guideline in the form of posters or smaller protocols that can easily be assessed and utilized on the wards. There was no structured HAI surveillance in the NICU wards, so HPs were not aware of or able to keep track of HAI rates. Although both hospitals had microbiology laboratories that could identify infectious agents to treat babies who develop HAIs, mothers had to bear the costs of this, which some could not afford. Training and supervision to improve IPC practices Managers mentioned that there is a need for supervision to improve adherence to IPC guidelines in the wards. Managers also mentioned that senior nurses should enforce policies by being on the frontlines to work with the HPs and supervise them. As one manager said, 'If you don't war with them, you can't tell them how to fight' (MG2). Managers suggested the need for regular IPC training, which should not only be about the technical aspects of HAIs but should also cover the basics such as communication with clients. One said: 'Medical students should be taught to respect patients and relatives, even before they graduate'. Managers in both hospitals expressed the need to have a team of dedicated staff to oversee IPC activities and make IPC teams fully operational. Staff also undergo yearly appraisals; however, IPC is not a critical part of this appraisal. A manager mentioned that it would be useful to include IPC compliance as part of the criteria for staff appraisals and promotions (Figure 1, Policy-level factors). Partnership to improve IPC in the NICU Mothers or family members are required to convey specimens to the laboratory, retrieve laboratory results and arrange the purchase of medicines for their babies. A partnership is crucial in this context because if mothers failed to fulfil this expectation, treatment could be delayed. Sometimes, mothers delayed in providing funds for laboratory tests due to a lack of understanding of the need for these tests in diagnosis (Observation#41SH). A mother explained that she would rather prioritize the use of her funds to buy medicine to cure her baby's illness than for tests that provide no cure. Mothers perceived partnership as HPs being present and attentive in interacting with them and their babies. Mothers and nurses performed common care activities for babies such as bathing, changing nappies and feeding. 
Mothers of preterm or low-birth-weight babies practised 'skin-to-skin' or kangaroo mother care in assigned rooms, which offered the opportunity to interact with HPs, especially in TH where an assigned nurse was present during the day shift. Nurses who were perceived to be friendly and showed positive attitudes towards mothers became the preferred ones whom most mothers would approach for interactions. Mothers received instructions from nurses on care practices, but this was often circumstantial and unstructured. Mothers referred to specific behaviours such as frowning, which made them feel unwelcome to interact with some HPs. Mothers expressed the need to be treated with respect irrespective of their background, as some felt that they were probably older than some HPs, which is especially a factor in Ghanaian society where seniority by age is given cognizance. A mother said: 'We are all human beings. . .so you should treat us as sisters. . .or friends'. When researchers enquired whether mothers would like to have an IPC-related role such as reminding staff to wash their hands, the responses suggested that mothers felt reluctant to participate in such roles. Mothers indicated that they did not want to be perceived as interrupting the provision of care. A mother said: 'When you try to do that, they will tell you that you are trying to tell them how to do their job'. Another mother who happened to be a doctor also said: As soon as you wear a patient's coat, you become a patient. . . so sometimes, you wouldn't want to offend the one taking care of your baby, because you feel that this person is taking care of my baby and what if she leaves my baby? (MT2). However, some mothers said they recognized the need to assume more responsibility to protect their babies. A mother who had lost one of her twins following an HAI believed it was the fault of the HPs and blamed the NICU practice of nursing twins together in the same incubator. She had concerns about the safety of the second twin and stated: If the one on duty is not doing well with my baby, I will complain; even if the person gets annoyed, I don't care! (MT6). One outspoken mother mentioned that some of the nurses found her irritating for being rather assertive: I mean once I find out the thing is not properly handled, I am not going to tolerate that, so they felt that I was irritating (MT8). HPs perceived that their authority would be challenged or mothers would lose confidence in them if mothers were empowered to take up the role of reminding them about HH. Some direct quotes reflecting HPs' perceptions are captured in Table 4. HPs mentioned that it would require resources to promote IPC among mothers, e.g. the provision of aprons for mothers' use, to minimize infection risks to the babies. HPs added that they needed mothers to be compliant in caring for their babies. One HP said: When we are fortunate to get 'correct' mothers who know how to feed and handle their babies properly, the burden goes down. . .they pay for their labs and they are here to show love to their babies (N8). elsewhere (Ocran and Tagoe, 2014;Ogoina et al., 2015;Akagbo et al., 2017;Wahdan et al., 2019). There were key barriers and facilitators to IPC practices to reduce HAIs. The barriers include contextual factors such as resource constraints, HPs' distrust of mothers, the negative attitudes of some HPs and HPs' fear of having their authority undermined by mothers. 
Facilitators included the positive and approachable attitudes of some HPs, influence from colleagues to perform IPC and mothers learning from other mothers in the NICU. This study further used the socio-ecological model to present how personal-, interpersonal-, community-, institutional- and policy-level factors interact to influence IPC practices. While HPs reported that they gave mothers some information and orientation, they also admitted that heavy workloads and resource constraints hindered their ability to communicate IPC practices to mothers. Yet, they blamed mothers for some practices that they perceived as not hygienic enough. Mothers, on the other hand, reported that they were willing to learn hygienic practices since their interest was to protect their babies. Similarly, Aboungo et al. (2020) found that mothers were often blamed for negative health outcomes, and that blame is often directed at the actions or inactions of mothers or other factors that concern mothers. Blaming mothers may be a way of diverting responsibilities from HPs and other stakeholders. Our findings revealed a cultural clash between HPs and mothers, as HPs sometimes adopted a patriarchal approach, where mothers were given instructions to comply with but were not given the opportunity to ask questions or to comment on hygiene practices. Also, HPs expressed some level of distrust of mothers' ability to adapt to the hospital culture of hygiene practices and the use of hospital facilities, as there was the belief that some mothers came from villages that lacked such facilities and so were not familiar with their use. Similarly, Andersen's (2004) study in a regional hospital in Ghana revealed differential treatment towards clients who were considered poor, ignorant and uneducated, summed up in the term 'villagers'. A barrier that hindered HPs' communication on IPC with mothers was the fear that sharing professional and technical knowledge could compromise their authority in the NICU. HPs, therefore, minimized the exchange of information with mothers. Abraham and Shanley (1992) described HPs as 'authority figures' who engage in this behaviour to ensure that their authority and expertize remain unchallenged. McCann et al. (2008) also reported that fear of loss of power and control on the part of HPs contributed to limitations being placed on parental interactions and involvement in care. Other studies have shown that HPs derive immense power from their knowledge, skills and access to clients, in the social field of the ward (Lipsky, 1980;Mintzberg, 1983). Our findings show a gap in readiness on the part of most HPs to have empowering dialogues with mothers. Contrary to our findings, Aston et al. (2015) found that HPs challenged a health discourse that situated HPs as authority figures, and chose to develop relationships in ways that would build mothers' confidence and improve partnership. Bodenheimer et al. (2002) differentiated strength-based discourses, focused on mothers identifying and making healthcare-related decisions, from the historical medical discourse that positions HPs as experts who assess and judge mothers' actions and ultimately tell them what to do. This suggests a need for transformation of HPs' perspectives and approach to dialogues with mothers in NICUs in Ghana, as the implementation of IPC practices to reduce HAIs is influenced by building and nurturing trust to improve collaboration between HPs and mothers. 
Table 4. Direct quotes by HPs on perceptions of caregiver involvement in IPC in the NICU
• When you give them [mothers] that chance, the way they will talk to you might provoke you...but we as nurses should know what we are about (N1).
• It will bring problems because some of the mothers might think they know better than us... They will never have confidence in you. They will think you don't know your job... It brings down your morale (N2).
• Some of them...they are rude. It depends on how you will talk to me. If you are trying to teach me my work, then I will also give you what you also don't know (N3).
• In our setting, it will be difficult for a patient to tell a nurse to wash their hands...probably a health worker will feel offended when a patient says something like that, unless maybe it is said in a playful way (D1).
• It will be beneficial to them in the long run because when they go home it will also help them (N6).
• I think it will also help you the health worker when they draw your attention to these things to keep you in check (N7).
• If it is true you haven't washed your hands then you should take it cool. I don't think there is the need for me to get upset... I don't know about my other colleagues (PA1).
• It's all about continuous education. At least when we are always reminded about a precaution...we also have to involve the relatives in the education (D2).
Mothers, irrespective of their background, felt powerless when their babies were admitted into the NICU because this was a different cultural context. Mothers felt compelled to submit to the higher authority of HPs in the interest of receiving quality care for their babies. This reflects the wider Ghanaian society, where traditional and institutional authority is obeyed and not questioned (Assimeng, 1999). Such a practice was detrimental to mothers' ability to communicate and seek clarification from HPs, which led to safety concerns among mothers. Also, Zaman (2004) noted that the values and norms of society are expressed in the hospital wards, where people socialize hierarchically, and caregivers, who are mainly from poor economic backgrounds, are at the bottom of the hierarchy. The organizational culture in the NICU, which focused on biomedicine and thus dealt with technicalities and appropriate ways of holding babies or refraining from holding them as a form of IPC practice, was at variance with the typical Ghanaian culture that encourages parents and family members to hold and show affection to newborn babies. Sometimes HPs distrusted mothers and perceived them as a potential risk of infection to the babies, and this hindered positive and clear interaction between HPs and mothers. As the cultural ethos of each group presents a set of expectations and interpretations often at variance with one another, this adds new twists to the clash of cultures (Senah, 2002). The limited communication further engendered mistrust among mothers of the good intention of HPs to protect their babies. Some mothers also felt disrespected by HPs due to their language, actions or inactions. However, respect did not appear to be the focus when HPs were dealing with issues relating to quality of care. Coe (1970) described the lack of effective communication between HPs and clients as a great source of discontent in hospitals. Mothers were comfortable interacting and engaging with HPs who exhibited positive attitudes. 
This suggests that mothers are interested in gaining more knowledge and collaborating with HPs in the interest of their babies' health. However, they were also influenced by cultural beliefs and experienced a dualistic sense of responsibility to satisfy both cultural and hospital expectations when caring for their babies. This is similar to findings from previous studies in Zambia and Ghana (Moyer et al., 2014;Buser et al., 2020). To address the varying cultural perspectives that engender distrust between HPs and mothers, further engagement and negotiations between HPs and mothers would be beneficial. Also, grandparents and other family and community stakeholders who influence mothers' decisions in newborn care should be engaged during educational sessions at the community level. One barrier to hygiene practice were resource constraints, which led to improvising such as gloving one hand instead of two when there were glove shortages. This also affected the HPs' ability to offer resources to mothers to encourage them to observe IPC practices. HPs had more access to HH resources than mothers, similar to findings in other LMICs (Horng et al., 2016). Other studies have shown that when HPs are offered limited resources in the provision of health care, they exercise discretionary power by improvising and modifying policies, thereby influencing how policies are enacted (Walker and Gilson, 2004;Aberese-Ako et al., 2014). Similarly, a study in Ghana found that it was common for HPs to improvise or modify protocols when basic supplies, logistics and infrastructure needed for adherence were inappropriate or not available (Yevoo et al., 2020). For HPs to be able to respond to the needs of clients, there is the need for their requirements for essential resources, supplies and infrastructure to be addressed (Enweronu-Laryea et al., 2015;Aberese-Ako, 2016), as these concerns compete with their focus on HAI concerns. This study illustrates the importance of discussing a partnership between HPs and mothers and negotiating the role of mothers. In doing this, attitudes, socio-cultural norms and the power distance wherein HPs and mothers operate in a super-and subordinate relationship (Senah, 2002) should not be overlooked. HPs should be aware of how their positions of expertize within the NICU affect interactions with mothers. A deeper understanding of personal, social and institutional aspects of IPC and HAIs will provide opportunities to reflect upon and change practices to support and involve mothers. Partnerships foster improved adherence, and ultimately improve healthcare outcomes (Martin et al., 2005). Participation, engagement, negotiation and sometimes compromise enhance opportunities for interactions in which mothers, as key stakeholders, take responsibility for their part in promoting IPC to reduce HAIs. The findings of this study showed that IPC practices have not been implemented effectively in the NICUs. These findings suggest that communication and partnership that encourage caregiver involvement in IPC should be developed through interactions. Similar to our findings, other studies have pointed out the need for the medical and nursing curricula to emphasize interpersonal communication in healthcare, and to incorporate trainings that allow HPs to learn, practise and reflect on their provision of respectful care and communication (Afulani et al., 2019;Lim et al., 2019). 
These trainings should be incorporated into the pre-service and in-service training of HPs to improve mothers' experiences in NICUs in Ghana. Our findings outline the current challenges associated with the effective practice of IPC, which should guide policymakers to strengthen measures to improve the implementation of existing IPC policies in the NICU. Findings from this study provide insights to inform strategies to raise the priority of IPC and limit harm from HAIs in Ghana. Limitations Interviews with mothers were conducted within the hospital setting, and mothers may be unwilling to be critical of HPs who are caring for their hospitalized babies. We took steps to build trusting relationships with the mothers and to assure them of confidentiality. We also spent long periods building rapport with HPs to minimize the Hawthorne effect. There is the potential for losing meaning and nuance in the interviews which were conducted in Akan and translated into English. Transcripts were doubled checked by a second research assistant to ensure that original meanings were retained as much as possible. This study explores attitudes and general beliefs, and presented only a few examples of actual cases of HAIs. An important next step in research would be to link attitudes, beliefs and practices to the occurrence of HAIs. Conclusion and recommendations HPs and mothers demonstrated a modest level of understanding about HAIs and IPC practices. Some key barriers and facilitators to knowledge and observance of IPC practices to reduce HAIs were identified. The barriers included non-adherence to protocols, negative and patronizing attitudes of some HPs towards mothers, fear of loss of authority, resource constraints in the hospital systems and poor supervision and implementation of IPC policies. Facilitators included positive and approachable attitudes exhibited by some HPs within the NICU, influence from colleagues to perform IPC, mothers receiving information on IPC from the antenatal clinics and peer support from other mothers. There is the potential to form a partnership between HPs and mothers in promoting IPC practices in Ghanaian health facilities. This is critical considering that Ghana is a low-resource country with limited budgets for health care, so improving IPC could reduce infections and thus save facilities and families from extended hospital stays, with consequent costs of treatment. However, this requires partnerships where mothers are seen as part of the solution, for if mothers are perceived as distrustful and seeking to undermine the authority of HPs, this common objective will not be achieved. Effective communication between HPs and mothers should be a key area of focus in promoting partnerships to reduce the burden of HAIs, particularly among neonates in Ghana. This requires clearly defined policies and strategies that define, acknowledge and value mothers' roles as caregivers, and encourage partnership between HPs and mothers. There is a need for maintaining standard precautions and practices more effectively and efficiently. HPs need to make deliberate efforts to go beyond personal and professional barriers, to acknowledge the role of mothers in patient safety and to empower mothers and caregivers in promoting IPC. There is a need for hospitals to improve the supervision and monitoring of HPs, as some of the gaps in IPC compliance were noted to be due to limited supervision and follow-up. 
In addition, there is the need for hospitals to devote more funds to providing equipment and hygiene-related medical supplies, which will help to improve hygiene conditions. Also, IPC guidelines should be made available to all staff, and training on HAIs and IPC should be provided regularly to incoming staff and students. Structured training at health facilities must aim both to provide technical knowledge and to develop HPs' interpersonal communication skills, to help bridge the gap between HPs and caregivers. It is also important that medical and nursing curricula emphasize interpersonal communication and patient-centred care. Supplementary data Supplementary data are available at Health Policy and Planning online.
2020-08-27T09:14:21.795Z
2020-11-01T00:00:00.000
{ "year": 2020, "sha1": "9988ffb8dda579302e69e89767e0a279f67974ca", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/heapol/article-pdf/35/Supplement_1/i38/34164331/czaa102.pdf", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "69229395d93a773500c1bbd07f570468e0a196a8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
56175634
pes2o/s2orc
v3-fos-license
Dilemma and Solution of Traditional Feature Extraction Methods Based on Inertial Sensors Correctly identifying human activities is of great practical significance in modern life. Almost all feature extraction methods are based directly on acceleration and angular velocity. However, we found that some activities show no difference in acceleration and angular velocity. Therefore, we believe that for these activities it is difficult for any feature extraction method based on acceleration and angular velocity to achieve good results. After analyzing the differences between these otherwise indistinguishable movements, we propose several new features to improve recognition accuracy. We compare the traditional features and our custom features. In addition, we examined whether the time-domain features and frequency-domain features based on acceleration and angular velocity behave differently. The results show that (1) our custom features significantly improve the precision for the activities that have no difference in acceleration and angular velocity; and (2) the combination of time-domain features and frequency-domain features does not significantly improve the recognition of different activities. Introduction The classification of human motion based on inertial sensors has been proven to have many important applications in the medical and health fields. In previous studies, time-domain and frequency-domain features are widely used for feature calculation. There are many studies that use the wavelet transform to extract features to classify human activities. However, the research of Preece et al. [1] shows that time- and frequency-domain features often outperform wavelet features, indicating that wavelet features may not be the most effective choice for human motion classification. Some time-domain features are derived to classify human activities, such as the mean, median, variance, skewness, kurtosis [2], and interquartile range [3]. In order to extract frequency-domain features, the sensor data window is first transformed to the frequency domain using the discrete Fourier transform [4]. Then, we can extract some features from the frequency domain to distinguish different activities, such as power spectral density (PSD) [5], peak frequency [5,6], entropy [7], DC component [7], median frequency [8], spectral energy [9], and frequency-domain entropy [10]. Of course, there are other methods that process data from accelerometers and gyroscopes. But all in all, to the best of our knowledge, these features are extracted directly from acceleration and angular velocity, which inevitably leads to some common drawbacks. We studied 12 kinds of activities and found it easy to confuse elevator up and elevator down. These two kinds of activities do not show obvious differences in acceleration and angular velocity, so time- and frequency-domain features based on acceleration and angular velocity cannot achieve good classification results. Therefore, for those motions that have no significant difference in angular velocity and acceleration, no matter how the features are extracted from the acceleration and angular velocity, it is difficult to achieve good results. In addition, we began to wonder whether there is any essential difference between time-domain features and frequency-domain features based on acceleration and angular velocity. To resolve this question, we separately tested the effects of time-domain features and frequency-domain features. 
Then, we tested the combination of the two kinds of features and found that the combination of time-domain and frequency-domain features performed only slightly better than time-domain features alone. From this analysis of the experimental results, we believe that, for the human activity classification problem, time-domain features and frequency-domain features are two views of the same underlying regularities, and there is no essential difference between them. Our contributions in this paper are two-fold: (1) to the best of our knowledge, this is the first time that features of motion classification based on velocity and displacement have been proposed, which solve some problems that cannot be solved by traditional time- and frequency-domain features; (2) as far as we know, for the first time, we have studied the difference between the time-domain features and frequency-domain features. Methods Indeed, the time-domain and frequency-domain features based on acceleration and angular velocity have achieved some success. However, for activities without an obvious difference in acceleration and angular velocity, such as elevator up and elevator down, the traditional approach of extracting features from acceleration and angular velocity can hardly work. In order to solve this problem, we carefully analyze the two activities of elevator up and elevator down, summarize the differences between them, and propose some new features for distinguishing such activities. After this analysis, we sum up the following rules: (i) When the elevator goes up, the speed is upward; when the elevator goes down, the speed is downward. (ii) When the elevator just starts to move or stops moving, its speed is small and its acceleration is large. (iii) When the elevator just starts to rise or fall, the direction of the speed is the same as the direction of the acceleration; when the elevator stops rising or falling, the direction of the speed is opposite to the direction of the acceleration. Based on the above observations, we propose four features to distinguish these activities on each axis of the accelerometer. First, according to the first rule, we introduce three features, namely, the starting speed, the ending speed, and the displacement. Second, according to the second and third rules, we introduce the fourth feature. When some activities have just started or are about to stop, their speed is small and difficult to distinguish. Therefore, we extracted another feature to enhance the difference between the two activities. When the velocity direction is the same as the acceleration direction, we use v + a as the feature; otherwise, we use v − a as the feature. In order to describe the movement of the human body in different directions as much as possible, we introduce the following twelve new features (the four features along each of the three axes) to enhance the difference between different activities. These features are summarized in Table 1. Suppose the time window we choose is T and the sampling frequency is n; then the total number of samples is Tn. In the experiment, the time window we selected was two seconds. The displacement, time, and acceleration corresponding to the ith sampling interval are x(i), t(i), and a(i). The speed corresponding to the ith sampling point is v(i). The data segment of one time window is shown in Figure 1. The gravity acceleration is g. Due to the high sampling frequency, the time interval between consecutive sample points is short. At the same time, in order to simplify the calculation, we assume uniform linear motion between consecutive sampling points. Now, we derive the speed and displacement formulas. 
We assume these sampling points are equally spaced in time, so the length of each sampling interval is
t(i) − t(i − 1) = T/(Tn) = 1/n. (1)
First, we derive the formula for the end speed of each axis. According to the kinematics formula, and simplifying by combining it with Equation (1), we get
v(Tn) = v(0) + (1/n) Σ_{i=1}^{Tn} (a(i) − g). (2)
Next, we introduce the determination of the starting speed. If this window is the first window, we default to a starting speed of zero. Otherwise, the starting speed is the end speed of the previous window, marked as v_{−1}(Tn):
v(0) = v_{−1}(Tn). (3)
As for the displacement of each axis, since we assume uniform linear motion between consecutive sampling points, the displacement of the ith interval is
x(i) = v(i)/n, (4)
and the total displacement over the window is
X = Σ_{i=1}^{Tn} x(i) = (1/n) Σ_{i=1}^{Tn} v(i). (5)
In the experiment, in order to calculate the last three features we defined in Table 1, that is, Velocity Plus Acceleration Along X, Velocity Plus Acceleration Along Y, and Velocity Plus Acceleration Along Z, the speed is taken as the end velocity of the two-second time window, and the acceleration is taken as the difference between the average acceleration of the two-second time window and the gravitational acceleration on each axis. In this way, we can calculate the last three features; the expression is shown in (6), in which Velocity Plus Acceleration Along * stands for any of the last three features:
Velocity Plus Acceleration Along * = v_*(Tn) + (ā_* − g_*) if v_*(Tn) and (ā_* − g_*) have the same direction, and v_*(Tn) − (ā_* − g_*) otherwise. (6)
Finally, we introduce the calculation method of the gravity acceleration used in the experiment. Since the tester has just put on the device for data acquisition, he or she is generally at a standstill and the starting speed is zero, which is used in our experiments. Therefore, for the sake of convenience, in the experiment, we take the initial acceleration of each axis to be the component of the gravity acceleration and assume that the component of the gravity acceleration along each axis remains unchanged. We take the average of the first 10 sampling points of each axis as the component of the gravity acceleration along that axis, recorded as g. Datasets. In order to illustrate the validity of our custom features, we selected the USC_HAD dataset of the University of Southern California as the verification dataset [11]. They use an off-the-shelf sensing platform called MotionNode to capture human activity signals and build their dataset. MotionNode is a 6-DOF inertial measurement unit (IMU) specifically designed for human motion sensing applications, which integrates a 3-axis accelerometer and a 3-axis gyroscope. They selected 14 subjects (7 male; 7 female) to participate in the data collection. The sampling frequency is 100 Hz. The twelve kinds of activities collected are Walking Forward, Walking Left, Walking Right, Walking Upstairs, Walking Downstairs, Running Forward, Jumping Up, Sitting, Standing, Sleeping, Elevator Up, and Elevator Down.
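To make the derivation above concrete, the following is a minimal Python/NumPy sketch of how the twelve custom features could be computed from one accelerometer window. The function names (estimate_gravity, custom_features) and the array layout are our own illustrative choices rather than the authors' implementation; the per-axis gravity estimate, the zero initial velocity, and the velocity carry-over between windows follow the assumptions stated above.

```python
import numpy as np

FS = 100          # sampling frequency n (Hz), as in USC-HAD
DT = 1.0 / FS     # sampling interval, cf. Eq. (1)

def estimate_gravity(acc, n_init=10):
    # Per-axis gravity component g: average of the first few samples,
    # taken while the wearer is assumed to be standing still.
    return acc[:n_init].mean(axis=0)                # shape (3,)

def custom_features(acc_window, g, v_start):
    # acc_window: raw acceleration samples a(i), shape (Tn, 3)
    # g         : per-axis gravity estimate, shape (3,)
    # v_start   : ending velocity of the previous window (np.zeros(3) for the first window)
    lin_acc = acc_window - g                        # a(i) - g on each axis
    v = v_start + np.cumsum(lin_acc, axis=0) * DT   # v(i), cf. Eq. (2)
    v_end = v[-1]                                   # ending velocity per axis
    disp = v.sum(axis=0) * DT                       # total displacement, cf. Eqs. (4)-(5)
    mean_lin_acc = lin_acc.mean(axis=0)             # average (a - g) over the window
    same_dir = np.sign(v_end) == np.sign(mean_lin_acc)
    vpa = np.where(same_dir, v_end + mean_lin_acc, v_end - mean_lin_acc)   # cf. Eq. (6)
    features = np.concatenate([v_start, v_end, disp, vpa])   # 4 features x 3 axes = 12
    return features, v_end
```

In use, the windows of one recording would be processed in order, passing np.zeros(3) as v_start for the first window and the returned v_end for every subsequent window, which mirrors the carry-over rule in Eq. (3). Results.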
Results. In order to verify the validity of our custom features, we extracted some common time- and frequency-domain features. In the time domain, we chose the mean, median, variance, skewness, kurtosis, and interquartile range. In the frequency domain, we chose the peak frequency, median frequency, power spectral density, DC component, spectral energy, and information entropy. Then, we added our custom features to these time- and frequency-domain features and compared the results. In the experiments, we adopted two commonly used models, SVM and random forest (RF). For SVM, the kernel function we use is a polynomial kernel; for RF, the number of decision trees is 50. First, in order to check whether elevator up and elevator down can be distinguished, we tested the precision and recall of our custom features on both models. Precision and recall are often used as performance measures for classifiers in classification problems. Table 2 shows the precision and recall of the models' identification of elevator up when custom features are added to the time- and frequency-domain features and for the combination of time- and frequency-domain features without custom features. Table 3 shows the precision and recall of the models' identification of elevator down with and without custom features. The left column below each model is the precision, and the right column is the recall. From Tables 2 and 3, we can see that, after adding the custom features, the models significantly improve the recognition rate of elevator up and elevator down. We also use the ROC (receiver operating characteristic) curve and the corresponding AUC (area under the ROC curve) values to check whether elevator up and elevator down are distinguished. For SVM, we conducted such a test. The ROC curves for the two-class results for elevator up and elevator down are shown in Figure 2. From the figure, we can see that in the SVM, after adding our custom features, the ROC curves of elevator up and elevator down completely cover the ROC curve obtained with the original features. The AUC value for the ROC curve without custom features is 0.8572, while the AUC value with custom features is 0.9729. The ROC curves show that our custom features achieve very good results in distinguishing elevator up from elevator down. In order to further explain the significance of the custom features we have extracted and to verify the difference between time-domain and frequency-domain features, we compared our custom features with the time-domain and frequency-domain features. We performed comparative experiments on four combinations of features with SVM, repeating each experiment five times. The detailed experimental results are summarized in Table 4. The accuracy in the table is the total classification accuracy over the 12 activities. For convenience, we denote the time-domain features as 1, the frequency-domain features as 2, and the custom features as 3.
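As a rough illustration of this evaluation protocol (polynomial-kernel SVM, 50-tree random forest, precision/recall and ROC AUC for the elevator-up class), the sketch below uses scikit-learn; the feature matrices, labels, and train/test split are placeholders rather than the authors' actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def evaluate(X, y, positive_label="Elevator Up"):
    """Train both classifiers on one feature set and report metrics for the positive class."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
    y_te = np.asarray(y_te)
    results = {}
    for name, clf in [("SVM (poly kernel)", SVC(kernel="poly", probability=True)),
                      ("Random forest (50 trees)", RandomForestClassifier(n_estimators=50, random_state=0))]:
        clf.fit(X_tr, y_tr)
        pred = clf.predict(X_te)
        score = clf.predict_proba(X_te)[:, list(clf.classes_).index(positive_label)]
        results[name] = {
            "precision": precision_score(y_te, pred, labels=[positive_label], average="macro", zero_division=0),
            "recall": recall_score(y_te, pred, labels=[positive_label], average="macro", zero_division=0),
            "auc": roc_auc_score(y_te == positive_label, score),
        }
    return results

# X_base: time- and frequency-domain features; X_custom: the twelve custom features (both hypothetical).
# evaluate(X_base, y); evaluate(np.hstack([X_base, X_custom]), y)
```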
From the experimental results, we can see that frequency-domain features and time-domain features on their own can each achieve good results. However, when frequency-domain features are combined with time-domain features, no significant improvement is obtained, indicating that there is no essential difference between features extracted from the frequency domain and features extracted from the time domain; the superposition of the two does not achieve better results. The time-domain features perform better when combined with our custom features than when combined with frequency-domain features. Based on this comprehensive analysis, we believe that our custom features are a supplement to traditional time-domain features rather than redundant features. The traditional time-domain and frequency-domain features are all based on acceleration and angular velocity and there is no essential difference between them, so superposing the two does not bring a significant improvement. Our custom features mine the regularities of speed and displacement, which is very different from the traditional mining of acceleration and angular velocity. So, when these features are added to the time-domain features, we obtain a clear improvement. Especially for motions that show no obvious difference in angular velocity and acceleration but a clear difference in speed and displacement, we can achieve good results with these custom features. For example, there is no obvious difference in acceleration and angular velocity between the smooth upward movement of the elevator and its smooth descent, but there is a clear difference in speed and displacement. So, when we introduce our custom features, we can clearly increase the recognition rate of these two kinds of motion. To calculate the features we define, we must know the initial state of motion, especially the initial speed. In our experiment, we assumed that the initial speed is zero. In the database we use, most of the data are collected after the tester reaches a steady state in the various motion postures, which does not satisfy our assumption. If data could be recorded from the moment the tester puts on the inertial sensors, so as to meet our assumption, we believe better results could be achieved. For other types of motion, such as running, walking, and standing, there is also a difference in speed, which will bring a higher recognition rate.
Discussion In this article, we begin with elevator up and elevator down, which are indistinguishable with existing feature extraction methods, and analyze the differences and rules between these two types of movement. Then, we propose four features on each axis of the accelerometer that significantly improve the distinction between the two movements, elevator up and elevator down. In the experiments, we found that the combination of frequency-domain features and time-domain features does not significantly improve the distinction between activities. The two kinds of features are two different views of acceleration and angular velocity, and there is no essential difference between them. From the experimental results, the time-domain features are better than the frequency-domain features and reflect the differences between activities more fully. Our custom features are not another response to acceleration; instead, they can be used to distinguish movements that differ in the speed of movement. In particular, they are of great significance for distinguishing movements that do not show a significant difference in acceleration and angular velocity but do show a significant difference in speed. In the experiments, for the sake of convenience, we assumed that the component of the gravitational acceleration remains unchanged, which obviously does not match the actual situation. In future work, one may consider introducing some basic theory of motion analysis in order to calculate the components of the gravitational acceleration accurately and thus compute the features we introduce more accurately. We believe that when features based on velocity and displacement are introduced, a substantial advance can be made on the existing human motion classification problem, and to some extent we can escape the dilemma that some motions cannot be accurately identified from acceleration and angular velocity features alone. Table 1: Feature names and descriptions. End velocity along X/Y/Z — end speed along the x-, y-, and z-axis; Starting velocity along X/Y/Z — starting speed along the x-, y-, and z-axis; Displacement along X/Y/Z — displacement along the x-, y-, and z-axis; Velocity plus acceleration along X/Y/Z — v + a if the product of v along that axis and a is positive, v − a otherwise. Figure 1: Data section of one time window. Figure 2: ROC curve of elevator up and elevator down over SVM. Table 2: Precision and recall of the identification of elevator up when adding custom features and with no custom features. Table 3: Precision and recall of the identification of elevator down when adding custom features and with no custom features. Table 4: Classification accuracy of 12 activities on four combinations of frequency-domain features, time-domain features, and custom features over SVM.
2018-12-28T14:02:50.675Z
2018-11-22T00:00:00.000
{ "year": 2018, "sha1": "7780af1d563554f0f0ffed0190f7eeac5bdb0b5f", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/misy/2018/2659142.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "7780af1d563554f0f0ffed0190f7eeac5bdb0b5f", "s2fieldsofstudy": [ "Engineering", "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
241320200
pes2o/s2orc
v3-fos-license
Efficacy and safety of dezocine in postoperative pain relief: Systematic review and Meta-analysis Objective: To systematically evaluate the efficacy measures and adverse reactions of dezocine in postoperative pain relief and to provide statistical support for guiding its clinical application. Methods: We extracted and analyzed data on patients from PubMed, Embase, The Cochrane Library and the China National Knowledge Infrastructure (CNKI) from randomized controlled trials of postoperative pain relief after various types of surgery. We used meta-analysis to study several measures of the efficacy and safety of dezocine, including the visual analogue score (VAS), Ramsay sedation score, mean arterial pressure (MAP), heart rate (HR), pulse oxygen saturation (SpO2) and the incidence of adverse events (AEs). The data were calculated and analyzed using Review Manager 5.3. Results: After exclusion of literature that did not meet the inclusion criteria, our analysis included 14 randomized controlled trials. The mean difference (MD) of VAS at 1 h/6 h/24 h between the dezocine group and the placebo group was -1.37 (95% CI -2.07 to -0.67, P=0.0001), -0.52 (95% CI -1.04 to 0.01, P=0.05) and -0.10 (95% CI -0.39 to 0.20, P=0.52), respectively. The MD of the Ramsay sedation score at 2 h/8 h was 1.21 (95% CI 0.67 to 1.75, P<0.0001) and -0.17 (95% CI -0.59 to 0.26, P=0.44). The MD of MAP at T0/T1/T2 was -0.28 (95% CI -2.46 to 1.89, P=0.80), -2.66 (95% CI -5.07 to -0.25, P=0.03) and -4.53 (95% CI -6.17 to -2.89, P<0.00001). The MD of HR at T0/T1/T2 was -2.26 (95% CI -4.32 to -0.21, P=0.03), -3.58 (95% CI -5.21 to -1.96, P<0.0001) and -3.75 (95% CI -11.55 to 4.04, P=0.35). The MD of SpO2 at T0/T1 was -0.90 (95% CI -1.77 to -0.03, P=0.04) and 0.36 (95% CI 0.02 to 0.71, P=0.04). The odds ratio (OR) of AEs was 0.53 (95% CI 0.39 to 0.71, P<0.0001). Conclusion: Dezocine shows appropriate anesthetic efficacy with fewer adverse effects and can reduce postoperative pain effectively. Background Opioids are the main drugs for postoperative analgesia, but patients on long-term opioid therapy find it difficult to obtain optimal postoperative analgesia because of opioid tolerance, and opioids cause some unavoidable adverse reactions. [1] Opioid treatment of chronic pain is recognized as an acceptable and effective method, whereas opioids carry the potential for abuse and addiction. [2] As an opioid analgesic for parenteral administration, dezocine has minimal side effects and low dependence, and it has both agonistic and antagonistic effects on opioid receptors; chemically, it is a bridged aminotetrahydronaphthalene. [3,4] Dezocine is a potent opioid receptor agonist-antagonist, which mainly produces its analgesic effect by agonizing κ receptors. It has a strong analgesic effect, rapid absorption and distribution in the human body, a large apparent volume of distribution, a long half-life and slow elimination. Its analgesic intensity, onset time and duration of action are comparable to those of morphine. Previous studies have shown that, owing to its morphine-like action, dezocine can produce analgesic effects roughly equivalent to or stronger than morphine both in some experimental animal models and in humans. [5] Previous studies have also suggested that, because of the partial agonist activity of dezocine at the μ opioid receptor, it causes fewer adverse reactions in clinical treatment. [6] As a newer anesthetic analgesic, dezocine is widely used in clinical practice, and a large number of randomized controlled trials have been conducted to study its efficacy and safety.
However, there is currently no statistical evidence-based medical evidence to systematically validate the efficacy and safety of dezocine. Therefore, this study conducted a meta-analysis of the efficacy and safety of dezocine to control postoperative pain, and provided statistical guidance for the clinical application of dezocine. Search strategy We conducted a literature search to identify relevant available articles published from the databases of PubMed, Embase, The Cochrane Library and China National Knowledge Infrastructure (CNKI) up to April 29, 2019. Search terms included "dezocine", "postoperative pain", "anesthesia" ,"randomized controlled trials". A combination of subject terms and free words were used for the search. The two authors independently screened for eligibility for inclusion in the study and the third reviewer resolved the disagreements from an objective perspective. Selection criteria Inclusion articles satisfied the following criteria: (1) Randomized controlled clinical trials to evaluate Dezocine as a treatment for anesthesia in patients; (2) with or without Dezocine compared with other anesthetics/placebo; (3) Data can be used for at least 1 or all of the following results: visual analogue Score (VAS), Ramsay sedation score, mean arterial pressure (MAP), heart rate (HR), Pulse Oxygen Saturation (SpO2) and the incidence of adverse events (AEs). Exclusion criteria for the article include reviews, case reports, letters, editorials, and studies lacking the necessary data, and are not related to our research topics or are not randomized controlled clinical trials. Data extraction Three reviewers independently screened the report and extracted data from the included studies. The following data were collected: first author, year of publication, study design, number of patients, age(mean), sex(male/female) , treatment regimen, intervention, drug dose, anesthesia and analgesia, and endpoint measures (Table 1). Outcome measures The outcome measures were VAS, Ramsay sedation score, MAP, HR, SpO2 and AEs. The mean arterial blood pressure, average heart rate, and pulse oxygen saturation at the different time points: end of surgery (T0), before extubation (T1), during extubation (T2) were compared. This meta-analysis follows the guidelines provided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Report (PRISMA statement). [7] Data analysis The data analyses were performed using Computer Program Review Manager 5.3 (Copenhagen: The Nordic Cochrane Centre, The Cochrane Collaboration). It is estimated that there is a certain degree of heterogeneity between the included RCTs, so the random effects model is used for analysis. Safety is assessed by summarizing the OR of the AEs. The efficacy of dezocine was assessed by comparing its VAS/Ramsay score/MAP/HR/SpO2 with placebo and the corresponding 95% confidence interval (95% CI). Heterogeneity was assessed using Chi2 test and I2 statistics and published bias was assessed using funnel plot. Search results and characteristics of patients in the included studies The PRISMA diagram of the study selection process and the reasons for exclusion is shown in Figure 1. 6 Our search retrieved 1597 publications. 694 studies were excluded as duplicates and 889 were excluded because they did not meet the eligibility criteria in the initial selection. 
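The pooled effects reported in this review can be reproduced from per-study summary statistics with the DerSimonian-Laird random-effects model that Review Manager implements for continuous outcomes; the sketch below (Python, with made-up numbers rather than the included trials' data) shows that calculation for a mean difference, including the I² heterogeneity statistic.

```python
import math

def random_effects_md(studies):
    """DerSimonian-Laird random-effects pooled mean difference.

    studies: list of (mean_treat, sd_treat, n_treat, mean_ctrl, sd_ctrl, n_ctrl)
    Returns (pooled MD, 95% CI low, 95% CI high, I^2 in %).
    """
    y = [m1 - m2 for m1, _, _, m2, _, _ in studies]                    # per-study mean differences
    v = [s1**2 / n1 + s2**2 / n2 for _, s1, n1, _, s2, n2 in studies]  # per-study variances
    w = [1.0 / vi for vi in v]                                         # fixed-effect weights
    y_fe = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w, y))             # Cochran's Q
    df = len(studies) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                      # between-study variance
    w_re = [1.0 / (vi + tau2) for vi in v]                             # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, i2

# Illustrative (made-up) VAS-at-1h data: (mean, SD, n) for dezocine vs. placebo in three studies.
fake = [(2.1, 1.0, 40, 3.6, 1.2, 40), (2.4, 0.9, 30, 3.5, 1.1, 30), (2.0, 1.3, 50, 3.4, 1.4, 50)]
print(random_effects_md(fake))
```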
After reviewing the abstracts and full articles, 14 distinct trials were included in our analysis after removing the articles lacking necessary data and those utilizing insufficient follow-up periods. All studies were published in the last 8 years. 1820 patients were included in this analysis. 2h/8h Postoperative Ramsay sedation score A total of 5 studies compared the Ramsay sedation scores of dezocine and saline 2h/8h after surgery. The results showed that the postoperative sedation effect of the 2h dezocine group was better than that of the saline group [MD=1.21, 95% CI (0.67 1.75), P < 0.0001] (Fig. 3A), the 8h Ramsay sedation score was not statistically significant (Fig. 3B) The results showed that the MAP of the T1/T2 phase of the dezocine group was lower than that of the (Fig. 4B C), the T0 MAP group was not statistically significant. (Fig. 4A) T0/T1/T2 Heart rate (HR) A total of 8 studies compared the T0/T1/T2 heart rate (HR) of dezocine and saline. The results showed that the HR of the T0/T1 phase of the dezocine group was lower than that of the saline group (Fig. 5A B), T2 HR group was not statistically significant (Fig. 5C) 7 Safety outcomes of Dezocine (AEs) A total of 11 studies compared the incidence of adverse events between dezocine and saline. The results showed that the adverse events in the dezocine group were lower than those in the saline group [OR=0.53, 95% CI (0.39-0.71), P<0.0001] (Fig. 7). In addition, taking into account the heterogeneity of the included studies, we used a funnel plot analysis, An additional funnel plot did not reveal apparent evidence of publication bias. (Fig. 8 Discussion Recent technology advances in minimally invasive surgical procedures and postoperative anesthetic management and have resulted in reductions in morbidity, enhanced rehabilitation, and an earlier resumption of daily activities. [22] With the improvement of surgical safety and the enhancement of anesthesia quality, the evaluation of restoration quality has become one of the significant end points of current mainstream research. [23] Inappropriate management of postoperative pain and loss of control can lead to complications and affect long-term recovery, which is associated with continued progression of chronic pain and a reduction in the quality of life of patients [24,25] . Opioids are now widely used for postoperative analgesia after various types of surgery, such as sufentanil and morphine. [26,27] However, opioid-related side effects and addiction can not be ignored, which prompted us to enthusiasm for the application of new anesthetics or combined drugs, reduce the clinical consumption of opioids, to the greatest extent to avoid adverse reactions and achieve encouraging postoperative analgesic efficacy. Dezocine is a κ opioid receptor agonist and an opioid receptor antagonist. This is an effective opioid analgesic with analgesic intensity, onset time and 8 duration similar to that of classic opioid morphine. [28] Opioids can achieve analgesic effects by preventing harmful stimulation in duced by central nervous impulses or by previously reducing central nervous system excitability, thereby reducing or eliminating central sensitization caused by injury. As an opioid receptor agonist-antagonist with more clinical use, dezocine mainly produces analgesic effect through agonistic κ receptor, and achieves the goal of preemptive analgesia by peripheral sensitization and central sensitization inhibition, and has appropriate sedative effect, not easy to produce tolerance. 
In addition, because dezocine does not produce typical μ receptor dependence, it can relax gastrointestinal smooth muscle and reduce the incidence of nausea and vomiting; the occurrence of skin itching is also related to the excitability of μ receptor, after application of dezocine. The blockage of the body also reduces the incidence of skin itching and improves the quality of postoperative analgesia. Previous studies have demonstrated that dezocine can reduce postoperative pain and reduce adverse reactions such as postoperative nausea and vomiting, respiratory depression, dizziness, and urinary retention. Our meta-analysis included 14 eligible RCT clinical studies with a total of 1820 patients. First, we systematically evaluated the visual analogue score (VAS) and the Ramsay sedation score. The results showed that dezocine had relatively superior sedative and analgesic effects. Secondly, we calculated the three vital signs of mean arterial pressure (MAP), heart rate (HR), and Pulse Oxygen Saturation (SpO2). The results showed that the application of dezocine could reduce postoperative MAP and HR. There was no significant correlation between the use of dezocine and SpO2. Then, we analyzed the incidence of adverse events, and the results showed that dezocine had fewer adverse events than the control group, suggesting that the safety of dezocine is better and more acceptable. By analyzing the aggregated data, we observed a degree of heterogeneity in the experimental articles in this meta-analysis. The source of heterogeneity may be due to differences in the dose of dezocine used, or it may be due to differences in the patient's original chronic disease. To ensure the convincing objectivity of the analysis results, we use a random effects model to analyze the aggregated data. We used a funnel plot to evaluate publication bias in the included studies and found that publication bias was not one of the factors that contributed to heterogeneity. 9 Critically, our research has some inevitable limitations. First, there are differences in the types of surgery and disease types involved in the included RCTs, which may lead to partial selectivity bias. Second, due to the limited number of RCTs currently available for dezocine, there is a relative irreversible heterogeneity between the included RCTs, which affects the reliability of some statistical results. At present, multi-center clinical trials for dezocine are being carried out all over the world. We are waiting for new data and results to provide further theoretical support for better clinical application of dezocine. 7.Availability of data and materials The analyzed datasets generated during the study are available from the corresponding author on reasonable request. 9.Author Contributions Haitao Niu was lead investigator and responsible for the overall design and leadership of the study as well as supervision of project staff, study inclusion, data analysis and manuscript development. Fengxi Hao and Zhongyuan Fan conducted the screening process (study inclusion), data extraction/analyses, and contributed to drafting of manuscript. Feng Chen conducted the literature searches. Haichen Chu and rest authors participated in designing the study and contributed to drafting the manuscript. All authors read and approved the final manuscript. Ethics approval and consent to participate Not required owing to the design of the research (systematic review). Consent for publication Not applicable.
2019-10-24T09:17:46.890Z
2019-10-18T00:00:00.000
{ "year": 2019, "sha1": "2730139ef0b8400c2c537fd8149e8dce3739aa9d", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-6844/v1.pdf", "oa_status": "GREEN", "pdf_src": "Adhoc", "pdf_hash": "01c8118da9ea7f78660694845678e33fde6833bc", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
18382111
pes2o/s2orc
v3-fos-license
Central Mode Water (DCMW). The WOA01 is found to be Abstract. Winter mixed layer characteristics in the North Pacific Ocean are examined and compared between Argo floats in 2006 and the World Ocean Atlas 2001 (WOA01) climatology for a series of named water masses, North Pacific Tropical Water (NPTW), Eastern Subtropical Mode Water (ESTMW), North Pacific Subtropical Mode Water (NPSTMW), Light Central Mode Water (LCMW) and Dense Central Mode Water (DCMW). The WOA01 is found to be in good agreement with the Argo data in terms of water mass volumes, average temperature-salinity (T-S) properties, and outcrop areas. The exception to this conclusion is for the central mode waters, DCMW and LCMW, whose outcropping is shown to be much more intermittent than is apparent in the WOA01 and whose T-S properties vary from what is shown in the WOA01. Distributions of mixed layer T-S properties measured by floats are examined within the outcropping areas defined by the WOA01 and show some shifting of T-S characteristics within the confines of the named water masses. In 2006, all the water masses were warmer than climatology on average, with a magnitude of about 0.5°C. The NPTW, NPSTMW and LCMW were saltier than climatology and the ESTMW and DCMW fresher, with magnitudes of about 0.05. In order to put these results into context, differences between Argo and WOA01 were examined over the North Pacific between 20 and 45° N. A large-scsale warming and freshening is seen throughout this area, except for the western North Pacific, where results were more mixed. Introduction Since the time of Iselin (1939), ocean scientists have been seeking to connect the distribution of water properties at the surface of the ocean to those found in the interior.Iselin noticed that interior properties were similar in temperaturesalinity (T-S) characteristics to those found at the surface.The work of Stommel (1979), Marshall et al. (1993), Huang and Qiu (1994) and Qiu and Huang (1995) and many others have laid the foundation for understanding the subduction of water from the surface ocean to where it might be observed underneath the surface sometime later.The basic result of this analysis is a subduction rate, which combines Ekman pumping and lateral induction to give a vertical mass transport into the ocean interior.While knowledge of the subduction rate can indicate how rapidly a particular water mass gets into the interior, the amount of water subducted will depend on the volume of a given water mass available, and the T-S properties of water observed in the interior depend on those at the surface when the water is subducted (Bingham et al., 2002). Mode waters have been observed in every world ocean except the North Indian (Hanawa and Talley, 2001).They were originally given that name because they represent a mode in a volumetric census of waters classified by temperature and salinity (Masuzawa, 1969), but more recently have come to be identified by vertical minima in potential vorticity or temperature or density gradient.Mode waters are among the most important subducted water masses because they can carry climate anomalies from the surface into the interior to resurface later (Sugimoto and Hanawa, 2005).They thus provide the ocean with a memory of wintertime conditions at the surface. 
Published by Copernicus GmbH on behalf of the European Geosciences Union.In the North Pacific, there are several varieties of mode waters (Hanawa and Talley, 2001), each with its own dynamics and formation processes.North Pacific Subtropical Mode Water (NPSTMW) is formed by strong cooling in the winter offshore of the Kuroshio and KuroshioExtension front (Bingham, 1992).North Pacific Central Mode Water has two varieties, Dense Central Mode Water (DCMW) and Light Central Mode Water (LCMW) (Oka and Suga, 2005).These waters are formed between the Kuroshio and Subpolar fronts and probably in association with eddies and other mesoscale variability (Oka and Suga, 2005;Saito, personal communication).Eastern Subtropical Mode Water (ESTMW) has temperature and salinity characteristics similar to NPSTMW, but is formed in the eastern North Pacific as a result not of strong wintertime cooling, but due to weak summer heating, and a consequent weak seasonal pycnocline (Hautala and Roemmich, 1998;Ladd and Thompson, 2000). Another important North Pacific water mass is the North Pacific Tropical Water (NPTW; Suga et al., 2000).This water mass (which partially overlaps the ESTMW) is associated with high salinity at the surface and Ekman convergence in the middle of the subtropical gyre.It is also seen in the interior as a subsurface salinity maximum (Bingham et al., 2002). Recently, the Argo program (Argo Science Team, 2001) has developed the ability to measure the wintertime mixed layer of the ocean to an unprecedented degree.Argo floats can profile and measure the properties at times when surface ships cannot make such measurements.Ohno et al. (2004) examined winter mixed layer depth (MLD) using Argo float data.They found that the World Ocean Atlas 2001 (WOA01; Conkright et al., 2002) MLDs generally agreed with those measured by floats, except in the northwest Pacific where the WOA01 underestimated the MLD south of the Kuroshio Extension front and overestimated the MLD north of the front.Ohno et al. (2004) attributed the disagreement to smoothing across either the temperature/salinity front or the mixed layer front in the WOA01 as suggested by Suga et al. (2004). Water mass formation is a crucial process in understanding and modeling ocean circulation (e.g.Xie et al., 2000) and a continuing challenge to ocean modelers (e.g.Tsujino and Yasuda, 2004;Qu et al., 2002).One of the most critical aspects of models is proper depiction of the surface mixed layer.Often, the mixed layer boundary condition relaxes to that given in some version of the World Ocean Atlas, the most current version of which was released in 2001 (Conkright et al., 2002).An important issue for ocean models is to understand how well the WOA01 and other such climatologies represent the mixed layer in terms of T-S characteristics, geographic areas and water mass volumes.Only if models have proper surface boundary conditions can the water mass formation and subduction process be accurately simulated.For that reason, the main question to be addressed in this paper is: How well does the WOA01 depict the T-S properties and outcropping regions of some of the important water masses in the North Pacific?This question will be examined by comparing the wintertime mixed layer measured by Argo floats and that depicted in the WOA01.Given the heavy smoothing done in creating the WOA01, one would expect some discrepancies as shown by Ohno et al. (2004).This study extends that of Ohno et al.. 
(2004) by examining water mass volumes, outcrop areas and T-S properties of several different water mass formation areas.Overall, the conclusion we will come to is that the mixed layer is depicted pretty well with respect to subtropical water masses, but less so with the central mode waters outcropping north of the Kuroshio extension. Data and methods Data for this study come from two sources, Argo profiles and the WOA01. The Argo profiles we used were collected during the winter months of January-March 2006 (Fig. 1).We also examined Argo profiles from 2004 and 2005, but the data distribution is sparser.Results from these years were similar to those presented here.Each float spends 10 days between profiles.The 4997 profiles from January-March 2006 represent returns from 589 separate floats.The spatial coverage is relatively even, except for heavier sampling near the Kuroshio and some poorly sampled regions in the northwestern and western tropical Pacific.Initial data processing and quality control, described by Oka et al. (2006) 1 , consist mainly of Argo's real-time quality control plus visual inspection for suspect data.MLD was calculated for each profile as the depth where sigma-t exceeds that at 10 m depth by 0.125.This criterion is less strict than that recommended by de Boyer Montegut et al. (2004) (who used a criterion of 0.03 sigma-t), but similar to that determined by Kara et al. (2000) (who used a more complex criterion that approximates an isothermal depth of 0.8 • C).The 0.125 criterion is standard for use with the WOA01 data (e.g.Sugimoto and Hanawa, 2005) and we wished to handle the calculation of MLD consistently between the datasets we used.We are using average MLD calculated by two different methods here.One method (method 1) uses individual Argo floats, calculates MLD from each float and then averages the MLDs.The other (method 2) takes averaged hydrographic profiles (the WOA) and calculates the MLD from those averaged profiles.De Boyer Montegut et al. (2004) have carefully considered these different methods, showing one example of how the average MLD calculated by method 2 can be less than that from method 1.They find that globally the method 1 MLDs are 25% greater than method 2. They also suggest that method 1 may result in overestimation of the MLD when using large difference criteria like the one we use. Mixed layer temperature (MLT) and salinity (MLS) were given for each profile as the temperature and salinity at 10 m depth.We present results using January-March data all treated in the same way and averaged together.There is some indication (Oka et al., 2006 1 ) that the MLD reaches a maximum in different areas of the ocean at different times of the winter.To make sure our results were not biased due to averaging the entire winter together, we re-ran all calculations in this paper using March-only Argo profiles and the March WOA01 average.The results were very similar, but with less certainty due to a smaller number of data. 
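As a concrete illustration of the mixed layer depth criterion used here (the depth at which sigma-t first exceeds its 10 m value by 0.125), the short sketch below operates on a single profile; the arrays, names, and linear interpolation between bracketing levels are our own choices.

```python
import numpy as np

def mixed_layer_depth(depth, sigma_t, ref_depth=10.0, criterion=0.125):
    """Depth where sigma-t first exceeds its value at ref_depth by `criterion`.

    depth, sigma_t : 1-D arrays for one profile, ordered from shallow to deep
    Returns np.nan if the criterion is never reached within the profile.
    """
    sigma_ref = np.interp(ref_depth, depth, sigma_t)   # sigma-t at 10 m
    target = sigma_ref + criterion
    below = np.where((sigma_t >= target) & (depth > ref_depth))[0]
    if below.size == 0:
        return np.nan
    j = below[0]
    # Linear interpolation between the last level above the threshold and the first below it.
    d0, d1 = depth[j - 1], depth[j]
    s0, s1 = sigma_t[j - 1], sigma_t[j]
    return d0 + (target - s0) * (d1 - d0) / (s1 - s0)

# Example: an idealized winter profile with a mixed layer of roughly 120-170 m.
z = np.arange(0, 500, 10.0)
sig = 25.0 + 0.125 * np.clip((z - 120.0) / 50.0, 0, None) + 0.6 * np.clip((z - 170.0) / 300.0, 0, None)
print(mixed_layer_depth(z, sig))
```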
In order to calculate water mass volumes, Argo MLD, MLT and MLS were interpolated onto one degree squares in the North Pacific.For a given 1 degree latitude-longitude grid point, we searched for profiles within 2 degrees of the grid point.If no profiles were found, the search radius was increased to 3 degrees, and so on up to 10 degrees.Once one or more profiles were found within a given radius, MLD, MLT and MLS values were averaged together using a Gaussian weighting function with a 1.5 degree e-folding scale.The full 10 degree search radius was rarely used.90% of the one degree squares had profiles within 4 degrees latitudelongitude distance of the grid point. The WOA01 comes already averaged onto a 1 degree grid (Stephens et al., 2002;Boyer et al., 2002).We used the winter seasonal gridded profiles from the North Pacific Basin (Conkright et al., 2002), which are averaged over January-March.MLT and MLS were given as the values at 10 m depth.MLD was calculated using the criterion mentioned above.This is the same calculation as that done by Suga et al. (2004). Volumes were calculated by temperature-salinity (T-S) class in ranges of (0.5 • C, 0.05).For each one degree square with a particular value of temperature and salinity, the volume of that water was calculated as the surface area of the one degree square times the MLD.The total volumes for each T-S class were added up with the results presented as two dimensional volumetric censuses for both Argo 2006 and WOA01 (Fig. 2). In the pictures of Fig. 2, what is shown is the volume assuming the T-S properties of the water are constant throughout the mixed layer.This assumption is probably true for the most part in the real ocean, where the mixed layer ends at the top of the thermocline and sigma-t increases abruptly by more than the 0.125 criterion.However, this is a somewhat problematic assumption for this calculation using the WOA01, because the mixed layer by definition changes in density between the surface and the base.It would probably be more accurate to do this volume calculation for the entire depth of the mixed layer taking vertical T-S variation into account.This problem is resolved somewhat by the choice of bin width in Figs.2a and b, 0.5 • C and 0.05.These values give a sigma-t difference across the bin of about the same size as the mixed layer criterion of 0.125, depending on the temperature and salinity value.Thus it is unlikely that the considerably more painstaking and error-prone calculation described would yield significantly different results.Another issue in Fig. 2 and Table 1 is the significance of the calculated numbers.We adopted the following procedure for calculating the significance of the volumes in Fig. 2. Since the numbers are weighted averages, we used weighted standard deviations for each one-degree square to compute a standard error, the standard deviation divided by the square root of the number of observations.Generally these standard errors were very small.If presented on the same scale as Fig. 2b, the plot would be completely white.These standard errors were added up in a "square root of the sum of the squares" sense to get total errors for the Argo 2006 volumes shown in Table 1.No such calculation could be done for the WOA01 data. Results The distribution of mixed layer volume from the WOA01 (Fig. 2a) reflects in part the distribution of water in the main thermocline in the subtropics, especially in the temperature range of 10 to 20 • C. 
A mode in volume is seen with T and S range 18-20 • C and 34.75-34.85.This water is the surface expression of NPSTMW.This density is somewhat lighter than classically defined NPSTMW (Masuzawa, 1969) The Argo 2006 data show clear delineations of most of the major water masses (Fig. 2b) with stronger and clearer peaks.NPSTMW is the most apparent peak, centered at 18-19 • C, 34.8-34.9.There are also peaks for NPTW (24 • C, 35.2-35.3)and ESTMW (20 • C, 35.2) .There is a peak at (15-16 • C, 34.5-34.6)that may be either light LCMW or dense NSPTWM.It does not fit exactly in the range of either as defined in Table 1.There is a volume mode that corresponds to DCMW (9.5-11 • C, 34.2-34.3),but is saltier and warmer than usual (Oka and Suga, 2005; Oka et al., 2006 1 ). The most striking contrast between the WOA01 and Argo 2006 volume distributions is the water found to the fresh side of the main thermocline in the WOA01.The signal of this water is weaker in the Argo data.It reflects a tongue of cold, fresh water close the the west coast of North America (see e.g.Suga et al., 2004, Fig. 3g).In the WOA01, this tongue is spread into the interior by the averaging process and increased in volume beyond what is apparent in the Argo data. The North Pacific Hydrobase mixed layer climatology (Suga et al., 2004) was examined in the same way, with volume calculated.It showed a distribution similar to that of the WOA01, so results are not displayed here.This implies that the Hydrobase suffers the influence of smoothing even though the purpose was to minimize this type of problem. We now focus on some named water masses from the North Pacific, NPTW, NPSTMW, ESTMW, LCMW and DCMW.A summary of the T-S classifications and calculated total winter mixed layer volumes for each water mass are presented in Table 1 and water mass T-S boundaries are shown in Fig. 2c.In general the mixed layer volumes of the various water masses are remarkably similar between the WOA01 and Argo.This indicates that the WOA01 does a good job of depicting the volume of each water mass, but spreads that volume out somewhat in T-S space.Some discrepancies exist.For example, the NPSTMW volume is larger for Argo than for the WOA01 perhaps because Argo mixed layers are deeper for NPSTMW (Ohno et al., 2004).The LCMW volume is about 100% larger in the WOA01 than in Argo. Given the randomized nature of the Argo sampling, it makes sense to compare outcrop areas derived from individual profile T-S properties with those in the WOA01 for the various water masses.This is done in Fig. 3.The NPTW distribution (Fig. 3a) shows that the Argo float characteristics generally match in location with the WOA01 with the blue symbols matching the gray areas.There are some discrepancies, especially in the northwest and southeast corners of the WOA01 outcrop area and along the southern edge. The other water masses also show general agreement between Argo and WOA outcrop areas.The water masses where the WOA01 and Argo data are most at odds are both the LCMW and DCMW (Figs. 3d and e).From Argo data, there appears to be no area of pure central mode water (CMW) outcrop.Non-CMW floats mixed up with CMW floats both inside and outside the gray areas.This is likely a result of the nature of DCMW and LCMW formation (Saito, personal communication).These water masses do not have consistent outcrops, but appear within the context of mesoscale features spun off from the Kuroshio and Oyashio extensions. 
There are two types of discrepancies between float data and the WOA01 in Figs.3a-e.One is where the float measured T-S characteristics of a particular water mass at 10 m, but was outside of the area given by the WOA01 (case 1; blue symbol outside of gray area in Fig. 3).The other is where a float measures water properties outside that of the given water mass, but is within the area where that water mass is shown by the WOA01 (case 2; green symbol inside gray area).Finally, there is the matching case where a float is within the characteristics of a given water mass and is also within the area shown by the WOA01 (case 3; blue symbol inside gray area).These cases are more easily visualized by use of a Venn diagram in Fig. 4. Inside the oval on the right (left) is the set of floats which match the geographic area (T-S characteristics) of a particular water mass.The intersection of the two ovals is the set of floats that match both. To give an idea of how well the floats measure the area of the various water masses, the ratios of numbers of float profiles is shown in Table 2.In general, the floats came up with the predicted characteristics most of the time in the subtropical water masses, especially for the NPSTMW and ESTMW.The results matched less well for the central mode waters.A float measuring DCMW (LCMW) had a 49% (46%) chance of surfacing outside of the outcrop area as defined by the WOA01.46% (44%) of the floats surfacing within the outcrop area did not have DCMW (LCMW) Ocean Sci., 2, [61][62][63][64][65][66][67][68][69][70]2006 www.ocean-sci.net/2/61/2006/characteristics.This tendency is mirrored in the set of matching cases in the fourth column of Table 2 showing a low percentage of matches for LCMW and DCMW, but a high percentage for ESTMW and NPSTMW.These discrepancies highlight the extremely intermittent nature of CMW formation.They are in good agreement with the results of Qu et al. (2002) who found CMW formation to be strongly associated with eddies.These central mode waters could be said not to outcrop in a particular area, but to surface from time to time in a large and ill-defined region of the northwestern North Pacific.The NPTW also has a large number of discrepancies and low number of matches.This may have to do with a general warming of the basin observed in the floats relative to the WOA01 as will be discussed below.We did a basin-wide average and found that, over the entire North Pacific, the floats were warmer than the WOA01 by about 0.5 • C. Most of the discrepancies of the number 2 type were because the observation was warmer than the WOA01. Because surfacing floats may have properties different from the WOA01, it is worthwhile to examine the medians and standard deviations of T-S properties of floats within a given area.This will tell us if the floats are measuring characteristics very different from the WOA01.This is done in Fig. 2c, where the medians are shown for each water mass with standard deviation bars.These are the medians and standard deviations for all floats surfacing in the area defined for a particular water mass by the WOA01 (gray areas in Fig. 3, cases 2 and 3 in the previous paragraph).The distributions fall well within the range stated in Table 1 for the warmer water masses.The LCMW and DCMW standard deviation bars extend well outside the range, but the medians are inside. 
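The bookkeeping behind Table 2 and the Venn diagram of Fig. 4 reduces to two boolean tests per profile, as in the following sketch (Python; the input structures and the outcrop lookup are hypothetical placeholders).

```python
def classify_floats(profiles, ts_box, outcrop_lookup):
    """Sort float profiles into the three cases discussed in the text.

    profiles       : iterable of dicts with 'lat', 'lon', 'mlt', 'mls' (mixed layer T and S)
    ts_box         : (tmin, tmax, smin, smax) defining the water mass in T-S space
    outcrop_lookup : function (lat, lon) -> True if the 1-degree square lies in the WOA01 outcrop area
    Returns counts for case 1 (T-S match only), case 2 (area match only), case 3 (both).
    """
    tmin, tmax, smin, smax = ts_box
    counts = {1: 0, 2: 0, 3: 0}
    for p in profiles:
        in_ts = tmin <= p["mlt"] < tmax and smin <= p["mls"] < smax
        in_area = outcrop_lookup(p["lat"], p["lon"])
        if in_ts and in_area:
            counts[3] += 1
        elif in_ts:
            counts[1] += 1
        elif in_area:
            counts[2] += 1
    return counts

# Ratio as in Table 2, e.g. the fraction of T-S-matching floats that surfaced outside the WOA01 area:
# counts[1] / (counts[1] + counts[3])
```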
Despite the fact that the various water masses are generally found within the outcrop areas predicted by the WOA01, there is significant T-S variability between the floats and the WOA01.To highlight this point we did the following analysis.For each float that surfaced in a water mass region (gray areas in Fig. 3), we took the difference between the float and the value taken from the WOA01 where the float surfaced.In other words, if the float surfaced and measured a mixed layer temperature of, say, 10 • C, while the value of the WOA01 at the same one degree square was 9 • C, we recorded the temperature difference as 1 • C. A similar analysis was done for salinity.Histograms of those temperature and salinity differences are displayed in Figs.5a-b. The temperatures of the various water masses in 2006 are generally biased high, with the floats measuring warmer temperatures than indicated by the WOA01 (Fig. 5a).The DCMW histogram appears closest to being symmetric about zero, but is still biased somewhat warm. The 2006 salinity histograms are more mixed (Fig. 5b).Two water masses are fresher than indicated by the WOA01 (DCMW and ESTMW) and the rest are saltier. To put the Fig. 5 results into context we did a similar analysis for the entire North Pacific (Fig. 6).For the temperature, this shows that Argo floats were warmer than climatology over a broad swath of the tropical North Pacific for 2006 (Fig. 6a).The mode water formation areas of the northwestern North Pacific are a special case.There we see a mixture of cold and warm floats, blue and red symbols in close proximity.In this view, it is difficult to see the same trend in temperature in the mode water formation areas that we saw in Fig. 5a.The ESTMW and NPTW areas are more central and clearly warmer than climatology as shown in Figs.5a and 6a. For salinity, the North Pacific is fresher than climatology for a large area south of about 25 • N, wrapping around into the northeastern and northwestern basins(Fig.6b).This matches the freshening of the ESTMW seen in Fig. 5b.An area of the central North Pacific, centered around 30 • N, 160 • E is saltier than the WOA01.This salinification of the NPSTMW formation area is consistent with curve of Fig. 5b. Discussion Overall, the WOA01 and 2006 Argo floats show the outcrop areas of some major North Pacific water masses to be very similar, except for the central mode waters (Fig. 3).The volumes of the water masses agree well between the two data sets (Table 1) as do the T-S characteristics (Fig. 2c), again with the exception of the Central Mode waters.Suga et al. (2006) 2 computed a subduction transport as a function of temperature and salinity class, similar to Fig. 2a for the WOA01.That is, they calculated the subduction rate at each one degree square, multiplied it by the surface area, and summed the transport up for each T-S class.The result is a calculation of water mass volume subducted in a year.The amount of water subducted in a year in a one degree square should be equal to a fraction of the depth of the late winter mixed layer, multiplied by the surface area.That is, once the winter is over, one would expect some fraction (1/2?, 2/3?) 2 Suga, T., Aoki, Y., Saito, H., and Hanawa, K.: Ventilation of the North Pacific subtropical pycnocline and mode water formation, Prog.Oceanogr., review, 2006. 
of the water in the mixed layer at the end of winter to be inducted into the interior circulation depending on the subduction rate, meridional slope of the mixed layer base, the depth of the spring seasonal thermocline, etc.Comparison of Suga et al.'s (2006) 2 results and what is presented here is consistent with this expectation.Our water mass volumes are generally larger than their subduction volumes but by less than an order of magnitude.This gives confidence in both the present study and in their more complicated calculation.The formation of NPSTMW, ESTMW and NPTW is wellrepresented in most eddy-resolving general circulation models (e.g.Tsujino and Yasuda, 2004) (Qu et al., 2002).The present study can give a clue as to why it has been difficult to simulate the formation of central mode waters.The reliance on relaxation back to the WOA01 or other climatology could introduce problems into a model due to the difference between climatological mixed layer and what is actually present.The formation process of central mode waters is fundamentally different from the other water masses discussed here in that it occurs intermittently in space and time (Saito, personal communication). The isopycnals on which these water masses circulate are not open to the atmosphere on a regular basis over a well-defined region like the other water masses studied.Figure 6 indicates that as a whole in 2006 the North Pacific mixed layer was fresher and warmer than average.These changes encompassed the vast majority of the tropics and eastern and northeastern basins.On the other hand, the mode water formation areas were much less clear as shown in Figs. 5 and 6.This illustrates the fundamentally different nature of surface processes in these areas in winter.Surface properties in the mixed layer are controlled by wintertime heat loss and subsequent convection.The mode water formation areas have a number of fronts within them, which makes the determination of the float sampling a matter of geography.Whether a float measures warmer or cooler (or fresher or saltier) than climatology depends mostly on which side of a local front the float happens to surface on.This makes determination of interannual variability of the T-S properties of mode water formation areas trickier than other regions.Interannual variations may be much more in the nature of shifts in the positions of fronts than changes in T-S properties. We can only speculate here on the reasons for the T-S differences between WOA01 and Argo shown in Fig. 6.Most likely, they are due to interannual variability.That is, surface waters in 2006 happened to be particularly fresh and warm over much of the North Pacific.Similar results were obtained but not shown for 2004 and 2005.Though this is the obvious explanation, there are others possible.Differences could be a result of spatial or temporal sampling biases in the way the floats surfaced.This is most likely a problem for the DCMW and LCMW formation areas, which were not well-sampled by floats in 2006 (Fig. 3e).Another potential issue is biases introduced into the WOA01 in the smoothing and averaging process.Whatever the reasons for the observed differences, the WOA01 will be used in the future as a benchmark against which changes can be measured. 19 Figure 1. Fig. 2 .Fig. 3 . Fig. 2. 
(a) and (b) Distribution of water volume in the mixed layer by temperature and salinity class. Temperature and salinity are summed over ranges of 0.5 °C and 0.05, respectively. (a) WOA01. (b) Argo 2006. (c) T-S diagram showing the boundaries of the water masses discussed in the text and shown in Table 1. Also shown are the medians and standard deviations of T-S properties for Argo 2006, cases 2 and 3 in the text. The median is indicated by letters: T - NPTW; E - ESTMW; N - NPSTMW; L - LCMW; D - DCMW. Standard deviations are indicated by bars. Potential density contours are shown in panel (c). (d) Color scale for panels (a) and (b). Fig. 4. Venn diagram illustrating the comparison made in Fig. 3 and Table 2. The shaded oval represents the set of floats that surfaced within the shaded areas of Fig. 3. The blue oval represents the set of floats in Fig. 3 that are colored blue, i.e. had the T-S characteristics of the given water mass. Cases as indicated in the text are shown with numbers. Table 1. Winter mixed layer temperature-salinity characteristics and volumes of the given water masses. Table 2. Columns 2 and 3 represent discrepancies between the numbers of floats in 2006 and the water properties given by the WOA01, as described in the text. Column 4 represents the matching case, the percentage of floats which matched in both geographic area and T-S characteristics. Simulating the formation of central mode waters in such models, however, has been more difficult. One reason suggested for this is that restoring models to observed SSS and SST double counts the heat and salt transport by western boundary currents and their extensions, leading to warm biases in the western and central mode water formation areas.
2014-10-01T00:00:00.000Z
2006-07-20T00:00:00.000
{ "year": 2006, "sha1": "ced733154d6414938cf723f1226393c4a1bb312e", "oa_license": "CCBYNCSA", "oa_url": "https://os.copernicus.org/articles/2/61/2006/os-2-61-2006.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "acaa289724b3026d0f8d06a8376627ccff399f5b", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Geology" ] }
119260872
pes2o/s2orc
v3-fos-license
The size of the largest antichains in products of linear orders We present new exact and asymptotic results about the size of the largest antichain in the product of $n$ linear orders. Introduction Antichains in the poset {0, 1} n equipped with the standard partial ordering are well-studied and have many different interpretations (e.g. Ersek Uyanık et al., 2017). An expression for the maximal size of such antichains in {0, 1} n is given by a classical theorem of Sperner (1928). If we consider the more general poset {1, . . . , m} n also equipped with the standard partial ordering, an expression for the size of the largest antichain is given in Sander (1993). Sander also provides asymptotic results when m is fixed and n goes to infinity. The interest of Sander in this problem arose form a recreational mathematics problem posed in Motek (1986). Actually, antichains and, hence, maximal antichains, in the poset {1, . . . , m} n are of interest in many domains. For instance in game theory, Hsiao and Raghavan (1993) define a multichoice cooperative game as a real-valued mapping on {1, . . . , m} n , where n is the number of players and {1, . . . , m} denotes the set of ordered actions that each player can take. A profile in such a game is a vector x = (x 1 , . . . , x n ) ∈ {1, . . . , m} n and represents the actions taken by each agent. A winning profile is such that the value of the game at that profile is 1. A winning profile x is minimal if there is no other winning profile y such that y ≤ x. If a game is monotone, then the set of all minimal winning profiles is an antichain. Besides, Grabisch (2016) shows that antichains in {1, . . . , m} n play an important role in the analysis of these multichoice games. Our personal interest in antichains in the poset {1, . . . , m} n stems from the analysis of a multicriteria sorting model presented in Fernández et al. (2017). In this context, the size of maximal antichains corresponds to the maximum number of profiles needed to represent a twofold ordered partition in their model, whenever such a representation is possible. Another paper about antichains in {1, . . . , m} n is Tsai (2018): it presents an upper bound for the number of antichains (a generalization of Dedekind numbers). In the present paper, we will extend Sander's results in two directions. First, we will present an exact expression for the size of the largest antichain in the heterogeneous product Π n i=1 {1, . . . , m i }. Then, we will provide asymptotic results for the size of the largest antichain in {1, . . . , m} n when n is fixed and m goes to infinity. Notation and definitions Let P be a set and ≤ be a binary relation defined on P , satisfying (i) reflexivity (∀x ∈ P, x ≤ x), (ii) antisymmetry (∀x, y ∈ P, x ≤ y and y ≤ x ⇐⇒ x = y) and (iii) transitivity (∀x, y, z ∈ P, x ≤ y and y ≤ z ⇒ x ≤ z). The pair (P, ≤) is called a partially ordered set (poset) 1 . When there is no ambiguity, the poset (P, ≤) is simply denoted by P . For all x, y belonging to a poset P , we say that x and y are comparable if x ≤ y or y ≤ x. A chain of P is a totally ordered subset of P . A linear order on P is a poset such that P is a chain. An antichain of P is a subset of pairwise incomparable elements. A largest antichain is an antichain of maximal cardinality. Let (P, ≤ P ) and (Q, ≤ Q ) be two posets. 
The product poset (P × Q, ≤) is defined to be the set of all pairs (a, b), a ∈ P, b ∈ Q, with the order given by , where ≤ is the usual ordering of the natural numbers, is a linear order (also called a chain). The product of these n linear orders is the poset is also called a direct product of chains (Caspard et al., 2012). When m is such that m i = m for all i ∈ [n], then the Cartesian product Π n i=1 [m i ] is homogeneous and can be written as [m] n . The size of the largest antichains in Π n i=1 [m i ] and [m] n will respectively be denoted by s(m) and S(m, n). Sperner (1928) has proved that the size of the largest antichain in [2] n is S(2, n) = n n/2 . When n is large, a convenient approximation for S(2, n) is obtained using Stirling's formula: S(2, n) ∼ 2 n 2/πn. Later, Sander (1993) has proved that the size of the largest antichain in [m] n is with g = n(m−1)/2 . Sander has also provided a bound 2 and some asymptotic results for S(m, n) when m is fixed. Notice that S(m, n) corresponds to Sequence A077042 in the Online Encyclopedia of Integer Sequences (OEIS, 2019). Section 3 is devoted to the general case of heterogeneous products and presents some exact results about s(m). In Section 4, we consider the special case of homogeneous products and we present a new exact result about S(m, n) and also an asymptotic result when n is fixed. Heterogeneous product Let us define m I = i∈I m i and Our result about heterogeneous products is the following. Before proving this result, we recall some definitions and results about posets. Let (P, ≤) be a poset. For any x, y ∈ P , we say that y covers x in P iff x < y and there is no z such that x < z < y. A ranking (or grading) of a poset P is a partition of P into (possibly empty) sets P i (i ∈ Z) such that, for each i, every element in P i is covered only by elements in P i+1 . The set P i is called the ith rank of P . If a poset admits a ranking, then we say that it is ranked (or graded). The Whitney numbers of a ranked poset P are {p i : i ∈ Z}, where p i is the cardinality of P i . Let p k be a largest Whitney number of a ranked poset; we The Whitney numbers are said to be symmetric, or P is said to be rank-symmetric, if there exists a d such that p i = p d−i for all i. Let P and Q be ranked posets and R = P × Q. The rankings of P and Q induce the following ranking on R: and R has Whitney numbers A k-family is a subset of P containing no chain of size k + 1. Equivalently, a k-family is a union of k (possibly empty) antichains, so that if P is ranked, any union of k ranks is a k-family. P is said to be Sperner if the rank of largest size is an antichain of maximum size, and P is k-Sperner if the union of the k largest ranks is a k-family of maximum size. P is strongly Sperner if it is k-Sperner for all k ≥ 1. P is a Peck poset if it is strongly Sperner, rank-unimodal, and rank-symmetric. Theorem 3.2 in Proctor et al. (1980) shows that the product of two Peck posets is a Peck poset. Proof of Theorem 1. In this proof, for the sake of brevity, we use X to denote the poset ( , the poset ([m i ], ≤) is a Peck poset. Hence, n − 1 applications of Theorem 3.2 in Proctor et al. (1980) show that X is also a Peck poset. Let us consider a ranking of X such that the minimal element (1, . . . , 1) in X has rank n, which is the sum of the coordinates of the minimal element. The maximal element (m 1 , . . . , m n ) has rank i∈[n] m i . Let p i j be the jth Whitney number of ([m i ], ≤); it is equal to 1 for each non-empty rank. 
Since X is rank-unimodal, and rank-symmetric, a maximal Whitney number corresponds to the median rank h (defined by (2)). Because of the Sperner property, this rank is also an antichain of maximum size and n − 1 applications of (4) show that the size of this antichain is Define i n = h − j∈[n−1] i j and we obtain Hence, an antichain of maximum size in X is the set x i = h} and a generating function for s(m) is defined as follows, for all x ∈ R, |x| < 1, We also have We denote by c p (A(x)) the coefficient of x p in the polynomial A(x). Hence s(m) = c h (f (m, x)). A property of products of polynomials, which extends to absolutely convergent series, is: In our case, we have For |x| < 1, we have 2 Table 1 illustrates how s(m) varies as a function of m. In particular, we see that increasing one of the m i 's way above the others has a limited impact. When all components of m are identical, it is easy to show that (3) coincides with Sander's expression (1). In that case, Sander's expression is computationally more efficient than ours. This section about heterogeneous products does not contain any asymptotic result because it does not seem relevant to let one of the parameters, say m 5 , go to infinity while keeping the other parameters constant. Homogeneous product Let h = n(m + 1)/2 . Our result about homogeneous products is the following. Theorem 2. For all n ≥ 2, if n(m + 1) is even, then S(m, n) is equal to Otherwise, S(m, n) is equal to Proof. We only prove (6). The proof of (7) is similar. We have seen in the proof of Theorem 1 that an antichain of maximum size in Π i∈ [n] . Hence, if n(m + 1) is even, then an antichain of maximum size in [m] n is the set A = {x ∈ [m] n : i∈[n] x i = h}, with h = n(m + 1)/2. Since 1 ≤ x n ≤ m, if we project the set A on [m] n−1 by dropping the last coordinate x n , we obtain the set Since no x, y ∈ A are comparable, we know that no distinct x, y ∈ A project on the same element in [m] n−1 . Hence |A | = |A| and S(m, n) is equal to where the last equality holds because Let us rewrite {y ∈ [m] n−1 : i∈[n−1] y i ≤ h − m − 1} as the union of several sets: Clearly, for any r = s, A r ∩ A s = ∅ and and C l * r = B r \ C l r . Then A r = l∈[n−1] C l * r and, thanks to the inclusionexclusion principle, where the last equality holds because all dimensions play the same role. The set B r is a regular (n − 2)-dimensional simplex. Its cardinality is equal to the (r − n + 1)-th simplicial polytope number in n − 2 dimensions (Kim, 2002), that is The set l∈[i] C l r is the set of all elements of N n−1 + such that at least i components are strictly larger than m. If r < i(m−1)+n−1, then l∈[i] C l r is empty because it is not possible to have at least i components strictly larger than m. Hence where j is the largest integer such that r ≥ j(m−1)+n−1. If r ≥ i(m−1)+n−1, then Combining (8), (9), (10) and (11) concludes the proof. 2 Expressions (6) and (7) are less elegant than Sander's expression (1). They are also computationally less efficient. Indeed the first summation in (6) has approximately nm/2 terms while the only summation in (1) has approximately n/2 terms. Expressions (6) and (7) are nevertheless interesting because they allow us to derive an asymptotic result for S(m, n) when n is fixed and m → ∞ (see Theorem 3). This was not possible with (1). When n < 5, expressions (6) and (7) reduce to particularly simple expressions. 
For n = 2, 3 or 4, the asymptotic behaviour of S(m, n) is easy to derive from this corollary, while the general case is covered by our next result.
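As an illustration (not part of the original text), the following short Python sketch computes s(m) as the coefficient c_h(f(m, x)) of the generating function f(m, x) = Π_{i∈[n]}(x + x² + ⋯ + x^{m_i}), with h = ⌊(n + Σ_{i∈[n]} m_i)/2⌋ as in (2); it also checks the homogeneous case against Sperner's value S(2, n) = \binom{n}{\lfloor n/2 \rfloor}.

from math import comb

def largest_antichain_size(m):
    # s(m): number of grid points of [m_1] x ... x [m_n] whose coordinate sum
    # equals the median rank h = floor((n + sum(m)) / 2)
    n = len(m)
    h = (n + sum(m)) // 2
    poly = [1]  # coefficients of the running product of the factors x + x^2 + ... + x^{m_i}
    for mi in m:
        new = [0] * (len(poly) + mi)
        for p, c in enumerate(poly):
            for k in range(1, mi + 1):
                new[p + k] += c
        poly = new
    return poly[h]  # c_h(f(m, x))

for n in range(2, 8):  # homogeneous check against Sperner's theorem
    assert largest_antichain_size([2] * n) == comb(n, n // 2)

print(largest_antichain_size([2, 3, 4]))  # small heterogeneous example, prints 6

For large grids one would of course use the closed-form expressions of Theorems 1 and 2 instead; the sketch is only meant to make the coefficient-extraction argument of the proofs tangible.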
2019-03-18T17:04:33.000Z
2019-03-18T00:00:00.000
{ "year": 2019, "sha1": "38592cb94f9277db88e858a138cfa701dbb5fad1", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "38592cb94f9277db88e858a138cfa701dbb5fad1", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
233445223
pes2o/s2orc
v3-fos-license
Analysis of single comments left for bioRxiv preprints till September 2019 Introduction While early commenting on studies is seen as one of the advantages of preprints, the type of such comments, and the people who post them, have not been systematically explored. Materials and methods We analysed comments posted between 21 May 2015 and 9 September 2019 for 1983 bioRxiv preprints that received only one comment on the bioRxiv website. The comment types were classified by three coders independently, with all differences resolved by consensus. Results Our analysis showed that 69% of comments were posted by non-authors (N = 1366), and 31% by the preprints’ authors themselves (N = 617). Twelve percent of non-author comments (N = 168) were full review reports traditionally found during journal review, while the rest most commonly contained praises (N = 577, 42%), suggestions (N = 399, 29%), or criticisms (N = 226, 17%). Authors’ comments most commonly contained publication status updates (N = 354, 57%), additional study information (N = 158, 26%), or solicited feedback for the preprints (N = 65, 11%). Conclusions Our results indicate that comments posted for bioRxiv preprints may have potential benefits for both the public and the scholarly community. Further research is needed to measure the direct impact of these comments on comments made by journal peer reviewers, subsequent preprint versions or journal publications. Introduction The practice of sharing preprints, authors' versions of non-peer reviewed manuscripts, is on the rise. Once almost exclusively limited to the fields of high energy physics and economics on arXiv, RePec and SSRN preprint servers, preprints have gained much ground across a wide range of disciplines, including medical biochemistry and laboratory medicine (1). Preprints are also increasingly indexed in large scholarly databases and search engines (e.g., PubMed, Crossref, Lens, Dimensions, Microsoft Academic), and major manual referencing styles have issued guidance on how preprints should be cited in scholarly papers (2,3). Meta-re-search on preprints, however, remains scarce and is mostly limited to the explorations of two servers: arXiv (which includes sections on biomolecules and genomics) and bioRxiv (which includes sections on biochemistry and genomics). This limited research has shown that citation of preprints in scholarly literature had increased, and that articles first posted as preprints had higher citations rates and Altmetric scores than those not posted as preprints (2). Additionally, only minimal changes were found between preprints and the versions (of record) published in journals (4). Electronic supplementary material available online for this article. Malički M. et al. Analysis of comments left for bioRxiv preprints In 2020, the COVID-19 pandemic led to a large increase in the posting of preprints, as well as scrutiny and the number of comments they received on both social media platforms (e.g., Twitter) and comment sections of servers on which they are posted, with some comments prompting preprint retractions (5,6). However, despite 70% of preprint servers allowing users to post comments on their platforms, and researchers perceiving the possibility of receiving comments as one of the advantages of preprints compared to traditional publishing, no research, to the best of our knowledge, has examined the nature of comments or actors involved in preprint commenting (1,2,7). 
In this study, which originated before the COVID-19 pandemic, we aimed to conduct an exploratory analysis of type of comments left on the bioRxiv servers. Furthermore, at that time, the majority of preprints with comments only had a single public comment, and so we decided to focus exclusively on those comments. Materials and methods We conducted a cross-sectional study of bioRxiv preprints that received a single comment on the bioRxiv platform between 21 May 2015 (the earliest date available through the bioRxiv comment Application Programming Interface -API) and September 9, 2019 (study data collection date). Data collection As part of our Preprint Observatory project, we collected all available comments and related metadata using the bioRxiv comment API (8,9). Collected data included DOIs, links to the preprints, commenter username (i.e., proper name or created name), date and time the comment was posted, and the comment text. Data was stored and managed in Microsoft Excel (Microsoft, Redmond, USA) and covered 6454 comments posted for 3265 unique preprints (which represented 6% of 56,427 preprints deposited on or before 9 September 2019). However, during data analysis, we realized, that the bioRxiv comment API did not provide access to comments posted before May 2015, so the percentage of commenting is probably slightly higher, but based on the data we had, it is likely that less than 10% of all preprints received comments on the bioRxiv website). Of the 3265 preprints in our database, 1983 (60%) received only a single public comment, and we decided to focus on them in this study. We enriched the data of those 1983 comments by adding preprint authors, subject area classification, word count for comments, and published date of preprints as reported in Crossref and extracted during our Preprints Uptake and Use Project (10). Finally, we classified the commenters as authors of the preprints or non-authors, and for authors we also captured their byline order (i.e., first, last or other -defined as neither first nor last). Data analysis Comments' were inductively classified by using an iterative process of reading the text, open coding and constant comparison (11). The initial comment types were devised by an analysis conducted on a sample of 35 comments, and later expanded using a sample of 200 comments. This initial categorization revealed distinct differences in the content of comments left by authors of the preprints and those left by non-authors. Identity of the commenter We first checked whether each comment had been posted by an author of the preprint. This was done by comparing if the posted username matched any of the names of the preprint authors (and was helped by a simple full username search with any of the authors' names -the simple search detected only 301 out of our later manually detected 617 cases as usernames often contained initials or symbols that were not an exact match with the names used in the preprint author byline). If the username was a pseudonym or a lab name, we classified the commenter as a non-author. During coding, we amended our initial classification if the comments' contents provided identification of the commenter. Content analysis After grouping comments by the commenter type (author or non-author), three of us independently categorized all comments. Each comment could be classified to multiple categories. 
The only exception to this rule was if the comment was similar in structure and content to a full peer review report that is traditionally submitted as part of a journal peer review process. In those cases, we decided not to analyse the full contents of such reviews as they were often authored by multiple authors, contained multiple review reports, or included links to detailed reports posted on other websites. For all other comments, we classified the type of content they contained, but not the number of instances of each type they contained. For example, if the content type was a suggestion, we did not count the number of suggestions made in the comment, i.e., one suggestion for formatting a table, another for a figure, and an additional suggestion for expanding the literature section. The three coders held weekly meetings online after coding batches of 200 to 300 comments. These meetings allowed for comparison of categorizations, resolving differences, clarification of existing or introduction of new categories. Before each meeting, we would compare differences between the coders. If only one coder categorized a comment differently (e.g., did not mark a specific category) we re-read the comment, ruled on the found difference, and recorded the final categorization in the main database. When a single coder indicated a category the other two did not, or all coders disagreed on the categorization, the comment was marked and discussed at a weekly meeting until consensus was reached. We observed that our initial disagreement was most common for comments we categorised as suggestions or criticisms, and where tone, rather than content, dictated the categorisation (e.g., Comment 1: "Great to see more well assembled lizard genomes, but it would have been nice to cite the more recent assemblies of…"; Comment 2: The authors state in the introduction that [method] has not been yet been reported". I beg to differ… following models have been generated and published… [provides references to 3 studies]. We categorised comment 1 as suggestion, and comment 2 as criticism, based on their tone even though they both provided authors with additional references. As comments could have multiple categories, comment 1 was also classified as a praise). While methods exist for calculating inter-rater reliability for data that could be classified as belonging to multiple categories, after each weekly meeting we only stored our agreed upon classification, so we cannot reconstruct the initial disagreements to produce such rating. It was also not our goal to study the difficulty of classifying comments, but rather, using a consensus approach, to explore the different types of comments posted on bioRxiv (before the pandemic). Our final classification tree and an example comment for each category are shown in Supplementary Table 1, and all comments and our assigned categories in our project's database (8). Finally, to see if comments of preprints that received a single public comment, and that were the focus of our study, differed from first comments left for preprints which received more than one comment, we also randomly chose 200 of the latter preprints and analysed their first comments. This sub-analysis showed that all of these comments could be classified under our identified comment types. 
Statistical analysis We report absolute numbers and percentages for types of comments, and medians and interquartile ranges (IQR) for number of words per comment, number of comments per preprint and days from posting of the preprint to the comment. As number of words and days are integers, when medians or 25 th and 75 th percentiles had decimals, we rounded them to ease readability. Note on word count: As the texts of the comments were retrieved in HTML syntax, we replaced the hyperlink syntax (e.g., <a….a>) with the word LINK and counted it as only one word. When references were written out as Author et al., year, or PMID: number, those were counted by as many words as were written. Differences in number of words and time to publication between author and non-author comments were tested with Mann-Whitney test. We did not use time-to-event analysis as information for comments posted before May 2015 was not available through the API. Analysed comments came from all 27 bioRxiv subject area classifications (assigned by the authors during preprint upload, Supplementary Table 2). Even though there were slight differences in the number of comments per subject area, we chose not to explore those differences for several reasons. The sample size was too small for such an analysis, and perceived preprint impact, as well as authors' prestige, country and other factors might influence the posting of comments (and those were not available to us). Significant differences were considered for P < 0.05. All analyses were conducted using JASP version 0.12.2. (https://jasp-stats.org/). Results Between 21 May 2016 and 9 September 2019 there were 1983 bioRxiv preprints that received a single public comment on the bioRxiv website. More than two thirds of those comments were posted by non-authors (N = 1366, 69%), while the remainder were posted by the preprint's authors themselves (N = 617, 31%, Table 1). Overall, the non-author comments were longer than comments posted by the authors (Mann-Whitney test, P < 0.001), and they were posted a median of 23 days after the preprints. In comparison, authors' comments were posted after a median of 91 days (Mann-Whitney test, P < 0.001). Differences between types of comments, with regards to number of words and days between preprint and comment publication, are shown in Table 1. Twelve percent of non-author's comments (N = 168) were full review reports resembling those traditionally submitted during the journal peer review process. Table 3). The latter most commonly published their review following a journal club discussion. Comments not resembling full peer review reports most commonly praised the preprint (N = 577, 42%), made suggestions on how to improve it (N = 399, 29%), or criticized some aspect of the preprint (N = 226, 17%) ( Table 1). Praise was most commonly found alongside suggestions or comments asking for clarifications, and least commonly alongside comments that criticised the preprint, reported issues or that inquired of the preprints publications status (Supplementary Table 4). Praise words alone (e.g., "Amazing work!") constituted 6% (N = 86) of comments. Comments containing suggestions often included suggestions of literature (co-) authored by the commenter or suggestions of other literature (Supplementary Table 3). Lastly, we present some examples of the comments we classified as belonging to the "other" category (a full list of those comments is available on our project website). 
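The word-count convention and the Mann-Whitney comparison described in the Statistical analysis subsection above can be sketched as follows (the study itself used JASP; scipy is used here only as an illustrative stand-in, and the comment texts are invented placeholders).

import re
from scipy.stats import mannwhitneyu

def word_count(html_comment):
    # Replace each HTML hyperlink element with the single token LINK, then count words
    text = re.sub(r"<a\b.*?</a>", "LINK", html_comment, flags=re.DOTALL | re.IGNORECASE)
    return len(text.split())

# Placeholder comments, for illustration only
author_counts = [word_count(c) for c in ["Now published in <a href='x'>Journal</a>.",
                                         "Data and code added."]]
nonauthor_counts = [word_count(c) for c in ["Great work, but Figure 2 needs error bars.",
                                            "Please cite <a href='y'>this study</a> as well."]]

# Two-sided Mann-Whitney U test on comment lengths
stat, p = mannwhitneyu(author_counts, nonauthor_counts, alternative="two-sided")
print(stat, p)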
There were three comments that raised research integrity issues (a possible figure duplication, an undeclared conflict of interest, and use of bots to inflate paper download numbers). There were also comments that raised personal issues. In one comment a parent requested more information on a rare disease (covered by the preprint) that was affecting their children, and in another case an individual inquired about possible PhD mentors for a topic related to the preprint. There were also comments that touched upon the culture of preprinting, with one comment asking authors to include brief summaries of what had changed between preprint versions, another expressing a view that preprints make traditional publishing redundant, and one praising authors for replying to questions they asked through email. Similarly, one comment we classified as full peer review report, also included a statement of hope "to get more comments on bioRxiv…prior to submission to a peer reviewed-journal" as they would "rather have a revised pre-print than a correction / retraction" in a journal. Authors' comments most commonly contained updates about the preprint's publication status (N = 354, 57%), additional information on the study (N = 158, 26%), or solicited feedback for the pre-print (N = 65, 11%, Table 1). Of all authors' comments, most were posted by the first author of the preprint (we could not identify the byline order for four percent of comments, N = 22, as the registered username was either a pseudonym, e.g., W1ndy, or a lab name, e.g., Lewy Body Lab). A small percentage (N = 29, 5%) of author comments were replies to feedback authors received elsewhere, e.g., during peer review or through personal emails (Supplementary Table 5). Lastly, as above, we present few examples of authors' comments classified as belonging to the "other" category (with full list of those available on our project website). In five comments authors requested suggestions on where to publish their preprint, and in one comment authors mentioned that an editor saw their preprint and invited them to submit it to their journal. In one comment, an author alerted the readers of an error in a figure and also playfully chided (using a smiley emoticon) the co-author for hastily uploading the files before checking them. In another, co-authors alerted readers that the preprint had been posted without the approval of the co-authors and urged the scientific community to ignore this version (to date the preprint in question has not been retracted). Finally, in one example (of a comment classified as a publication status update), the author said they did not plan to submit the preprint to a journal, as publishing on bioRxiv makes it freely available to everyone. Discussion Our analysis of single comments left for bioRxiv preprints before September 2019 found that more than two thirds of those comments were left by non-authors and were most commonly praises, suggestions, or criticisms of the preprints. Additionally, almost a sixth of non-author comments contained detailed peer review reports akin to those traditionally submitted during the journal peer review process. Despite, to the best of our knowledge, our study being the first to analyse comments left on preprint server's website, these findings support previous studies that showed the opportunity to receive feedback was perceived as one of the benefits of preprints compared to tradi- (2,7). 
However, we also found that less than ten percent of all bioRxiv preprints received public comments before the COVID-19 pandemic. This low prevalence of scholarly public commenting has been previously observed for post-publication commenting of biomedical articles, and was the reason for discontinuing Pub-Med Commons, the National Library of Medicine's commenting resource (12). Similar low prevalence of post-publication commenting has also been found across disciplines on PubPeer (13). Nevertheless, as has been previously stated for those services, some of those comments have been crucial for scholarly debates and even led to retractions of papers, a practice also observed for bioRxiv preprints (12)(13)(14). In our study, we observed that eleven percent of authors' comments were actively inviting others to comment on their preprint, with one comment explicitly stating that they would rather make changes to the preprint than to a version published in a journal. The lack of traditional peer-review is often perceived as the biggest criticism of pre-printing, alongside cases of information misuse and posting of low-quality studies (15). Thus, bioRxiv (alongside arXiv and medRxiv) have displayed clear disclaimers for COVID-19 preprints that state preprints are "preliminary reports that have not been peer-reviewed" and they should not be "reported in media as established information" (16). Related to this criticism and the benefits of preprint commenting, there has also been a rise of specialised preprint review services (e.g., PreReview, Review Commons, Peerage of Science) or overlay journals (e.g., Rapid Reviews, JMRIx) aimed at providing expert reviews for preprints, or endorsement of preprints (e.g., Plaudit) (17)(18)(19)(20)(21)(22). Additionally, in December 2020, journal eLife announced they would only be reviewing papers that have been first posted as preprints, and that they are switching from being a publisher to "an organization that reviews and certifies papers that have already been posted" as preprints (23). On a similar note, to emphasize the possible role that commenting has in the scientific discourse, reference software Zotero can display references that have PubPeer comments, and a recently launched biomedical search engine PubliBee, implemented (up)voting of comments (24,25). Upvoting of comments is already available for several preprint servers, including bi-oRxiv, that utilize Discus as the commenting platform (26). It will be interesting to see if more journals and publishers implement similar changes, and if the focus on reviewing preprints will lead to a decrease in the practice of (double)blind review of manuscripts. Alongside posting of full peer review reports, our study also confirmed other known practices and potential benefits associated with the preprinting culture. For example, using preprints as final publication outputs, soliciting or being invited by editors to publish studies posted as preprints, calling out suspected research integrity issues, engaging in discussion or proposing collaborations, as well as publishing of peer review reports from those training on how to conduct peer review, or from journal club discussions. These findings may provide authors encouragement to consider or continue depositing preprints. Furthermore, we have shown that almost a third of the comments were left by the authors of the preprints, and their comments were mostly updates of preprints' publication status or additional information about the studies. 
Authors' comments were also in general left after a much longer period than those of the non-authors. This aligns with found median times of 166 to 182 days between posting a preprint on bioRxiv and publication of that study in a journal, which were similar to the median time of 172 days we found for comments on publication status updates (2). Despite being the first analysis of single comments posted for bioRxiv preprints, our study is not without limitations. We did not attempt to define if non-authors that posted comments were indeed peers, nor did we compare their expertise or publication records with those of the authors of the preprint on which they were commenting. We are also aware that some comments were left by patients, students and the individuals that stated a lack of expertise in the field. However, defining and soliciting feedback from a competent peer is known to be difficult, with previous studies dem- Malički M. et al. Analysis of comments left for bioRxiv preprints onstrating minimal agreements between peers assessing the same study (27,28). Furthermore, we did not attempt to define the quality of the comments, nor if the contents of comments (e.g., raised criticisms or suggestions) were indeed valid. We also did not check if comments led to changes or updates of the preprints or eventual published manuscripts, nor if the authors were even aware of them. Regarding the latter, as we analysed preprints that only had a single comment, none of the authors used the preprint platform to reply to them. We however did find that five percent of authors' comments were replies to comments or peer reviews they received elsewhere, and we did encounter an example of a non-author comment that indicated they communicated with the authors by email. The purpose of our research was not to provide external validity of the claims stated in the comments, but rather showcase, for the first time, the most common types of comments left on the platform (before the COVID-19 pandemic). Our study is also limited in that we did not analyse discourse that might occur in preprints which received multiple comments. However, we did analyse the first comments of a random sample of 200 of such preprints to confirm that they do fall within the categories analysed here. Finally, we acknowledge that our backgrounds are not in life sciences, and that this may have affected our ability to make a clear distinction between some comment types, especially in distinguishing between suggestions and criticisms. We however feel that the observed differences in the number of words between our identified comment types, as well as prevalence or praise which is more com-mon for comments that contained suggestions than criticisms, provides support for our categorization. In conclusion our study indicates that bioRxiv commenting platform appears to have potential benefits for both the public and the scholarly community. Further research could measure the direct impact of these comments on comments made by journal peer reviewers, later preprint versions or journal publications, as well as the feasibility and sustainability of maintaining and moderating commenting sections of bioRxiv or other preprint servers. Finally, we believe that user-friendly integration of comments from server platforms and those posted on social media (e.g., Twitter) and specialized review platforms would be beneficial for a wide variety of stakeholders, including the public, authors, commenters, and researchers interested in analysis of comments. 
Acknowledgments This project was conceived and inspired during the collaboration of ASAPbio and ScholCommLab on the Preprints Uptake and Use Project. Our preliminary analysis was presented at the 2nd International Conference on Peer Review. Elsevier funding was awarded to Stanford University for a METRICS postdoc position that supported MM's work on the project. We would like to thank, in alphabetical order, Cathrine Axfors, Lex M. Bouter, Noah Haber, Ana Jerončić, and Gerben ter Riet for their comments on the draft of our manuscript.
2021-04-30T05:15:03.324Z
2021-04-15T00:00:00.000
{ "year": 2021, "sha1": "34232ff5586fddc697230d384d9938f63bc535f2", "oa_license": "CCBY", "oa_url": "https://www.biochemia-medica.com/assets/images/upload/xml_tif/BM31_2_020201.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "34232ff5586fddc697230d384d9938f63bc535f2", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
181413928
pes2o/s2orc
v3-fos-license
Public Transparency in Brazil and the Open Government Partnership – OGP The present work presents some transformations that have occurred in Brazil in recent years in the public transparency aspect, mainly in view of the legal structure after the signing of the Open Government Partnership (OGP) treaty. In summary, it exposes beyond the legal apparatus, the growth of the portal of public transparency and the classification of the country regarding the opening of public information, theme of item 2 Transparency and Open Government. Contemporary literature has addressed the potential of the Internet to broaden the possibilities of bringing together the State and society by providing information on legislative activities and governmental procedures, contributing to a greater possibility for citizens to know and make use of their rights and how to act more actively as a social actor. With the use of Information and Communication Technology, especially after the various legal aspects adopted by Brazil, this public participation has expanded and gained more relevance. Briefly, the opening of public information increases the level of confidence of the administrations and enables the wide inspection of citizens, legitimizing and strengthening the presence of the state and in accordance with constitutional principles. Introduction The access to information is recognized as a fundamental right by various international organizations, among them the United Nations and the Organization of American States. The Universal Declaration of Human Rights, Article 19, establishes that every human being has the right to freedom of opinion and expression, and may have opinions to seek, receive and impart information and ideas by any media and regardless of frontiers (UN, 1993, p.10). The Federal Constitution of Brazil presents as a guarantee the possibility of access to public information under the State's custody. It states that, except in exceptional cases, such data must be public. Advertising is one of the principles to be obeyed by the Public Administration, alongside legality, impersonality, morality, and efficiency. (BRAZIL, 1988). Jacobi and Pinho (2006) point out that, following the promulgation of the 1988 Constitution, the pressures of a more active and organized civil society have increased. Thus, new public spaces and interaction were created, broadening the dialogue with society. In this way, from that period, the concept of transparency within the state body began to consolidate, with greater intensity. Given this context, the present work intends to indicate legal developments around the Brazilian Public Administration after signing the agreement with the OGP, with a view to transparency as an instrument for the promotion and improvement of public management, as well as to expose the Brazilian position in terms of transparency at the global level. The research was carried out using the deductive method and the technique used was the indirect documentary, of which bibliography was pertinent to the subject, as well as portals that deal with transparency at a global level. The paper first addresses the perspective of open government, following the legal aspects of Brazil in this area, an analysis of the federal transparency portal, the benefits offered by public transparency and, finally, the conclusions. 
Publicity and Transparency as Requisites of Open Government The availability of the information generated by the Public Administration to the scrutiny of society is fundamental to increase transparency and favour social control. In addition to improving governance and public awareness of state programs and activities, increased transparency of data provides the basis for a better understanding of them for participation and collaboration in the generation of innovative and higher value-added services (UBALDI, 2013). In general, open government has three central ideas: 1. Transparency promotes social control 2. Citizen participation enhances government effectiveness and decision-making 3. The actions carried out by the government are better incorporated with the cooperation of the citizens. The government's opening in these ways has as its core idea the concept of virtual democracy, characterized by the use of Information Technology as a platform for Interlocution between citizens and the State, compelling Public Administration to innovate to serve the client-citizen, using the principles. Following this idea of innovation, using public knowledge to improve the quality of public policies, Hilgers and Piller (2011) show that the execution of these actions of innovation and opening of Public Administration is not immediate. According to them, the innovation process with public participation takes place in three phases. In the first stage, open innovation invokes first and foremost transparency. The Public Administration must act actively by publishing all its political and administrative architecture at all levels of federal, state and municipal administration. In addition to this, one has the permanent requirement of the management of its processes in an efficient and transparent way. Thus, a perception of the citizen's initial participation in Public Administration begins. In the second stage, with the idea of more consolidated transparency, public participation begins to gain visibility. In this sense, with the participation of the citizen, more direct and transparent relationships are institutionalized (EVANS, 2003). During this period, a more mature dialogue takes place between the State and the taxpayer, increasing the legitimacy of the Government, establishing a new model of democracy. The open data system presented to the client-citizen is established on an innovative solutions platform. This model has as a background the internet and establishes itself with the popular participation in public management, provoking qualitative changes in the State / civil society relation. (JACOBI AND PINHO, 2006). In the third phase, participation takes place in a collaborative and interactive way. Thus, in addition to technocratic e-government administrative reforms, it is important to implement the model in which public institutions cross internal borders, establishing relations with other public agencies, companies, and the citizen. In this way, public value is not created exclusively by the government, but by the joint collaboration of those involved, through the collective capacity to learn, propose changes and adapt. In this scenario, the collaborative platform of web 2.0 acts as a great inducer of these principles. With this, there would be an effective capacity for accountability and interlocution between the Public Administration and the citizen, as well as the provision of information with real value for those involved. 
Hilgers and Piller (2011) explain that: Certain procedures in the administrative system can be designed much more effectively in terms of an open collaboration process. In addition to the technocratic reforms of electronic government, a major issue in the administrative reforms of these days is to strengthen intra-administrative cooperation, on the one hand, but also with organizations beyond administrative boundaries, such as other public agencies, companies, networks, but also citizenship (Hilgers and Piller, 2011, p.5, free translation). At the root of this technological revolution, there is a growing demand for public policies of higher quality and impact, given the state's increasingly scarce resources, as well as the population's demand for a better application of resources. In this logic, public power establishes a two-way street with social actors in the discussion and improvement of public policies. In this context, Marinez Navarro (2015) reports that: Collaborative participation affects innovation in Public Management at the moment it opens the possibility of dialogue, communication, legitimacy and trust, calling on the different actors of society to work together, recognizing that citizens have proposals that can be used to solve problems of interest of the collectivity (Marinez Navarro, 2015, p. 99, free translation). The Open Government Partnership (OGP) has been a great inducement in changing the relationship between the State and the Citizen. It is associated with a broad range of goals and functions that includes public participation, open data, improved public services, and government efficiency. It works by disseminating and globally encouraging government practices related to government transparency, access to public information and social participation. (BRAZIL, 2017). In order to be part of this agreement, it is necessary that the participating countries make several commitments regarding transparency and access to public data. Marinez Navarro (2015) argues that the transparency of information is directly linked to the improvement of social control mechanisms. By supplying the public information society, the state allows scrutiny of the allocation of public resources, something unimaginable in societies of which information asymmetry is preponderant. By this mechanism, it is not possible to assert that the public resources will be better allocated from the information sharing, nevertheless, it is consensual that the more complete the information made available, the greater the possibilities of social control. The following will address the legal aspects of Brazil in the area of public transparency, following the agreement of the OGP. Legal aspects of Open Government in Brazil According to Brazil (2017), one of the main initiatives by the government to implement the Brazilian Open Government Action Plan was the promulgation of the Law on Access to Information (Law 12.527/2011). Its main objective was to consolidate the agreements signed by Brazil to open the public information. The right to information must be guaranteed by the public authority. This law covers all public entities of the three Powers (Executive, Legislative and Judicial), all levels of government (federal, state, district and municipal), municipalities, public foundations, public companies, mixedcapital companies and other entities directly or indirectly controlled by the Federal Government, States, Federal District and Municipalities. 
Law 12.527 regulates a right provided for in the Constitution, from which all have the prerogative to receive information about their personal and collective interest from public agencies. Thus, the Administration performs its obligation when it discloses its actions and services, as well as to receive specific demands. Since then, several legal mechanisms have been carried out, in agreement with the OGP, with the purpose of achieving a more open, participatory, democratic administration and having as a platform the intense use of Information and Communication Technology. Briefly, the following stand out: In order to comply with a request for access to public information, it requires a method to process the request and ensure that the data is delivered to the applicant. As an important tool for citizen service, we have Information and Communication Technology (ICT).In this way, the use of one of the main tools for accessing public data in Brazil will be examined: the transparency portal. The Public Transparency Portal Considered as one of the main mechanisms of public transparency in Brazil, the transparency portal presents itself as a social control tool for citizens, as well as its utility in Public Administration. The portal allows to obtain information on the transfer of resources, direct expenditures of the federal government, revenues -planned and realized -agreements, sanctioned companies, information on the functional status of the public servants, projects and actions within the Federal Executive Branch, other items. It was created in 2004 and gradually expanded the information available, until in 2011, with the enactment of Law 12.527 (Public Information Act), its scope expanded, including the functional information of federal public servants and their respective remunerations. It is important to note that the transparency portal can be reached from different domains on the Internet, among them: transparencia.gov.br; portaldatransparencia.gov.br and portaltransparencia.gov.br. In addition, a myriad of other websites has a link to the site, since it is the source of consultation, including for public agencies. Such centrality contributes to giving greater visibility to it and highlights its importance as a tool for budget management. Figure2 -Interconnection with the Transparency Portal Source: Adapted from Alexa Portal Since 2012, when the law of information access was enacted, Brazil's Transparency Budget Indexes showed a slight improvement, reaching its peak in 2017. With a greater culture of social control of public expenditures and citizen's knowledge about the mechanisms of supervision, the better the possibilities to increase the effectiveness of public policies since the citizen starts to question more intensely the governmental decisions. Figure3 -Budget Transparency Index in Brazil Source: International Budget Partnership: http://bit.ly/2BIDoe7 Based on criteria established internationally by the Open Budget Survey 2017, developed by the International Budget Partnership -which uses 109 weighted indicators to gauge budgetary transparency, the country has the grade 77. For each participating country a score of 0 to 100, of which the higher the grade, the better the classification in the area of budget transparency. With this evaluation, the country is in 7th place in the international ranking, behind New Zealand, South Africa, Sweden, Norway, Georgia and Mexico. 
Graphic2 -Public Participation Index in South America 2017 Source: Adapted from International Budget Partnership: http://bit.ly/2BIDoe7 The Brazilian public participation is up in the ranking of the countries of South America. Worldwide, it is above the average that is 12/100, while the country has an evaluation of 35/100. In this sense, one of the factors that favoured the country's performance was the legislation that, since 2009, through supplementary law 131 of 2009, determines the availability, in real time, of detailed information on budget execution of the union, states and municipalities. In addition, the Brazilian Transparency Portal launched in 2004operated as a mechanism for accountability of the public administration, in the matter of budget disclosure, encouraging social control. Benefits Ubaldi (2013) recalls that several studies have demonstrated the benefits of the Law on Access to Public Information and Transparency. According to him, such a mechanism improves the decision-making process, improves society's understanding of the functioning of government and increases public participation in the management of public policies. That is, potential benefits provided by the opening of public information are not only monetary and economic, they are mainly social and improvement of public governance. Are they: Figure 4 -Transparency Benefit Cycle Source: Adapted from Ubaldi (2013) a) Improving government accountability, transparency, responsiveness and democratic controlmaking existing information easier to analyze and process allows for a broader level of public scrutiny, which can increase public confidence and improve the responsiveness of government actions. b) It promotes citizens' self-empowerment, social participation and involvement with public policies -a second assumption is that open government allows individuals to increase participation in public affairs. Electronic participation is usually part of a government policy aimed at harnessing the use of IT for openness, transparency and collaboration within the public sector, but also to increase citizen involvement in public life and policymaking. c) Assists in the development of trained civil servants -as important as empowering citizens are the training of the public sector workforce. Open government data demands that public officials participate directly in this guideline to ensure that public administration is open and participatory, and thus meet the needs of users. d) It promotes innovation, efficiency and effectiveness in government services -with the use of open data and the increase of ICT in the public service, the provision of public services is made faster, less paperwork and lower transition costs. In addition, services are improved as people find and claim their rights. e) Creates value for the economy in general -from this perspective, when public information is provided to the public free of charge, private companies use this information to create higher value-added products that can be marketed. Mariñez Navarro (2015) points out that greater transparency is not an automatic mechanism of greater responsibility. In addition, a government can be open, in the sense of being transparent even if it does not adopt new technologies, and a government can provide open data and remain deeply opaque and inexplicable. From this perspective, constant monitoring of the social agents is required, not only requiring the opening of public administration. It is essential that the government be efficient and effective. 
The first in the sense that, with scarce public resources, it is fundamental to allocate them in tools that meet in the shortest possible time, minimizing costs. The other with the idea that the main purpose is to materialize: a transparent public administration and that communicates with the citizen. Conclusion The study identified the legal developments that have taken place in Brazil in recent years, and the stimulus that the country has been receiving in order to have greater transparency in public administration, especially after the signing of the open government treaty (OGP). According to Ubaldi (2013), very few countries had adopted mechanisms for opening up public information, such as Information Access Laws (LAIs). Regarding the Brazilian context, one of the factors that were decisive for the publication of LAI in 2011, which represents a milestone in the quest for greater transparency in Public Management, was adherence to the aforementioned international agreement. Then, in the same direction, decrees number 8.638/2016, decree 8.777/2016 and 8.789/2016 were published with the purpose of increasing access to public information, using Information and Communication Technology. All these legal provisions are in harmony with the principles proposed by the treaty: Transparency, Citizen Participation, Accountability, Technology and Innovation. One of the main tools used for this process is the electronic portal administered by the Federal Controller's Office (CGU), which since its creation in 2004 had an exponential increase in the number of accesses. This growth was complemented by an increase in the number of Internet users in Brazil and knowledge of the functions of the portal, which resulted in an improvement in the indicators of the country's budget transparency worldwide. In this aspect, it has leadership in Latin America and is in 7th place in the world, according to the International Budget Partnership. Even so, there is much to be done regarding transparency in Brazil. It is not enough to just make the information available on the World Wide Web and create legal support for it. It is necessary to create a culture of population access and awareness of content, as well as broadening access to the internet and digital culture, especially to those economically disadvantaged. Thus, one of the main considerations of the present study is how to extend the digital culture of access to public services and information to the economically disadvantaged.
2019-06-07T23:16:11.982Z
2019-05-07T00:00:00.000
{ "year": 2019, "sha1": "105734ebb2765010669f3ec202324b6432be9e59", "oa_license": "CCBYNC", "oa_url": "https://thescholedge.org/index.php/sijmas/article/download/518/516", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "6a26d7a15bf555c8f7cd7b8923023097ac0cca8e", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Political Science" ] }
207942752
pes2o/s2orc
v3-fos-license
High-throughput sequencing data and antibiotic resistance mechanisms of soil microbial communities in non-irrigated and irrigated soils with raw sewage in African cities High-throughput sequencing data of soil microbial communities in non-irrigated and irrigated soils with raw sewage in African cities are presented in this report. These data were collected to study the potential of wastewater use in urban agriculture to disseminate bacterial resistance in soil. Soil samples were collected in three cities in two African countries. Each city had two sectors (irrigated and non-irrigated). After collection, biomass samples were purified, DNA from soil was extracted, quantified and sequenced using multiplex Illumina high-throughput sequencing. The sequence count of the six metagenome datasets ranges from 3,258,523,350 bp to 4,120,454,250 bp; the mean sequence length post quality control was 149 ± 3 bp. The mechanisms of resistance encoded by the identified antibiotic resistance genes (ARGs) in the metagenomic data were dominated by antibiotic inactivation enzymes (64.7% and 71.9%), followed by antibiotic target replacement (14.7% and 12.5%), antibiotic target protection (11.8% and 9.4%) and efflux pumps (6.3% and 8.8%) in bacterial DNA isolated from irrigated and non-irrigated fields, respectively. The datasets will be useful for the scientific community working in the area of bacterial resistance dissemination from the environment. They can be used for further understanding of bacterial drug-resistance gene prevalence and acquisition in wastewater irrigated soils. The data reported herein was used for the article, titled “Raw wastewater irrigation for urban agriculture in three African cities increases the abundance of transferable antibiotic resistance genes in soil, including those encoding Extended spectrum β-lactamase (ESBLs)” Bougnom et al. (2020) [1]. © 2019 The Authors. Published by Elsevier Inc. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). Value of the Data The data provides insight into the microbial diversity and functional changes after raw sewage irrigation. The data will be useful for the scientific community working in the area of bacterial drug-resistance gene dissemination in the environment. The data can be used for further understanding of bacterial drug-resistance acquisition in wastewater irrigated soils. Thus, assessing the public health issue of urban agriculture in low- and middle-income countries. Data In the present work, we report DNA sequence read metrics of six metagenomic samples from soil obtained from non-irrigated fields (NIR) and their corresponding irrigated fields (IRI) with raw sewage in three cities (Table 1), in two African countries (Fig. 1) [1]. The sequence counts of the metagenome datasets post quality control (QC) ranged from 3,309,468,880 bp to 3,649,105,747 bp and 3,159,665,932 bp to 3,682,552,830 bp in irrigated and non-irrigated fields, respectively.
The mean GC content post QC ranged from 60 ± 12% to 65 ± 10% and 62 ± 12% to 66 ± 9% in irrigated and non-irrigated fields, respectively. The mean sequence length post QC was 149 ± 3 bp. The mechanisms of drug-resistance encoded by the identified antibiotic resistance genes (ARGs) in the metagenome data were dominated by antibiotic inactivation enzymes (64.7% and 71.9%), followed by antibiotic target replacement (14.7% and 12.5%), antibiotic target protection (11.8% and 9.4%) and efflux pumps (6.3% and 8.8%) in irrigated and non-irrigated fields, respectively (Fig. 2). The number of ARGs encoding drug-resistance due to antibiotic inactivation enzymes was 6% lower in non-irrigated fields, whereas those encoding the other mechanisms of resistance were 2% higher in irrigated fields. Experimental design, materials, and methods Soil samples were collected in three cities, in two African countries, namely Ouagadougou (46 38 0 N, 11 29 0 ) in Burkina Faso, Ngaoundere (46 38 0 N, 11 29 0 ) and Yaounde (46 38 0 N, 11 29 0 ) in Cameroon (Fig. 1). In each city, there were two sectors comprising three agricultural fields that were irrigated (IRI) with raw wastewater, and as control soils, 500 m away, three non-irrigated agricultural fields (NIR) with comparable soil properties. This gave samples from Ouagadougou (IRI1 and NIR1), Ngaoundere (IRI2 and NIR2), and Yaounde (IRI3 and NIR3). Wastewaters were collected from canals. The canals are natural open-air water drainage canals and collection points of different transects. They receive wastewater from habitations, hospitals, agriculture, markets and slaughterhouses. Salad and tomatoes were the growing plants in the fields. The agricultural fields were approximately 0.2 ha each and watered manually twice per day with watering cans. In each field, 100 g of soil was randomly sampled at 10 different places from 0–20 cm depth, using soil cores. Replicate samples were pooled together, giving 1-kg composite samples. The samples were transported on ice and stored at −80 °C until further analysis. To collect the bacterial cells from the different soils, soil biomass purification was conducted according to Sentchilo et al. (2013) [2]. Briefly, 15 g soil samples were homogenized by magnetic stirring for 15 min in ice-cold poly (beta-amino) esters (PBAE) buffer (PBAE buffer is 10 mM Na-phosphate, 10 mM ascorbate, 5 mM EDTA, pH 7.0), at 10 mL per gram of soil. Low-speed centrifugation in 50-mL conical tubes at 160 g for 6 min was used to remove coarse particles, big eukaryotic cells and bacterial flocs. The collected supernatants were centrifuged at 10,000 g for 5 min to pellet the microbial biomass for further analysis. Soil DNA was extracted using the DNeasy PowerSoil Kit (Qiagen, Germany) according to the manufacturer's instructions. DNA concentration was determined using the Quant-iT PicoGreen dsDNA Assay Kit and the Qubit™ 3.0 Fluorometer (Qubit, Life Technologies, USA). The three DNA samples extracted from each block were pooled together in equal nanogram quantities. Six DNA samples representative of the three cities were sent to Edinburgh Genomics for high-throughput sequencing.
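As a purely illustrative aside on the equal-nanogram pooling step described above (the concentrations below are invented placeholders, not measured values):

def pooling_volumes(concentrations_ng_per_ul, target_ng_each=100.0):
    # Volume (microlitres) of each extract needed so every sample contributes the same DNA mass
    return [target_ng_each / c for c in concentrations_ng_per_ul]

# Hypothetical Qubit readings (ng/uL) for the three field extracts of one sector
print(pooling_volumes([22.4, 35.1, 18.0], target_ng_each=100.0))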
ShortBRED [3] profiles protein family abundance in metagenomes in two steps: (i) ShortBRED-Identify isolates representative peptide sequences (markers) for the protein families, and (ii) ShortBRED-Quantify maps metagenomic reads against these markers to determine the relative abundance of their corresponding families based on reads per kilobase per million reads (RPKM). Hits with a minimum identity of 95% and a minimum fragment length of 30 amino acids were considered positive. ARGs were identified with the Comprehensive Antibiotic Resistance Database (CARD) (McArthur et al., 2013) [4]. ARG markers were generated using the comprehensive and non-redundant UniProt reference clusters UniRef50 as a reference protein database. Antibiotic Resistance Ontology (ARO) numbers in CARD were used to aggregate, annotate and associate the ARGs with the corresponding resistance family.
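A minimal sketch of the RPKM normalisation and the aggregation of ARG hits by resistance mechanism described above (gene names, read counts, marker lengths and library size are illustrative placeholders; this is not the ShortBRED or CARD code itself):

from collections import defaultdict

def rpkm(read_count, marker_length_bp, total_reads):
    # Reads per kilobase of marker per million sequenced reads
    return read_count / (marker_length_bp / 1_000) / (total_reads / 1_000_000)

# Hypothetical ARG marker hits: (ARG name, resistance mechanism, mapped reads, marker length in bp)
hits = [
    ("blaTEM-1", "antibiotic inactivation", 420, 861),
    ("tetM", "antibiotic target protection", 95, 1920),
    ("sul1", "antibiotic target replacement", 130, 840),
    ("acrB", "efflux pump", 60, 3150),
]
total_reads = 25_000_000  # placeholder library size

by_mechanism = defaultdict(float)
for name, mechanism, reads, length in hits:
    by_mechanism[mechanism] += rpkm(reads, length, total_reads)

total = sum(by_mechanism.values())
for mechanism, value in sorted(by_mechanism.items(), key=lambda kv: -kv[1]):
    print(f"{mechanism}: {value:.2f} RPKM ({100 * value / total:.1f}%)")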
An Agent-Based Decision Support Platform for Additive Manufacturing Applications : The e ff ective estimation and consideration of process cost, time, and quality for additive manufacturing operations, when a series of suitable technologies and resources are available, is very important for making informed product design and development decisions. The main objective of this paper is to propose the design, deployment, and use of an agent-based decision support platform, which is capable of proposing alternative additive manufacturing resources and process configurations to design engineers while reducing the number of communication steps among engineering teams and organizations. Di ff erent computer-aided systems are utilised and interfaced for automating the information exchange as well as for accelerating the overall product development process. Introduction Additive Manufacturing (AM) is characterized by the layer-wise production of a part as opposed to conventional manufacturing (CM) of subtractive or forming methods, such as computer numerical control (CNC) milling and injection moulding [1,2]. Additive manufacturing processes can allow for increased design freedom to produce functional freeform products with high geometric complexity, previously not possible with CM, and are further enabled by advances in design for additive manufacturing (DfAM) [3]. AM is also an attractive option as it can provide significant cost savings compared to CM, due to the abatement of tooling and the reduced material usage [4]. AM can complement CM when used to quickly produce injection moulds and tools with conformal cooling for enhanced performance [5]. Further to this, the on-demand production of spare parts using AM enhances supply chain management by lowering warehousing and transport costs as well as by reducing lead times [6][7][8][9][10]. Designs optimised for AM using topological optimisation have significantly reduced weights, leading to substantial reductions in lifecycle fuel costs and emissions, which is of particular importance to aircraft parts [11]. AM parts see a diverse range of applications in aerospace, automotive, defence, medical implants, as well as in toys, and jewellery industries [12]. Various AM technologies are available for producing parts, including stereolithography, powder bed fusion, and directed energy deposition, among others [1,2]. Each AM technology has its own associated material preparation requirements, material phases, and workable materials, from polymers to metals, composites and ceramics, and even biological materials [13]. Cost, quality, and time performance achieved in the production of AM parts are important considerations for the product development process [3] and vary depending on the technologies and the resources used. Indeed, two different machines using the same AM technology can produce parts of different mechanical properties and performance in terms of quality, cost, and process time, depending on a series of factors. These factors include the type of material used as well as process parameters, such as layer table of suitable AM processes and materials for the build. Limitations of this DSS are the omission of consideration of cost and build speed parameters for process time estimation as well as the lack of STL file reading functionality, which would reduce any potential errors in the process dependent selection phase, where many part geometry parameters are input instead. 
Byun and Lee [29] used a modified TOPSIS method for analysing qualitative and quantitative data, using factors such as accuracy, roughness, strength, elongation, part cost and build time for ranking alternative AM processes in a pairwise comparison matrix. Borille and de Oliveira Gomes [27] used an analytic hierarchy process (AHP) and multiplicative analytic hierarchy process (MAHP) for comparing and ranking appropriate AM processes with consideration to cost, build time, accuracy, roughness, tensile strength, and elongation. Braglia and Petroni [28] used AHP for AM process selection based on cost, time, size, complexity, and surface texture. Zhang et al. [30] used a Multi-Attribute Decision-Making (MADM) approach, taking advantage of a knowledge value measuring method for making decisions involving production cost, time and quality, using a case study on AM system parameters from Rao and Padmanabhan [33]. These system parameters included mechanical quality properties, such as accuracy, surface roughness, tensile strength, and elongation, as well as part cost and build time. Meisel et al. [31] developed a DSS framework for the selection of suitable AM processes in remote or austere environments. Bikas et al. [32] presented a framework for facilitating process selection in AM. This framework included the evaluation of AM technologies, AM process selection, assessment of technical feasibility of AM processes, evaluation of the design, and finally process planning for hybrid manufacturing to reduce costs by combining CM and AM. Watson and Taminger developed a decision support framework for selecting AM vs CM methods, such as CNC milling, based on energy consumption indicators [34]. This framework accounted for the entire manufacturing lifecycle energy consumption required for both AM and CM and determined that there lies a critical value for the fraction of the bounding envelope that contains material whereby the energy consumption for AM and CM is equivalent. For volume fractions below this critical value, AM is more efficient and above this, CM is more efficient. Wang et al. [35] developed a DSS for AM process selection using a hybrid MADM with TOPSIS for ranking possible solutions. This DSS accounted for various performance parameters, including tensile strength, dimensional accuracy, surface finish, and material cost. A comprehensive review of the methods currently used for AM process selection was conducted by Wang et al. [36]. This review could support DfAM using knowledge-based DSS on top of examining and evaluating user preferences and AM process performance. Zhang et al. [37] developed a build orientation optimisation method for multi-part production in AM. They used a feature-based method to constrain the number of possible orientations for each part within a group of parts in a single build, which would ensure build quality was not compromised. Following this, a genetic algorithm was used for optimising the decision index of an integrated MADM model for each alternative orientation in order to minimize cost. Agent-Based Decision Support Systems and Cloud Manufacturing A software agent in the manufacturing domain is typically a computer programme that may act on behalf of an engineer or operator. Agent-based decision support systems (ABDSS) are typically autonomous computer or software systems, which communicate with the environment on behalf of a designer, an engineer or an operator to achieve a predefined goal [38]. 
Similar platforms have been developed previously for a plethora of applications. For example, an agent-based system (ABSTUR) capable of picking an optimal route to avoid overcrowded and non-profitable tourist routes was devised. The agents in ABSTUR represented the simulator, different categories of tourists and the route manager [39]. ABDSS can be adaptive and intelligent, tailoring their behaviour to environmental changes and can apply a fixed set of rules to enable reasoning, learning, and planning functionality. Multi-agent systems are a group of agents, which work baring similarity to a community of human workers, collaborating with predefined roles, towards a particular goal through effective communication and reasoning [38]. They enable decentralized problem solving [40] and can combine machine learning, simulation, and multi-criteria decision-making features. ABDSS have been used in areas, such as engineering design [41], process planning [42], production planning and resource allocation [43], production scheduling and control [44], process control monitoring and diagnosis [45], enterprise organisation and integration [46], networked production [47], assembly, and life-cycle management [48,49]. A Multi-Agent Systems (MAS) architecture was developed by Legien et al. [21] for supporting technology recommendations, specifically related to material choice support for casting with cost estimations. In the field of DfAM, Dhokia et al. proposed an agent-based generative design tool [50]. In the area of process planning, a number of multi-agent systems and platforms have been proposed for machining prismatic parts utilising STEP-NC [51,52]. MAS for modelling AM processes have also been proposed, without however elaborating on how different AM technologies and equipment may be utilised as a part of the same platform [53]. Recent research works have suggested the integration of MAS in cloud manufacturing platforms for AM [54]. Cloud manufacturing is a paradigm, which is enabled by cloud computing technologies as well as by the integration of networked manufacturing systems, including networked manufacturing, virtual manufacturing, internet of things (IoT), and agile manufacturing technologies [55,56]. In a previous work of this paper's authors, the use of blockchain technology principles was introduced for securely managing AM product development data with a MAS, mainly for storing product development information in a secure way [57]. In this paper, a novel MAS approach is presented for facilitating and accelerating the process of designing and manufacturing a product utilising AM technologies, by integrating CAD, CAM tools, and MES data, and by automatically selecting the most suitable AM process configuration, equipment, and service provider. Comparison with Other AM Platforms Several different AM platforms in the market offer manufacturing execution or AM production capabilities as a service. Manufacturing service providers such as Materialise, Stratasys, Formlabs, Shapeways, Protiq, 3D-Hubs, and others provide similar manufacturing options at different capacities. These companies usually have access to proprietary machines and software. These platforms come with their own features and require a significant amount of time for engineers and designers to understand how they work and how they may be adapted to their own needs. 
When reviewing the services provided by the likes of Materialise, Protiq, Shapeways (Figure 1), or others, it is to be noted that what is offered to users is more of a marketplace, where the user defines the requirements to print parts and then places an order for having those parts printed. These platforms typically are also capable of providing a delivery date. However, being web-based services, the downside with such services is that there is no direct integration with CAD software environments. As the designer must exit the design environment and move to a different software application, which requires the designer to upload the file in a specific CAD format, experimentation with a different version of the same design is more error-prone and time-consuming. At the same time, whenever a user wishes to check what the consequences will be in case the quality parameters (such as finishing quality) are changed, the same process will have to be followed again. Additionally, the user typically receives no feedback about the process parameters, such as layer thickness, or the AM machine that will be used for printing the part. In case there are strict requirements regarding layer thickness, density or even the machine to be used, there is usually no straightforward way to input these requirements in these platforms. Furthermore, the cost and time estimations received by the user are typically based on part volume and support information and not on the output of a CAM/slicing software tool, which may mean that cost and time estimations are not as accurate as they could be. Features of these platforms include:
• Automated generation of support structures (not user viewable);
• Repair of the STL model (not user viewable);
• Selection of process parameters (not user viewable);
• Quality selection (the user has limited options);
• They are oriented towards enterprise or early-adoption customers that have a finished design and need a prototype;
• Material selection is limited; they typically use medium to high-end materials.
Furthermore, there are other software packages like Netfabb, Repetier (Figure 2), or Simplify3D, that can cater to a wide range of 3D printing systems, or build platforms by importing machine/system settings directly from their website repository or by specifically feeding in different parameters for specific platforms and materials within the software to create profiles or printer configurations. Such software provides more flexibility to the designer by allowing for more customization options down to minute details like support density, layer thickness, infill density, material options, extrusion options, and many others depending on the AM technology. The use of this software, however, is not an automated procedure since the designer is required to have intricate knowledge of the slicing/CAM tool software.
Specific versions of software packages like Netfabb or Repetier are free to use, whereas Simplify3D is a commercial software tool, providing the flexibility to import printers and configuration files through their website with a proprietary slicing engine. The list of features of these platforms includes:
• They are highly flexible, and therefore the setup with host and server is complex.
• The capability of repairing defective STL meshes.
• Control of layer thickness, material, density, and other process parameters is allowed.
• Custom generation of support structures, patterns, and infill density for supports is allowed.
• Custom slicers can be imported, for example, newer versions of the slicing engines from Prusa, Cura, Netfabb, or others. This is a feature that is not typically addressed to average users, but it is still useful when special design and production requirements need to be considered.
In this paper a different approach is presented, whereby the selection of AM part process configuration options is invoked directly within a CAD environment. The proposed approach would allow for the accurate simulation of AM processes utilising different process configurations. The proposed software platform is designed to extract the required part information automatically from the CAD model design. The platform is capable of testing different process configurations on a diverse range of AM machines and then allows the consideration of multiple criteria, related to cost, time and to parameters affecting quality, for selecting a suitable AM machine and process configuration. The proposed platform is capable of considering custom machine profiles that have been developed by AM service providers over time, based on their experience, diverse material profiles, as well as vendors from different locations. Since the platform can be directly integrated into a CAD environment, the designer may experiment with different design versions or part features without exiting the CAD system.

Scope

Designers would welcome receiving feedback regarding the impact that design features, typically modelled in CAD software, have on the production process, since they otherwise receive no information on the printed product cost or on its delivery. A series of part characteristics need to be considered, including their geometry and technical specifications. The first features that need to be checked, before a specific AM process is considered for producing a part, are its external dimensions and the minimum wall thickness. In particular, each part will have a maximum x, y, and z dimension, which will dictate the appropriate build volume requirement for that part.
This is of importance to AM machine selection, as each machine will have a maximum build envelope and recommended minimum wall thickness. Designers are also typically not aware of the capabilities and performance characteristics of diverse AM equipment and technologies. They usually do not have enough knowledge about the different process configurations of each available machine that are suitable for the part designs. Furthermore, they do not have access to information pertaining to the machines' availability as well as to their cost, time performance capabilities and to their process parameters options. Although many different AM technologies and equipment are available from a number of third-party companies or cooperating suppliers, there is no way of comparing the high number of the resulting process configurations, comprising of different materials, specifications, and supplier/machine options, other than obtaining this information by receiving bids directly from the available suppliers. However, this is a time-consuming process on which product or part designers have little or no control at all. Approach In this paper, an agent-based approach is proposed, whereby different agents undertake the role of representing diverse stakeholders and resources in the process of designing and manufacturing a product utilising AM technology. Design Agents, in the form of desktop software applications, represent product design engineers and their role is to automate the process of identifying and then submitting the technical specifications of a product or part, while Machine Agents represent the process engineers who would analyse the part geometry and technical specifications and would then return to the designer with information about the process cost and time performance as well as about the process configuration itself. Both Design and Machine Agents are in essence software modules that may potentially replace human operators in a series of activities related to the negotiation and handling of 3D printing orders as well as to the selection of a suitable machine and process configuration. The Design Agent may, in the end, select the AM service provider (represented by a Machine Software Agent) who would make the best offer, considering cost, time, and process criteria. The proposed approach allows Machine Agents to communicate with specific CAM systems that correspond to each 3D printer they represent. These CAM systems are typically associated with a number of process profiles that contain special parameters values for different combinations of materials, process accuracy, part density, and support options. In this paper, the Fused Deposition Modelling (FDM) AM processes have been modelled, but other AM technologies, such as Powder Bed Fusion or Direct Energy Deposition, could also be modelled, taking advantage of the same platform and design principles. Each Machine Agent may choose from a multitude of process configuration profiles. Specifically, each AM process profile associated with a specific AM machine and its CAM software contains the following information: • Material to be used, including its properties, such as density, cost per weight unit, chamber, or bed temperature; • Support options, such as support profiles, support on build plate or x-y basis, and parameters per support option; • Part density; • AM layer thickness, which is usually directly related to part quality (finish quality, tensile strength, impact strength, and hardness) [58,59]. 
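As a minimal sketch of how one such profile could be represented, the record below groups the fields listed above into a single structure; the class, field names and example values are hypothetical and are not taken from the platform's actual Java implementation.

```python
from dataclasses import dataclass

@dataclass
class ProcessProfile:
    """One AM process configuration profile tied to a specific machine/CAM pair (illustrative only)."""
    material: str              # e.g. "PLA" or "ABS"
    material_cost_per_kg: float
    material_density: float    # g/cm^3
    bed_temperature_c: float
    support_option: str        # e.g. "none", "build plate only", "everywhere"
    infill_density_pct: int    # part density, e.g. 20, 50 or 70
    layer_thickness_mm: float  # closely tied to finish quality

# Example: one of the many profiles a Machine Agent might expose to its CAM tool
profile = ProcessProfile("PLA", 22.0, 1.24, 60.0, "build plate only", 50, 0.15)
```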
Each of these profiles corresponds to a specific combination of material, support option, part density, and quality. The standard and most accurate way to check the print feasibility of a part, for a specific AM machine, is to simulate the execution of its G-code, that is generated by a CAM software programmed for that AM machine. This simulation could provide useful information, such as material usage, cost, and build times. It is assumed that the CAM software utilises the latest firmware update for the 3D printer, as well as the most updated information pertaining to the availability of the machines. This information is typically not known to the organisation that wishes to place an order for their parts. It can become quite challenging for a design engineer working for that organisation to explore possible combinations of available equipment, suitable material, process configurations, and service providers [60]. The proposed platform ( Figure 3) is based on the Java Agent Development Framework (JADE) [61]. Design Agents are software modules representing design engineers and provide engineers with a Graphical User Interface for specifying the CAD file of the part and its technical specifications, including support options, materials that could be used, the minimum density, the maximum layer thickness (minimum 3D printing accuracy), quantity, and the order due date. One of the main reasons why an agent-based approach was selected is that this way it is easier to distribute the overall computational load to all Machine Agents that are available. Since the proposed approach is in principle simulation-based, the overall computational time required for simulating the 3d printing process for a wide range of machines would be quite long. Instead, by using agents, the designer will only have to wait for as long as it is required for the slowest Machine Agent to return its process alternatives. At the same time, the interaction of each agent with operators and other IT systems could also be done independently, i.e., each Agent may be interfaced to a different MES or CAM platform. Each platform (belonging, for instance, to a particular manufacturing OEM) will have to host the main container and can have as many other containers (each one belonging, for instance, to a particular AM service provider) as needed. Different platforms may also be connected, which in principle would allow different networks of service providers to be connected. An Agent Management System (AMS) is also part of the platform, providing an assigning service. The Directory Facilitator (DF) provides a Yellow Pages service, allowing agents to find other agents that are also part of the proposed JADE framework. The sequence of steps followed, and the information contained in the messages exchanged among the Design and Machine Agents and the platform components, including the CAM software and the MES are depicted in the UML sequence diagram of Figure 4. 
The list below provides more details about each of these steps:
• Once a part is designed, the design engineer (Designer) uses the part geometry analysis tool, which launches the Design Agent (a software instance), who then uploads the CAD file to the cloud or to a remote repository and provides the minimum technical specifications (maximum layer thickness, minimum part density), the basic geometric information (external dimensions and minimum wall thickness), the list of alternative materials that can be used, the number of parts to be produced, the order due date, and decision criteria weights.
• The Design Agent finds all available Machine Agents (software instances) through the platform DF and then sends a message to all suitable Machine Agents, encapsulating all information provided by the Designer, including the link to the repository containing the CAD file, while asking for a bid.
• The Machine Agent selects all CAM process profiles that fulfil the material and density specifications and support options and calls the CAM software to generate the G-code and simulate the AM process for each one of these profiles.
• The output of the CAM software is then received by the corresponding Machine Agent and the processing time and cost per part are calculated.
• The Machine Agent then requests information regarding the availability of the machine from its MES and estimates the end date for the specific order. This is done by utilising all available idle time slots for producing the number of parts requested by the Design Agent.
• All alternative process configurations are sent by the Machine Agents back to the Design Agent, who then estimates their utility by considering the relative importance of all criteria identified by the Designer.
• Then the best alternative is identified by the Design Agent, who sends a message with an order placement request to the corresponding Machine Agent.
• As soon as the Machine Agent confirms the order, the best alternative process configuration, with all pertinent information regarding the process parameters, the service provider and the cost and tardiness performance, is presented back to the Designer.
• The Machine Agents are then reset so that they can receive new requests from new instances of the same or other Design Agents from the same or another Agents' Platform.
A part geometry analysis tool has been developed and integrated with a commercial CAD platform for launching the Design Agent and for automatically passing to it basic design information, such as the CAD file, the maximum external dimensions and the minimum wall thickness of the part. The designer may edit these values, especially in cases where the geometry of the parts is complex and the calculation of the values of certain features, such as the wall thickness, is not straightforward. A simplified example of the overall information exchange and alternatives generation process is presented in Figure 5. The Designer submits a part design together with the associated technical specifications to the Design Agent, which are then sent, as part of a 'MessageA' type message, to 2 Machine Agents, representing two 3D printers. Each Machine Agent generates all alternatives that satisfy the technical specifications and then invokes its corresponding CAM tool, providing as input the process parameters for each alternative and the CAD file. As soon as the CAM process simulation (G-code generation) is completed, the output of the simulation together with planning information from the MES are used for calculating the performance indicators for each alternative. This information is sent back to the Design Agent as part of a 'MessageB' type message. The Design Agent ranks all received alternatives, by calculating the utility of each one of them after considering the relative importance of all decision criteria, using the Simple Additive Weighting method, and presents the best ones to the designer.
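The ranking step can be made concrete with a short sketch of a Simple Additive Weighting calculation. The helper below is hypothetical and shown in Python purely for illustration (the platform itself is implemented in Java); it also assumes a min-max normalisation per criterion, with cost-type criteria such as cost and tardiness treated as "smaller is better", which is one common way of applying the method.

```python
def saw_rank(alternatives, weights, benefit):
    """Rank alternatives by Simple Additive Weighting (illustrative sketch).

    alternatives -- list of dicts mapping criterion name -> raw value
    weights      -- dict mapping criterion name -> weight (weights sum to 1)
    benefit      -- dict mapping criterion name -> True if larger is better
    Returns (utility, alternative) pairs sorted from best to worst.
    """
    scored = []
    for alt in alternatives:
        utility = 0.0
        for crit, w in weights.items():
            values = [a[crit] for a in alternatives]
            lo, hi = min(values), max(values)
            if hi == lo:
                norm = 1.0                             # criterion does not discriminate
            elif benefit[crit]:
                norm = (alt[crit] - lo) / (hi - lo)    # larger is better (e.g. density)
            else:
                norm = (hi - alt[crit]) / (hi - lo)    # smaller is better (e.g. cost, tardiness)
            utility += w * norm
        scored.append((utility, alt))
    return sorted(scored, key=lambda pair: pair[0], reverse=True)
```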
Software Design and Implementation

The main components of the proposed platform are the Design and the Machine Agents, implemented in the 'DesignAgent' and 'MachineAgent' classes, respectively (Figure 6). Both components are implemented in Java, inheriting the initialisation, lifecycle, and communication functions from the JADE core Agent class. Each Agent has its own settings, such as the location of the Agents' facilities, the decision criteria weights for the Design Agent and the process profiles and cost parameters for the Machine Agent. The information sent from the Design Agent to Machine Agents is contained in a 'MessageA' object, while all alternatives generated by each Machine Agent and sent back to the Design Agent are part of a 'MessageB' object. Each Agent can be instantiated on a computer that has a Java Runtime Environment. The platform has been tested and used in networked environments with multiple nodes (computers). A special interface has been built, allowing a commercial CAD system to launch a Design Agent.
Cost Function

Significant consideration must be given while assessing the cost function for an AM process. The selection criteria of an AM technology largely depend on the machines, location, materials, post-processing operations, and many other factors that are deployed in an AM production line. This must also be considered while deriving a cost function for an AM technology. This will determine the most viable AM technology to build a product. The calculation of a cost function will vary significantly across the different AM technologies that are currently available in the market. This is because, depending on the AM technology deployed, the values of these factors will vary. For example, the cost factors of a material extrusion process are different when compared with the ones associated with a powder bed fusion process. This could be in the form of energy consumption, labour, overheads, materials cost, capital investment for an AM machine, support volume generated for a build, which in particular can affect the material cost depending on the chosen orientation, or it could be in the form of necessary post-processing operations required for a finished product. The cost per order is calculated by each Machine Agent for every machine and process profile by considering process material and time requirements as well as shipping cost elements [62,63]. Equation (1) shows a generic cost function used for estimating the overall cost. This equation is derived based on existing techno-economic models used for conventional manufacturing processes [63], where:
• s denotes the Machine Agent representing a specific machine of a service provider,
• c denotes the process configuration index,
• C_sc is the overall cost for agent s and configuration c,
• Q is the order quantity,
• A_sc is the cost rate (€/h) for configuration c of the machine represented by agent s,
• T_sc represents the processing time per piece if configuration c of the machine represented by agent s is selected,
• M_sc is the total material cost (€/kg) for building the part using configuration c of the machine represented by agent s,
• W_sc is the overall weight (kg) of the piece if configuration c of the machine represented by agent s is used, including support material,
• P_sc represents the set-up and post-processing operations cost per part,
• K_s is the average shipping cost rate from the service provider represented by agent s [€/(km·kg)],
• D_s is the distance between the service provider represented by agent s and the Designer's location,
• F is the fixed cost per order, and
• d(Q, W_sc) is the discount rate applied, based on the overall cost, excluding material costs, which is in turn a function of the part weight and ordered quantity.
Since the cost function is generic enough so that it can be applied across multiple machine profiles and configurations irrespective of the AM technology or machine used, machine and build plate utilisation were assumed to be at their maximum of 100%. Furthermore, the costs regarding labour or energy consumption would be categorized under the cost rate (A_sc). Fixed cost (F) includes overheads and the initial cost of the machines and infrastructure required for the production facility; an illustrative sketch of how these terms can be combined into a per-order cost is given below.
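The following Python sketch shows one plausible way the terms listed above could be assembled into a per-order cost; since Equation (1) itself defines the exact form, the combination used here, together with the example values, is an assumption made purely for illustration.

```python
def order_cost(Q, A, T, M, W, P, K, D, F, discount):
    """Illustrative per-order cost built from the variables defined for Equation (1).

    Q        -- order quantity
    A        -- cost rate of the machine/configuration (EUR/h)
    T        -- processing time per piece (h)
    M        -- material cost (EUR/kg)
    W        -- piece weight including support material (kg)
    P        -- set-up and post-processing cost per part (EUR)
    K        -- shipping cost rate (EUR/(km*kg))
    D        -- provider-to-designer distance (km)
    F        -- fixed cost per order (EUR)
    discount -- discount rate d(Q, W) applied to the non-material cost share
    """
    processing = Q * A * T            # machine time cost
    material = Q * M * W              # material cost (not discounted)
    finishing = Q * P                 # set-up and post-processing
    shipping = K * D * Q * W          # shipping by weight and distance
    non_material = processing + finishing + shipping + F
    return material + (1.0 - discount) * non_material

# Illustrative numbers only
print(order_cost(Q=10, A=6.0, T=2.5, M=25.0, W=0.08, P=1.5, K=0.02, D=800, F=20.0, discount=0.05))
```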
Test Cases

The test cases in this paper were devised for validating the proposed approach, utilising the latest stable version of a platform which is being used by researchers of the I-Form Advanced Manufacturing Research Centre for designing and planning 3D printing experiments. Part of these experiments is conducted in cooperation with the industry. The cases are related to an injection moulding company in Ireland that wishes to produce several prototype plastic parts for allowing their client (OEM) and their sales representatives to review the part before committing to the final design that will lead to the development of the mould, which is an expensive process. It is assumed that six different service providers are available in five European countries, using three different types of FDM 3D printers.

Test Case 1

In the first test case, a relatively simple part was selected, where lower layer thickness, cost, and tardiness are favoured. The criteria weights for evaluating the alternative process configurations from all six service providers were: layer thickness (40%), density (0%), tardiness (30%), and cost (30%). The materials to be considered were Polylactic Acid (PLA) and Acrylonitrile Butadiene Styrene (ABS). Once the product design is completed, a design engineer has the option to use the part geometry analysis tool, which will automatically convert the native CAD file to an STL file and will calculate the external dimensions of the part (Figure 7). Then it will launch the Design Agent (Figure 8). For all 3D printers, the Prusa Slicer CAM tool [18] was interfaced with the corresponding Machine Agents for generating the G-code, while plain text files were used for representing the information related to the machines' availability as stored in standard MES platforms. The process profiles that are available for each machine correspond to 3 different layer thicknesses, i.e., 0.05 mm, 0.15 mm, and 0.30 mm, 2 materials (PLA, ABS), 4 infill densities (0%, 20%, 50%, 70%), and 3 support options. The overall number of profiles is therefore 72 per machine. More process configuration profiles could be prepared and used, but for illustration purposes the number of profiles in this paper was limited to 72. In this case, as per the user's layer thickness, density and material requirements, 2 layer thicknesses (0.05 mm, 0.15 mm), 2 materials (ABS, PLA), 2 infill densities (50%, 70%), and 1 support option were tested. The overall number of combinations is therefore 8, which is equal to the total number of profiles tested and simulated per agent and machine. Figure 9 presents the least and most expensive process alternatives, as well as the one that was selected as the best. For test case 1, agent MA6 provided the least expensive configuration. However, due to its lower degree of availability, this alternative was not the one selected. MA4 provided the best-balanced configuration, exhibiting low cost and tardiness.
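This ranking follows from the weighted-sum step described earlier; using the test case 1 criteria weights, the hypothetical helper sketched above could be called as follows, with alternative values that are invented purely for illustration.

```python
weights = {"layer_thickness": 0.40, "density": 0.00, "tardiness": 0.30, "cost": 0.30}
benefit = {"layer_thickness": False, "density": True, "tardiness": False, "cost": False}

alternatives = [  # illustrative values only, not the figures reported in the paper
    {"provider": "MA4", "layer_thickness": 0.15, "density": 50, "tardiness": 1.0, "cost": 42.0},
    {"provider": "MA6", "layer_thickness": 0.15, "density": 50, "tardiness": 4.0, "cost": 38.0},
    {"provider": "MA2", "layer_thickness": 0.05, "density": 70, "tardiness": 9.0, "cost": 95.0},
]

for utility, alt in saw_rank(alternatives, weights, benefit):
    print(f'{alt["provider"]}: utility = {utility:.2f}')
```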
The most expensive process configuration was provided by agent MA2, since the AM service provider the agent represents is the most expensive one and this particular configuration is related to the lowest possible layer thickness and the highest infill density, leading also to very high tardiness.

Test Case 2

In the second test case, a more complex part was selected (Figure 10). The company would prefer a part with a higher infill density, while cost would not be as important as density, delivery date, and layer thickness. The criteria weights for evaluating the alternative process configurations from all six service providers were: layer thickness (20%), density (50%), tardiness (20%), and cost (10%). The materials to be considered were PLA and ABS. All cost parameters and machines' availability have been assumed to be the same as in test case 1. The parameters used for launching the Design Agent and the information received from the Machine Agents are shown in Figure 11. The best and least expensive alternatives were generated by Agent MA6 (Figure 12). The best alternative, in particular, is a balanced solution, where a profile with the highest density and medium layer thickness was chosen so that cost and tardiness could be kept at low levels. The most expensive alternative was produced by Agent MA2, whose cost rates are the highest among the six AM service providers.

Cases Results Comparison and Simulation Validation

The results obtained from the two cases are summarised in Table 1.
The 30% higher volume of the part corresponding to test case 2, together with the fact that a higher density was selected, led to higher process times (and therefore tardiness) and cost. For the validation of the test cases, two alternatives were chosen: the best alternative of test case 1 and a randomly selected alternative generated for test case 2. The corresponding parts were printed, using the process parameters suggested by the corresponding Machine Agents, on the same 3D printer type that is associated with these agents. The process time and the weight of the parts were measured. The variations observed between the simulated and actual part build time and weight are summarised in Table 2. The differences between simulated and actual process times and part weights are:
• Time per part: 1-2%,
• Weight per part: 3-4%.
Assuming that a 95% accuracy is expected from the simulation process, these variations are well within range. The printed parts are shown in Figure 13.

Conclusions

This paper presents an agent-based approach for automating the process of selecting an AM service provider, the corresponding equipment and the desired process configuration, while considering a set of often conflicting criteria. The main goal of this approach is the implementation of a platform that is capable of supporting designers and engineers towards making informed product design and development decisions. The Machine Agents are in principle capable of interfacing open-ended CAM tools that are used with 3D printers as well as of evaluating quite accurately the performance of several alternative process configurations. One of the main advantages of the proposed approach is that it can handle as many alternative AM service providers, equipment, and configurations as needed, since the overall computing load is distributed evenly to the Machine Agents. The proposed approach and platform could in principle be used with any kind of 3D printing equipment, given that the associated CAM software could be interfaced. However, this cannot always be the case, as, especially in the case of metal AM equipment, the CAM tools used by these machines are often proprietary and do not provide an Application Programming Interface (API) that would allow for straightforward integration with the corresponding Machine Agents. Recent developments in the domain of robotic process automation (RPA) could provide an alternative way for interfacing and utilising the proprietary CAM systems in a near-automated way. With this technology, the interaction between a human operator and a software system may be replicated and executed on-demand in a fully parameterized manner. This opens possibilities for the integration of different CAM tools and platforms that could be interfaced with the proposed agent-based platform. The proposed approach could also complement existing platforms and approaches by, for instance, providing information regarding the cost and time performance of diverse process configurations, so that only a limited number of configurations need to be reviewed in these platforms.
Further information, such as specific parts' geometry characteristics and feature sets, could also be useful for identifying the most suitable process configurations and print profiles, based on the past performance of these profiles in production runs with parts sharing similar features or characteristics. The platform is planned to be presented to the public utilising the central computer server of the Laboratory for Advanced Manufacturing Simulation and Robotics at UCD. As part of the I-Form Advanced Manufacturing Centre, it is also planned to provide further support for different AM technologies that are of particular interest for research teams and the industry.

Funding: This publication has been supported in part by a research grant from Science Foundation Ireland (SFI) under Grant Number 16/RC/3872 and is co-funded under the European Regional Development Fund.

Conflicts of Interest: The authors declare no conflict of interest.
Organo NHC catalyzed aqueous synthesis of 4β-isoxazole-podophyllotoxins: in vitro anticancer, caspase activation, tubulin polymerization inhibition and molecular docking studies

We present, for the first time, the organo-N-heterocyclic carbene (NHC) catalyzed 1,3-dipolar cycloaddition of 4β-O-propargyl podophyllotoxin (1) with in situ aromatic nitrile oxides to afford regioselective 4β-isoxazolepodophyllotoxin hybrids (6a–n) in benign aqueous-organic media. Preliminary anticancer activity results showed that compound 6e displayed superior activity against MCF-7, HeLa and MIA PaCa2 human cell lines compared with podophyllotoxin. Compounds 6j and 6n showed greater activity against the MCF-7 cell line than the positive control. Caspase activation studies revealed that compound 6e at 20 μg ml−1 concentration had greater caspase 3/7 activation in MCF-7 and MIA PaCa2 cells than podophyllotoxin. Furthermore, in vitro tubulin polymerization inhibition studies revealed that compound 6e showed comparable activity with podophyllotoxin. Finally, in silico molecular docking studies of compounds 6e, 6j, 6n and podophyllotoxin on α,β-tubulin (pdb id 1SA0) revealed that compound 6n showed excellent binding energies and inhibition constants compared with podophyllotoxin.

Introduction

Nature has consistently offered us a wide variety of bioactive products to treat serious diseases such as cancer, immune system diseases, neurological conditions, and infections. 1 Among these products is podophyllotoxin (1), one of the most prevalent naturally occurring cyclolignans isolated from Podophyllum peltatum L. and Podophyllum hexandrum, 2 which has been broadly used in clinical studies on diverse malignancies. The mechanisms of action of the semi-synthetic derivatives etoposide (2) and teniposide (3) are significantly different from those of the parent podophyllotoxin. 3,4 These two semi-synthetic derivatives inhibit DNA topoisomerase II, whereas the parent podophyllotoxin inhibits microtubule assembly. 5,6 Even yet, there have been reports on etoposide's toxicity and limitations, including its moderate efficacy, low solubility in water, potential for drug resistance, metabolic inactivation, and other adverse consequences. 7,8 These findings have prompted numerous investigations into the structural modification of etoposide, which include etopophos (4), which addresses the bioavailability aspect. The substitution at the 4β-position resulting in strong inhibition of topoisomerase II was the most significant change. After a study 9,10 on the substitution of heterocycles for etoposide's C-4 sugar unit, MacDonald and colleagues 11 produced a composite pharmacophore model that identified the C-4 molecular area of podophyllotoxin (1) as a potentially variable position for further research. Bulky substituents at C-4 may be advantageous for DNA topo-II inhibition, as further evidenced by comparative molecular field analysis (CoMFA) models revealed by Lee and the group. 11,12 These hypotheses align with the outstanding activity profiles of GL-331 (5), TOP-53 (6), and NK 611 (7). 13 Interestingly, the drug-resistance profiles of GL-331 and TOP-53 differed significantly from that of the parent compound 1, and both compounds demonstrated increased DNA topo-II inhibition and antitumor potential. The activity profiles of these classes of compounds suggest that substitution at position C-4 plays a crucial role and that rational modifications at the C-4 position are feasible.
14 One of the main challenges in medicinal chemistry is the development of anticancer agents, as evidenced by the severe side effects of current chemotherapeutics, which include non-specific targeting, low solubility and an incapability to enter tumour cells. 15 These factors suggest the need for ongoing efforts to develop desired anti-cancer drug-like candidates with minimal side effects. 16,17 As a result, current research is concentrated on the development of novel, safer therapeutic agents that are crucial for clinical use. 18,19 Remarkably, because of their numerous biological uses, N-heterocycles with an oxygen atom are regarded as an important class of compounds in medicinal chemistry. 20 In view of this, the selection of an isoxazole ring in the design and synthesis of biologically active compounds is considered to be a better choice, since in isoxazole both O and N atoms are present in adjacent positions and have a low bond dissociation energy. 21 Also, isoxazole has a weak basic character and, due to a weak N-O bond, this ring breaks simply by photolysis and thermolysis. Predominantly, deprotonation of isoxazole leads to ring-opening, and additional substitutions would lead to good therapeutic activity. 22 Many research groups have been working on the development of new isoxazole-based anticancer active compounds (Fig. 1). 23 Synthesis of isoxazole derivatives is extensively carried out through 24 cyclomerization, cycloaddition, condensation, functionalization etc. In particular, the 1,3-dipolar cycloaddition of nitrile oxides with alkynes has good synthetic value, since it produces biologically useful isoxazoles. 22,23 Thus, Cu(I) and Ru(II) catalyzed 1,3-dipolar cycloadditions of nitrile oxides with terminal alkynes to access regioselective 3,5-di- and 3,4-disubstituted isoxazoles, respectively, have been well developed. 24,25 However, considering the probable disproportionation in metal catalysis and the advantages observed with organo-catalysis in homogeneous catalysis, 26 in 2011 our group developed an organo-N-heterocyclic carbene (NHC) catalyzed 1,3-dipolar cycloaddition of nitrile oxides with alkynes to obtain regioselective 3,5-di- and 3,4,5-trisubstituted isoxazoles in DCM solvent. 27 Since NHCs are distinct Lewis base (nucleophilic) organocatalysts, they have both σ-basicity and π-acidity properties. 28 Starting from initial studies on thiamine-derived NHCs in benzoin 28a and Stetter reactions, 28b the mechanistic variety of NHCs contingent on their properties has led to the progress of numerous extraordinary C-C and C-X (X = heteroatom) bond formations. Certainly, the isolation of the first stable NHC by Arduengo in 1991 (ref. 29) from imidazolium salts revealed their tunable steric and electronic properties by changing the N-substituents on the imidazole ring. In addition, the imidazolium salts, precursors to NHCs, remain stable and easy to handle. Conversely, organocatalytic reactions have appeared as an alternative synthetic strategy that may eventually lead to large-scale pharmaceutical synthesis applications.
30 The development of organocatalytic reactions in aqueous media is especially promising because, generally speaking, organocatalysts are stable in the presence of aqueous media. The use of water as the solvent in N-heterocyclic carbene (NHC)-catalyzed polarity inversion reactions is restricted when compared to enamine catalysis. 28c,31 In 2004, Bode revealed the use of aq. organic media (THF-H2O, 10:1) in the organo-NHC-catalyzed addition of enals and aldehydes. 28p,32 The same group then used a stoichiometric amount of water as a reagent or co-solvent in the organo-NHC catalyzed reaction of formylcyclopropanes and α-chloroaldehyde bisulfite salts. 33 In 2010, Rovis and group developed an organo-NHC catalyzed reaction of α-chloro aldehydes in toluene-water solvent media comprising around 1.0 equivalent of water. 34 In another study, Hoveyda and group developed an enantioselective addition of dimethylphenylsilyl groups to α,β-unsaturated carbonyls in aqueous medium using organo-NHC catalysis, and the results displayed that water is compatible with organo-NHC catalysis. 35 Certainly, traces of water are usually presumed to exist even when anhydrous organic solvents are utilized in organo-NHC-catalyzed reactions. Also, the results of the experimental and theoretical studies of the Amyes, Diver, Gudat, and Nyulaszi groups show that NHCs are reasonably stable in an aqueous environment. 36 Notably, thiamine, an NHC precursor, is indispensable for several biological processes that arise mainly in an aqueous atmosphere. 37 In 2013, Y. R. Chi and co-workers for the first time utilized the catalytic efficacy of an organo-NHC to promote the reaction of enals with enones in pure water. 38 Following nature's lead, the use of aqueous solvent media would be an appropriate choice in NHC-catalyzed reactions. Based on all the above findings and in our ongoing effort to develop aqueous organic synthesis 39a-d and anticancer agents, 39e-k we report an organo-NHC catalyzed 1,3-dipolar cycloaddition between in situ nitrile oxides and 4β-O-propargyl podophyllotoxin to construct new C4-modified isoxazole-linked podophyllotoxins in aq. MeCN media. As well, we also investigated the in vitro anticancer activity, caspase activation and in vitro tubulin polymerization inhibition of the newly synthesized isoxazole-linked podophyllotoxins. Finally, molecular docking studies were carried out for the most potent compounds found in the in vitro activity studies. To the best of our knowledge, this is the first report concerning the organo-NHC catalyzed 1,3-dipolar alkyne-nitrile oxide cycloaddition in aqueous organic media to obtain new biologically active isoxazole compounds.

Results and discussion

At first, the synthesis of the key starting materials of the current work, such as 4β-O-propargyl podophyllotoxin (1) 39f,40 and chlorooximes (hydroximoyl chlorides) (5a-n), 41 was achieved according to literature procedures. Later, we concentrated on the optimization of the reaction conditions for the 1,3-dipolar cycloaddition of terminal alkyne 1 with N-hydroxybenzimidoyl chloride (5a) as a model reaction, using K2CO3 and diverse 5 mol% NHC precatalysts (A-G) at RT in (7:3) H2O/MeCN solvent media.
Finally, we investigated the effect of the solvent system on the yield of 6a under the optimized reaction conditions. Only a trace amount of 6a was isolated using biphasic systems such as aqueous DCM. With the above optimal conditions in hand (5 mol% NHC precatalyst B, K2CO3 and 7:3 water/MeCN), the current approach was extended to a variety of aromatic aldehydes. In general, aromatic aldehydes containing electron-withdrawing groups provided better yields of the corresponding isoxazoles (Scheme 1, entries 6f–n) than the remaining aldehydes (Scheme 1, entries 6a–g).

Based on a literature survey,27,42 we propose a plausible mechanism as shown in Scheme 2. At first, the in situ generated organo-NHC catalyst (obtained from pre-catalyst B and K2CO3) would react with alkyne (1) to form the zwitterionic intermediate (H). This reactive intermediate then interacts with the in situ generated nitrile oxides (obtained from the reaction between chlorooximes (5a–n) and K2CO3) through a nucleophilic attack to give a further zwitterionic intermediate (I), which finally undergoes C–O hetero-cyclization to give the regioselective 3,5-disubstituted 4β-isoxazole–podophyllotoxin hybrids (6a–n).

Scheme 2 A possible mechanism.

The effect of the nature of the phenyl ring attached to the isoxazole moiety on the in vitro anticancer activities was examined using structure–activity relationship (SAR) studies. Introducing a strong electron-releasing group (OCH3) at the 4th position, as in compound 6e, resulted in enhanced activity. Compound 6d, containing weak electron-donating groups at the 3rd and 5th positions (3,5-diCH3), was ranked second in this category. However, the simple phenyl ring compound 6a, and compounds 6b and 6c bearing mono-CH3 groups irrespective of their positions, had reduced potency compared to compounds 6d and 6e (Fig. 3).

With respect to the electron-withdrawing group series, compounds 6j and 6n, bearing 4-NO2 and 3,5-diCN groups, respectively, displayed improved activity. The next best activity was shown by compound 6m, having the 3,5-diCl group. Compounds 6f and 6i, with 4-Cl and 4-CN groups, respectively, had slightly weaker activity than compound 6m. However, compounds 6g, 6h and 6k, containing mono-halogen groups such as 4-Br, 4-F and 3-Cl, respectively, and compound 6l with a 3-CN group, displayed poorer activity than the remaining compounds in this series (Fig. 4).

Caspase activation study

The caspase protease family of enzymes is essential for both the initiation and completion of apoptosis. When they become active, they cleave many regulatory and structural proteins, which causes the cell to break down internally.43 The traditional indicators of apoptosis, including nuclear condensation, DNA fragmentation, and plasma membrane blebbing, are caused by these proteolytic processes.44 The elimination of damaged cells and the preservation of cellular homeostasis are two processes in which apoptosis is essential.45
The terms "intrinsic pathway" and "extrinsic pathway" refer to the two primary caspase activation mechanisms that lead to apoptosis; they are so named because pressures from outside or inside the cell, respectively, often activate them. Cellular stressors such as endoplasmic reticulum stress, metabolic stress, and DNA damage are the main causes of activation of the intrinsic pathway. This route is activated by several chemotherapy medications that are used to treat different types of cancer. When these stimuli come together, the mitochondria undergo outer membrane permeabilization (MOMP), which releases cytochrome c from the mitochondrial intermembrane space into the cytosol.

Hence, podophyllotoxin and the most potent compounds 6e, 6i, 6j, 6m and 6n identified in the above in vitro anticancer activity studies were further tested for caspase activation, and the results are shown in Table S1† (Fig. 5–9). Of all the compounds, 6e showed the strongest caspase activation. In accordance with the results, we noted that the activation of caspases by the compounds was concentration-dependent. Among all compounds, 6e most strongly activated caspase 3/7 in MCF-7 cells, with 94.5%, which is even higher than the 91.9% recorded for podophyllotoxin at 20 mg ml−1. The caspase 3/7 activation by 6e was also significant in MIA PaCa2 cells, at 92.3%, which is higher than the 88.6% recorded for podophyllotoxin at 20 mg ml−1. On the other hand, the caspase 3/7 activation by 6e in HeLa cells was slightly lower, at 81.3%, compared to 86.9% for podophyllotoxin at 20 mg ml−1. These results suggest that compound 6e is a good candidate for the activation of caspase 3/7. Next to 6e, compounds 6j and 6i also strongly induced caspase 3/7 activation in MCF-7 cells, with 90.1% and 89.3%, respectively, recorded at 20 mg ml−1; these values compare with the 91.9% recorded for podophyllotoxin at 20 mg ml−1. The caspase activation by the remaining compounds was average or low. Compound 6m exhibited the least caspase 3/7, 8 and 9 activation, with 44.8%, 53.4%, 65.0%, 63.0%, 41.9% and 51.7%, respectively, in HeLa, MCF-7 and MIA PaCa2 cancer cells.

Tubulin polymerization inhibition

Targeting microtubules in the development of anticancer drugs has been considered an important goal in current medicinal chemistry,46 as they are involved in several cellular functions, such as cell division, organelle transport, motility and signal transduction.47 Antimitotic agents such as vinblastine, colchicine and vincristine inhibit tubulin polymerization via binding to the colchicine or vinca binding sites of tubulin.48 Notably, over the past few years, a few antimitotic agents such as taxanes and vinca alkaloids have been used in clinical trials in various cancer patients.46,49 However, poor solubility, low oral bioavailability and relatively high toxicity limit the use of these antimitotic agents in clinical trials for cancer,46–50 creating a crucial need to develop novel antimitotic agents. Interestingly, several C4-ring-modified podophyllotoxin51 and isoxazole52 derivatives have been reported to inhibit tubulin polymerization.
Molecular docking studies

Molecular docking studies of the most potent compounds 6e, 6j and 6n identified in the above in vitro studies, together with the positive control podophyllotoxin, were carried out by taking α,β-tubulin (PDB ID 1SA0) as the target,53 and the results are shown in Table 4. Compound 6n displayed the most favourable binding energy (−9.39 kcal mol−1) and an inhibition constant of 130.79 nM. Compound 6e was ranked second in this study, with a binding energy of −7.99 kcal mol−1 and an inhibition constant of 1.38 μM. Compound 6j displayed a binding energy of −7.84 kcal mol−1 and an inhibition constant of 1.84 μM. For comparison, the positive control podophyllotoxin showed a binding energy of −9.22 kcal mol−1 and an inhibition constant of 173.47 nM. With respect to binding interactions, podophyllotoxin showed π–π stacking with the TYR224 residue, and compound 6j formed a salt bridge with the ARG2 residue. Overall, the results revealed that compound 6n showed a more encouraging binding energy and inhibition constant than podophyllotoxin (Fig. 10 and 11).

Fig. 2 Structures of NHC pre-catalysts used in the present study.
Fig. 5 Effect of compound 6e against the activation of different caspases in selected cancer cell lines.
Fig. 6 Effect of compound 6i against the activation of different caspases in the selected cancer cell lines.
Fig. 7 Effect of compound 6j against the activation of different caspases in selected cancer cell lines.
Fig. 8 Effect of compound 6m against the activation of different caspases in selected cancer cell lines.
Fig. 9 Effect of compound 6n against the activation of different caspases in selected cancer cell lines.
Fig. 10 Effect of podophyllotoxin against the activation of different caspases in the selected cancer cell lines.
Table 4 Molecular docking results of 6e, 6j and 6n and podophyllotoxin
2024-07-28T05:20:02.492Z
2024-07-26T00:00:00.000
{ "year": 2024, "sha1": "7503d4590e973d22a3df94ba3a26a949507a5ad3", "oa_license": "CCBYNC", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "7503d4590e973d22a3df94ba3a26a949507a5ad3", "s2fieldsofstudy": [ "Chemistry", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
259052539
pes2o/s2orc
v3-fos-license
Research on the Mechanism of Transfer Payment Policy on Resource Dependence of Resource-Depleted Cities : The objective of this study is to clarify the impact and mechanism of transfer payment policies on the resource dependence of resource-depleted cities. Based on the panel data of 113 prefecture-level resource-based cities from 2006 to 2017, this study uses a multi-period difference-in-differences model to conduct an empirical study on the impact and mechanism of transfer payment policies on resource-depleted cities. The results are as follows: Firstly, the transfer payment policy can reduce the resource dependence of resource-depleted cities. Secondly, there is a significant difference between the eastern region and the central and western regions in terms of the effects of policy implementation. Thirdly, transfer payment policies reduce local dependence on resources, mainly through upgrading industrial structures and enhancing infrastructure construction and technological progress. The research indicates that providing financial policy support for the transformation of resource-depleted cities, exploring ways to reduce resource dependence in the eastern region, playing an exemplary role, and expanding the intensity of urban industrial transformation are of reference significance for the sustainable development of resource-depleted cities. This study also contributes to the coordinated development of the regional economy and the policy formulation of the sustainable development of resource-depleted cities. Introduction Resource-based cities are cities whose development mainly depends on the exploitation and processing of non-renewable resources such as minerals, forests, and petroleum in the region. There are 262 resource-based cities in China, accounting for 40% of the total number of cities in China. Since 1949, a total of 5.8 billion tons of iron ore, 52.9 billion tons of raw coal, and 5.5 billion tons of crude oil have been mined, and this has made an indelible contribution to the economic development of resource-based cities. However, as the non-renewable resources in these cities began to be exhausted, some of these cities have less than 30% recoverable reserves of natural resources and become resource-depleted cities. Over-reliance on resource-based industries crowded out other local industries and this phenomenon has been known as the "Dutch Disease" or "Resource Curse" [1,2]. Shao and Qi have proved that there exists a serious "Resource Curse" phenomenon in China [3]. James and Aadland pointed out that enterprises with high pollution, high emissions, and low efficiency lead to the emission of a large amount of carbon dioxide and air pollutants, causing serious damage to the ecological environment [4]. Ross showed that over-dependence on natural resources makes resource-based cities fall into the economic development dilemma of a single industrial structure, a high proportion of "three high" industries, and a low innovation capability [5]. During the process of continuous advancement of energy conservation and emission reduction in China, resource-depleted cities not only face the pressure of seeking new economic growth but also face the enormous pressure of energy conservation and emission reduction. The economic and social contradictions of resource-depleted cities are the concentrated embodiment of the problems faced by high-quality development. 
Therefore, it can be seen that whether resource-depleted cities can reduce their dependence on resources and realize transformation and upgrading is the key to their high-quality economic development, which is of great significance for local social stability and economic development. How to reduce the resource dependence of resource-depleted cities, help them out of the "Resource Curse" dilemma, and complete the economic transformation have always been the focus of scholars. Resource-based cities can improve the efficiency of transformation through economic and social environment construction, the inflow of new industrial factors, the improvement of existing resource allocation levels, and the improvement of allocation efficiency [6]. Political incentives and performance competition affect the transformation efficiency of resource-depleted cities, especially the solution to environmental pollution [7,8]. At the same time, most studies on the transformation of resource-depleted cities take specific cities as case studies. For example, Wen et al. take Benxi as an example to prove that the transformation of the original industry and the elongation of the industrial chain through high-tech and modern low-carbon and environmental protection can contribute to the transformation process [9]. Research based on the carbon emission data of Xuzhou in Jiangsu province showed that there is a decoupling between the urbanization transition and the environment of resource-depleted cities [10]. A study about the ecological transformation of six resource-depleted cities in Jilin province found that the level of science and technology can promote ecological efficiency [11]. In recent years, with the promulgation of the <National Sustainable Development Plan for Resource-Based Cities (2013-2020)> (hereinafter referred to as the <Plan>), the role of policies in the transformation of resource-based cities has gradually attracted scholars' attention. The study evaluated the environmental impact of the <Plan> on resource-based cities and demonstrated through experiments that the introduction of the <Plan> can help reduce the emission intensity of pollutants in resource-based cities [12]. The supportive policies for resource-depleted cities could significantly improve per capita GDP and employment rate [13]. It was proved that transformation policies can promote the economic growth of resource-depleted cities by constructing a heterogeneous time-series difference-in-differences model [14]. By constructing the fixed-effect model and the dynamic system generalized distance model (SYS-GMM), Zhang et al. concluded that the governance ability of resource-based cities has an important influence on the degree of urban resource dependence and proposed different measures in, and effects on, the eastern region, northeast region, and declining cities [15]. The Chinese government has been making continuous efforts to reduce the dependence of resource-depleted cities on resources and promote the transformation and upgrading process of resource-depleted cities. In order to advance the process, the State Council promulgated the <Several Opinions of the State Council on Promoting the Sustainable Development of Resource-depleted cities> in 2007, proposing a policy of direct financial transfer payments from the central government to resource-depleted cities. 
Subsequently, the National Development and Reform Commission, the Ministry of Land and Resources, and the Ministry of Finance included 69 resource-depleted cities (including county-level cities and municipal districts) in the support list of financial transfer in three batches in 2008, 2009, and 2012 and arranged for the central government to provide annual subsidies to resource-depleted cities, with a total of nearly 160 billion yuan in financial transfer payments. The goal is to help resource-depleted cities establish and improve the compensation mechanism for resource development and the assistance mechanism for declining industries so that they can improve the efficiency of resource utilization, reduce the degree of resource dependence, complete the transformation and innovation, and realize the green and sustainable development of the urban economy. Existing research has made a lot of evaluations of transfer payment policy in economic development, green innovation, and environmental protection in resource-depleted cities. Supporters believed that this policy encouraged enterprises to make capital investments by providing capital subsidies, tax cuts, and fee reductions, so as to promote technological progress and industrial upgrading of production and achieve higher value-added production activities [16,17]. Xu and Tan affirmed the role of transfer payments in increasing per capita GDP in resource-depleted cities [18]. A study found that a transfer payment policy can promote green technology innovation in resource-depleted cities, and the policy effect is strengthened over time [19]. Based on the enterprise data, the transfer payment policy could promote employment and improve output [18]. Based on the differencein-differences model, the results confirmed the effect of the transfer payment policy on urban carbon emission reduction [20]. Critics have different opinions as follows. Due to the information asymmetry between the central government and local government, the transfer payment policy would induce the local government to reduce tax collection and reduce the level of fiscal effort, so as to obtain more benefits from transfer payment [21,22]. López-Laborda and Julio pointed out that the "common pool problem" of transfer payments led to the phenomenon of "fiscal illusion" in local governments [23]. The benefits of local public services financed by transfer payments were enjoyed by regions, while most of the costs were borne by other regions, which ultimately led to deviations in the incentive mechanism for local government behavior. Liu pointed out that more transfer payments would not necessarily lead to faster economic growth, and transfer payments would have a negative impact on the economic growth of backward regions without a good system [24]. They believed that the transfer payment policy would make resource-depleted cities fall into an "incentive trap", which led to slower economic growth [16]. The government was short-sighted in the use of transfer payment, investing, and supporting large state-owned enterprises in order to ensure regional GDP growth and employment, but these enterprises were usually highly polluting and inefficient [25]. Increased government intervention had a significant inhibitory effect on local innovative development [26]. The local area was caught in a vicious circle in which green technology innovation was not valued and the natural environment was continuously damaged. 
At present, there is no specific literature on the impact of financial transfer payment policy on resource endowment, and the corresponding mechanism is not clear. The few existing studies focus on specific urban cases and lack macro-level analysis. In order to explore the relationship between transfer payments and resource dependence, this study uses annual data on 113 resource-based cities in China from 2006 to 2017 and a difference-in-differences model to conduct an empirical study of the magnitude and mechanism of the effect of transfer payments on resource dependence. The data are from the China Statistical Yearbook 2006-2017. This study also explains the mechanism through which transfer payments affect resource dependence, namely industrial transformation, and explains the different effects of the policy in different regions through heterogeneity analysis. Because the financial transfer payment policy only affects resource-depleted cities, it provides a natural setting for the difference-in-differences model to divide the sample into an experimental group and a control group. Meanwhile, the transfer payment policy was carried out in three batches, and the effect period of each batch is more than 3 years, so the data structure is suitable for a multi-period difference-in-differences model. Compared with a general regression method, the biggest advantage of the difference-in-differences model is that it can avoid the endogeneity problem. Since policies are generally exogenous to microeconomic agents, there is no reverse causality problem. The fixed-effect estimation in the difference-in-differences model can also alleviate the omitted variable bias problem. Considering these advantages, we choose the difference-in-differences model as the empirical method in this paper. This study provides the following three theoretical contributions. Firstly, the research explores the effect of the transfer payment policy on the resource dependence of resource-depleted cities at the prefecture level, which fills a gap in the related literature and contributes to the sustainable development goals of resource-based cities in China. Secondly, this study discusses the ways in which transfer payments affect local resource dependence and selects control variables to measure regional heterogeneity, including the economic development level parameters (lngdp) and (lngdpp) [27,28], the government size parameter (lngov) [29], the urbanization level parameter (lnden) [30], the industrial development scale parameter (lnqys) [31], the urban infrastructure level parameter (lnpas) [32], the urban pollutant emission level parameter (lnwat) [33], the urban pollutant treatment level parameter (su) [34], the science and technology investment level parameter (sci) [35] and the education investment level parameter (edu). This study quantitatively analyzes the role of industrial transformation and puts forward some suggestions for the implementation of transfer payment or similar policies. Finally, the research conducts a spatial heterogeneity analysis of the effect of the transfer payment policy on resource dependence in different regions of China. Based on the results, we recommend that the Chinese government formulate policies according to local conditions to ensure that resource-depleted cities in various regions achieve the goal of sustainable development as soon as possible.
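To make the identification logic concrete, the following minimal sketch (the numbers are invented for illustration and are not the paper's data) shows how a simple 2x2 difference-in-differences comparison nets out both time-invariant city characteristics and shocks common to all cities, which is why the approach can sidestep the omitted variable concerns mentioned above.

```python
# Minimal illustration of the difference-in-differences idea (toy numbers only).
# A time-invariant city characteristic and a common year shock cancel out of the estimate.

pre_treated, post_treated = 0.080, 0.060   # mean resource dependence, treated cities
pre_control, post_control = 0.050, 0.048   # mean resource dependence, control cities

trend_treated = post_treated - pre_treated   # -0.020 (policy effect + common trend)
trend_control = post_control - pre_control   # -0.002 (common trend only)

did_estimate = trend_treated - trend_control # -0.018: change attributable to the policy
print(f"DID estimate of the policy effect: {did_estimate:.3f}")
```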
Research Hypothesis Financial transfer payment policies mainly include general transfer payments, transfer payments for ethnic minority areas, government awards and subsidies for counties and townships, transfer payments for wage adjustments, transfer payments for rural tax and fee reform, and financial subsidies for year-end settlement. The financial transfer payment policy undertakes the important mission of establishing and improving the long-term sustainable development mechanism of resource-depleted cities, cultivating replacement industries, solving social problems such as employment, and strengthening environmental governance and ecological protection. It is one of the important measures taken by the state to promote the sustainable development of resource-based cities. In the context of a freer flow of factors and a more open economy, unbalanced and inadequate financial, resource, and environmental problems in regional economic development have become a key link that restricts China's high-quality economic development. As an important part of regional economic development, the development of resource-depleted cities needs to shift from the traditional factor-driven growth model to a new growth model focusing on improving quality and efficiency urgently [36]. The research results of foreign scholars on regional economic development show that the coordination relationship between regions can be comprehensively considered from the perspective of equilibrium analysis framework, spatial perspective, and government intervention [37][38][39][40][41][42][43][44]. Chinese scholars have also understood the interaction mechanism between increasing returns, knowledge, human capital, innovation, regional disparities [45][46][47][48], etc., from the perspective of regional science and new economic growth theory. This provides a theoretical explanation and policy foundation for the sustainable development of resource-depleted cities to promote regional coordinated development in China. Resource-depleted cities play an important role in China's sustained regional economic growth and ensuring the supply of resources and energy, which is an important link to the high-quality and high-level development of China's economy, and financial transfer payment is an important policy means for the government to regulate regional economy [49]. According to the transmission path of the transfer payment effect, the impact of the transfer payment on regional economic development is transmitted through the local government fiscal policy. Whether transfer payment can reduce the degree of resource dependence of resource-depleted cities is critical to whether it can promote the sustainable economic development of these cities and thus regional economic sustainable development. Therefore, this study focuses on the impact of the transfer payment policy on the resource dependence of resource-depleted cities and its mechanism, and the research hypothesis is put forward accordingly. According to the existing research, economic logic, and historical experience, transfer payments for resource-depleted cities are mainly used to solve the problems in non-resource industry continuation, infrastructure construction, environmental protection, and green technology innovation of resource-depleted cities [19]. 
Firstly, the finance transfer payment policy should play a guiding role in the adjustment and industrial structure transformation of resource-depleted cities, the vigorous development of tertiary industry, accelerating the replacement and replacement of non-resource-based enterprises, and encouragement of innovative development of small and medium-sized enterprises by optimizing the structure of local budget expenditure, so as to provide a new perspective and ideas for regional industrial transformation. Ma and Yu point out that support policies for resource-depleted cities can promote the transformation and upgrading of the manufacturing industry through the effect of resource reallocation [50]. Secondly, policy needs to be supportive. Transfer payment policies can effectively alleviate the plight of the local government's fiscal deficit and help the government to increase the proportion of scientific and technological expenditure and education expenditure to improve local infrastructure construction, such as the medical and health system and education system [51]. Thus, it is conducive to the introduction and cultivation of local high-tech talents. Finally, the policy has to play an incentive role. By reducing taxes and fees and increasing subsidies, high pollution and high emissions enterprises have more motivation and willingness to innovate green technologies and improve their production efficiency [3]. This can achieve the purpose of reducing the discharge of pollutants, protecting the local environment, enabling resource-depleted cities to enter into healthy development, and reducing resource dependence. Accordingly, this study proposes the following research hypotheses: H1. Transfer payment policy helps reduce the resource dependence of resource-depleted cities. Furthermore, referring to the polarization theory of regional economic development, economic development has a tendency towards nonequilibrium and divergent evolution. The non-homogeneity and illiquidity of production factors in economic development lead to the inability to replace production factors that hinder balanced development in various regions, making the process of economic development not leading to equilibrium but to the strengthening of regional differences [24]. On the one hand, the non-homogeneity and immobility of the production factors in economic development lead to the fact that production factors cannot be completely replaced in various places, so the process of economic development does not lead to equilibrium but strengthens regional differences. On the other hand, markets are not characterized by perfect competition, but by monopoly, oligopoly, and externalities. Important information about technology and innovation is not freely available but needs to be disseminated through the economic system. Myrdal's cyclic accumulation process theory emphasizes the polarization effect [39]. Active local development will inhibit the development of its surrounding areas. For example, prosperous areas attract high-skilled labor forces in stagnant areas, which weakens the innovation potential of surrounding areas, damages the surrounding environment, and intensifies the competition of production factors, making stagnant areas fall into a vicious cycle. Therefore, the stimulus will accumulate over time and form a fixed gap, and the non-equilibrium state will be reinforced with economic development. 
There are huge differences in endowments (such as infrastructure, economic development, social development, and marketization) among the eastern, central, and western regions of China, and such differences continue to expand with the flow of capital and labor [52,53]. Due to its proximity to trading ports, the resource-based cities in the eastern region are superior to those in the central and western regions in terms of economic aggregate, scientific and technological innovation, talent training and attraction, and the overall development of industrial structure is relatively complete [54]. In terms of environmental protection issues, the eastern region has stronger environmental awareness and higher environmental protection standards. Meanwhile, many high-polluting industries have been relocated to the underdeveloped western region. Most of the cities in the central region are dominated by industry and agriculture, and the service industry started late. They also have many problems in economic development, such as an unreasonable development of resources and environmental pollution. The economic development in western China is Sustainability 2023, 15, 8994 6 of 28 rough, with backward technology and serious environmental pollution [55]. Therefore, if the transfer payment policy is implemented for resource-depleted cities in the three regions, the effects of the policy may be quite different. One view holds that resource-depleted cities in the eastern region have a better policy environment, so they are more motivated to reduce resource dependence than cities in the central and western regions. Another point of view is that the mature development of the eastern region will reduce the marginal effect of policy, but the resource-depleted cities in the central and western regions can reduce resource dependence more effectively with the support of policies. Based on this, the study proposes the second hypothesis: H2. There is regional heterogeneity in the effects of the transfer payment policy. The eastern, central, and western regions will have different responses to the transfer payment policy. Lastly, industrial structure transformation has always been the key factor in promoting urban transformation. Resource-based cities take advantage of their own resource endowment to gather resource-intensive industries and their supporting industries, forming a "heavy" industrial structure [56]. Taking coal resource-based cities as an example, their industries start from coal and gradually form an industrial chain, including coal mining, processing, sales, and other links under the dual influence of the scale and agglomeration effect. Their industrial structure is deeply influenced by coal resource endowment, making economic development constrained by resource endowment and falling into the "dilemma" of the original industrial structure [57]. Resource-based cities rely on local resources and concentrate on the exploitation and rough processing of core resources, which will have a crowding out effect on high-quality human capital and technical elements, solidifying the original industrial structure [58], which is not conducive to the upgrading of the industrial structure of resource-based cities. Industrial structure transformation and upgrading can be achieved through the improvement of industrial structures and rationalization of industrial structures [31]. The improvement of industrial structures is the process of the development of industrial structures from a low level to a high level. 
In a narrow sense, it refers to the process of industrial structures changing from labor-intensive to capital-intensive structures and then to technology-intensive structures. In a broad sense, it refers to the process of industrial structures changing from primary industries to secondary and tertiary industries. The rationalization of industrial structures represents the enhancement of the coordination ability and the improvement of correlation between different industries, which reflects the aggregation quality among industries. When the evolution of industrial structures synergistically drives the improvement of labor productivity within each industry and the industry with higher labor productivity accounts for a larger share, it belongs to the advanced industrial structures of high quality. Appropriate policies can lead to the innovation and development of enterprises and help enterprises improve their innovation efficiency [22]. As the subsidies are directly paid by the central government to the local government, the transfer payment policy can assist the local government to provide financial support and tax incentives for the technological innovation of enterprises, lead the production factors to flow from low-efficiency enterprises into high-efficiency enterprises, and drive the industrial structures from a low level to a high level. Finally, the transformation of local industrial structures will be promoted, tertiary industries will be the focus of economic development, and the mining of non-renewable resources will be reduced. The industry needs correct guidance in the process of development to avoid resource misallocation and resource waste caused by blind investments and overproduction. The goal of industrial structure rationalization is to help enterprises reduce the friction on the unreasonable industrial structure, reduce the replacement cost of the enterprises, make up for the incomplete market information, and improve the efficiency of resource allocation among industries. At the same time, strengthening infrastructure construction is conducive to cultivating local technical personnel and attracting more high-quality talents [54]. So that strengthening infrastructure and technological progress are also important factors driving urban transformation. Wherein, infrastructure construction includes a series of measures such as local investment in science and technology, improvement of the education system and medical system, construction of urban roads, and urban greening. Technological progress represents the goal of protecting the natural environment and ecology and achieving sustainable development by means of green technology innovation, improving production efficiency, and reducing pollutant emissions. Based on the above analysis, the third hypothesis is proposed: H3. The transfer payment policy affects the resource dependence of resource-depleted cities through industrial transformation, infrastructure construction, and technological progress. There are 12 cities in the first batch, 32 cities in the second batch, and 25 cities in the third batch, making a total of 69 resource-depleted cities, including prefecture-level, county-level, and municipal districts. This policy constructs an experiment for the study of using the difference-in-differences model to assess the impact of transfer payments on regional resource dependence. 
Considering that the development level and resource abundance of different cities in China are quite divergent, this study restricts the research scope to the resource-based cities identified in the <National Sustainable Development Plan for Resource-Based Cities (2013-2020)>. In view of data availability, a total of 113 resource-based cities at the prefecture level are selected as research samples, 24 of which are resource-depleted cities forming the experimental group, with the remaining 89 cities as the control group. On this basis, referring to the practice of Song, Liu, and Zhao [19,59], the corresponding multi-period difference-in-differences model is constructed as follows:

rd_it = β0 + β1·DID_it + β2·control_it + η_i + γ_t + ε_it    (1)

In Formula (1), subscripts i and t represent the region and year, respectively. The explained variable rd_it is the resource dependence degree of city i in year t. DID_it is the core explanatory variable, namely the multi-period difference-in-differences variable (DID_it = treatment_i × post_it). treatment_i indicates whether city i is a resource-depleted city (treatment_i = 1 if so, otherwise treatment_i = 0). post_it indicates whether the policy has been implemented in city i by year t (post_it = 1 if so, otherwise post_it = 0). control_it represents the control variables that affect the degree of resource dependence of city i in year t. η_i is the individual fixed effect, which controls for individual factors that affect the degree of resource dependence but do not change over time. γ_t is the time fixed effect, which controls for time factors affecting all regions, and ε_it is the error term. The sign and magnitude of β1 reflect the direction and degree of influence of the transfer payment policy on local resource dependence in resource-depleted cities.

Explained Variable and Core Explanatory Variable

The explained variable is regional resource dependence (rd). Resource dependence refers to the amount of natural resources required for regional economic development, and it is usually measured by a ratio indicator, such as the proportion of resource industry output value in GDP [60] or the proportion of resource industry employees in the total number of employees [61]. This study uses the ratio of mining industry employees to the total population at the end of the year [62] to measure the resource dependence of a region. Because the mining industry covers a wide range, including coal, oil, natural gas, metal and non-metallic mineral processing, wood harvesting and other industries directly related to natural resources, it can comprehensively measure the economy's dependence on natural resources. At the same time, the total population at the end of the year is a more stable statistical indicator than the employed population at the end of the year, which makes the data more robust. DID is the core explanatory variable, representing whether the locality is classified as a resource-depleted city and provided with the corresponding transfer payments.

Control Variables

Since the degree of resource dependence is closely related to regional economic and social development, in order to make the resource dependence of the experimental group and the control group comparable, it is necessary to control for the characteristics that affect the economic and social development of both groups.
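As a rough illustration of how Formula (1) could be estimated, the sketch below fits a two-way (city and year) fixed-effects regression in Python. This is not the authors' code; the file name and column names (rd, treatment, post, and the control variables) are placeholders standing in for the panel described in the text, and clustering standard errors at the city level is a common but here assumed choice.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical balanced panel: one row per city-year (113 cities x 12 years).
# Assumed columns: city, year, rd (mining employees / year-end population),
# treatment (1 = resource-depleted city), post (1 = policy in force in that year),
# plus the control variables described in the text.
df = pd.read_csv("resource_city_panel.csv")   # placeholder file name

df["DID"] = df["treatment"] * df["post"]      # core explanatory variable DID_it

controls = "lngdp + lngdpp + lngov + lnden + lnqys + lnpas + lnwat + su + sci + edu"
formula = f"rd ~ DID + {controls} + C(city) + C(year)"   # C(.) adds city and year fixed effects

# Cluster standard errors at the city level (an assumed, common choice for city panels).
result = smf.ols(formula, data=df).fit(cov_type="cluster",
                                       cov_kwds={"groups": df["city"]})
print(result.params["DID"], result.pvalues["DID"])        # beta_1: policy effect on rd
```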
The control variables selected in this study to measure regional characteristics include (1) the level of economic development, specifically the logarithm of real GDP per capita (lngdp) and its quadratic term (lngdpp), of which the GDP value is deflated with 2006 as the base year. Per capita, GDP can directly reflect regional economic conditions and economic development levels [27]. Meanwhile, the quadratic term of per capita GDP is often used to test the classical Environmental Kuznets Curve (EKC) and analyze the "inverted U-shaped" relationship between economic development level and environmental indicators [28], so as to control the influence of the economic development degree on the resource dependence degree. (2) The government's size, specifically the local public budget expenditure (lngov), will affect the degree of environmental emphasis of local governments in the process of economic development. Reasonable fiscal expenditure can improve the level of public services and help reduce resource dependence [29]. (3) The level of urbanization, specifically the logarithm of population density (lnden), reflects the level of urbanization to some extent. The more concentrated the population density, the higher the level of economic development of a city, thus influencing the degree of resource dependence [30]. (4) The industrial development scale is the number of industrial enterprises above the designated size (lnqys). The more industrial enterprises there are, the heavier the local dependence on the secondary industry will be and the higher the dependence on non-renewable resources will be [31]. (5) The level of urban infrastructure is the logarithm value of road area per capita (lnpas). The better the level of urban infrastructure, the higher the level of local economic and technological development, so it will affect the degree of resource dependence. We choose per capita road area to represent the level of urban infrastructure according to Yuan et al. [32]. (6) The emission level of pollutants in the city is represented by lnwat [33]. (7) The urban pollutant treatment level, specifically selected as the representative general industrial solid waste comprehensive utilization rate (su), can effectively reflect the local pollution status of an area [34]. (8) The science and technology investment level (sci), which refers to the proportion of science and technology expenditure in government expenditure and the continuous improvement of the regional technology level, will create a good development environment for industrial transformation and technological innovation, thus reducing regional resource dependence [35]. (9) The education input level (edu) refers to the proportion of education expenditure to government expenditure. The education input level can reflect the education status of the local population. The higher the education level, the more support they have for industrial transformation and green development, thus affecting the resource dependence of the region. Data Description The data used in this study are all from the <China Urban Statistical Yearbook> from 2006 to 2017. Some of the cities have data missing in some years, which are supplemented by searching the statistical yearbooks of various cities and linear interpolation and finally obtaining the balanced panel data of 113 resource-based cities in China over 12 years. The descriptive statistics of each variable are shown in Table 1. 
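For concreteness, the sketch below shows one way the explained variable and the control variables described above could be constructed from yearbook data. The raw column names are invented placeholders, and the exact source series (for example, which pollutant series underlies lnwat) are assumptions rather than the authors' definitions.

```python
import numpy as np
import pandas as pd

# Sketch of the variable construction; raw column names are hypothetical placeholders.
raw = pd.read_csv("city_yearbook_raw.csv")

panel = pd.DataFrame({
    "city": raw["city"],
    "year": raw["year"],
    # Explained variable: mining-industry employees over year-end total population.
    "rd": raw["mining_employees"] / raw["total_population"],
    # Economic development: log real GDP per capita (deflated to 2006) and its square.
    "lngdp": np.log(raw["real_gdp_per_capita"]),
    "lngdpp": np.log(raw["real_gdp_per_capita"]) ** 2,
    "lngov": np.log(raw["public_budget_expenditure"]),          # government size
    "lnden": np.log(raw["population_density"]),                 # urbanization level
    "lnqys": np.log(raw["industrial_enterprises_above_size"]),  # industrial scale
    "lnpas": np.log(raw["road_area_per_capita"]),               # infrastructure level
    "lnwat": np.log(raw["wastewater_discharge"]),               # pollutant emissions (assumed series)
    "su": raw["solid_waste_utilization_rate"],                  # pollutant treatment level
    "sci": raw["science_expenditure"] / raw["gov_expenditure"], # S&T input share
    "edu": raw["education_expenditure"] / raw["gov_expenditure"],  # education input share
})
panel.to_csv("resource_city_panel.csv", index=False)
```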
Table 2 shows the baseline regression results of Formula (1), where column (1) is the regression results without control variables, and columns (2)-(8) are the regression results with the gradually introducing control variables. The results show that in the process of gradually introducing control variables, the DID coefficients are all significantly negative at the 1% level. When there are no control variables, the coefficient value is −0.0148, and after introducing all control variables, the coefficient value is −0.0133, which indicates that the implementation of the transfer payment policy has significantly reduced the resource dependence of resource-depleted cities. Each additional unit of transfer payment can reduce the proportion of workers in the extractive industry to 1.67% of the total population at the end of the year; that is, the implementation of a transfer payment policy can reduce the number of workers in the extractive industry, thus reducing the degree of local resource dependence. Therefore, Hypothesis 1 is confirmed. Baseline Regression Result From the perspective of control variables, in columns (2)-(8), the per capita GDP terms are positively correlated with resource dependence, while the quadratic terms of the per capita GDP are negatively correlated. This shows that with the improvement of the level of economic development, the degree of dependence on resources has a trend of first rising and then falling, that is, an inverted "U"-shaped relationship. The absolute value of the quadratic term coefficient is small, indicating that it is a very long process to rely on the improvement of economic development to reduce local resource dependence. The coefficient of population density terms in columns (3)-(8) are positive, which means the improvement of the urbanization level will increase local dependence on resources. This may be due to the coexistence of the population agglomeration dividend brought about by urbanization and the social survival pressure brought about by population increases, but the pressure is significantly greater than the demographic dividend, which makes the development of resource-based cities more dependent on the exploitation of local resources. The coefficients of the number of industrial enterprises above the designated size in columns (4)-(8) are significantly negative, indicating that the increase in the number of urban industrial enterprises is conducive to reducing the dependence on resources. This is because the industrial enterprises above the designated size are not only mining enterprises that rely on natural resources but also assembly-type and modulation-type enterprises that do not rely on natural resources for development. The increase in the number of such enterprises can share the pressure of the employment rate of some state-owned mining enterprises, thereby reducing the dependence on resources. This result preliminarily shows that the transformation of industrial structures has a positive effect on resource dependence. The coefficients of per capita urban road area are significantly positive in columns (5)- (7), but the value is small, indicating that the level of urban infrastructure construction has little impact on resource dependence. The effect of the remaining control variables is not obvious, so they will not be explained. Parallel Trend Test To test the hypothesis of parallel trends and analyze the dynamic effects of policies, we referred to the practice of Wu et al. [31] and conducted the test based on event analysis. 
In order to better reflect the implementation of the three transfer payment policies at different times, we draw a time trend graph for each policy implementation separately. Firstly, China identified the first batch of resource-depleted cities in 2008 and, in 2011, proposed to continue providing transfer payments to these cities except for Panjin city, so here we consider the first batch of resource-depleted cities as having received two policy shocks. We exclude data from the second and third batches of resource-depleted cities as well as Panjin city after 2011, and Figure 1 uses only the first batch of resource-depleted cities as the experimental group and the remaining resource-based cities as the control group. The diagnostic statistics of the first batch of resource-depleted cities (experimental group) and the other cities (control group) are shown in Table 3. Table 3 and Figure 1 show that the 2008 policy began to take effect in 2010, when the resource dependence of the experimental group began to decline. After the 2011 policy, the experimental group showed a completely different trend from the control group during the implementation period, with a slight increase at first and then a significant decrease. The effect of the 2008 policy was weak and accompanied by a delay. On the one hand, the first batch of resource-depleted cities faced high learning costs, and there was no previous successful transformation experience to learn from; the road to transformation was still in an exploratory stage, so the policy effect was not obvious. On the other hand, as the first batch of resource-depleted cities faced the most serious urban problems, with entrenched industrial structures and a high proportion of secondary industry, resource-dependent enterprises were unable to complete the transformation and upgrading of their own industries in a short period, thus creating a policy delay.

Secondly, China identified the second batch of resource-depleted cities in 2009. We exclude data from the first and third batches of resource-depleted cities, and Figure 2 uses only the second batch of resource-depleted cities as the experimental group and the remaining resource-based cities as the control group. The diagnostic statistics of the second batch of resource-depleted cities (experimental group) and the other cities (control group) are shown in Table 4. From Table 4 and Figure 2, it can be seen that after the implementation of the policy, the trend of the experimental group is no longer the same as that of the control group, with a slight increase at first and then a downward trend since 2011. Firstly, the slight increase may be due to the fact that enterprises were dealing with the closure of the extractive industry, and some of the mines already developed in the early stage needed to be worked for a certain amount longer to reduce development losses. Secondly, the policy effect for the second batch of resource-depleted cities was also temporarily delayed, but the delay was shorter than for the first batch. This is because the precedents of the first batch of resource-depleted cities could be used for reference, so the second batch had a relative advantage in the transition.

In addition, China identified the third batch of resource-depleted cities in 2012. Figure 3 shows the trend of the third batch of resource-depleted cities and the other resource-based cities. The diagnostic statistics of the third batch of resource-depleted cities (experimental group) and the other cities (control group) are shown in Table 5. It can be seen from Table 5 and Figure 3 that the resource dependence of the experimental group appears to rise from the time the first batch of resource-depleted cities was identified in 2008, while it started to decrease after the declaration process ended in 2011. The rise is due to the intentional scaling-up of resource extraction by the cities concerned, in the hope of passing the review for the third batch of resource-depleted cities and thereby obtaining the transfer payments issued by the central government. After the declaration ended and the policy started to be implemented, these cities began to focus on industrial transformation efforts.
The decline in resource dependence here is a joint effect of the return of extraction to normal levels and the policy itself. Based on the above analysis, we believe that the data used in this article satisfy the parallel trend assumption.

Next, to further examine parallel trends, we conduct a dynamic effect analysis based on the event study method. Due to the large differences in the time points at which the three batches of resource-depleted cities were designated, the number of years before and after the implementation of the policy differs across batches. Therefore, we analyze the dynamic effects of the three policies separately. We construct the following model, using the year after policy implementation as the comparison benchmark:

rd_it = β0 + Σ_s α_s^pre·D_it^(pre,s) + α^policy·D_it^policy + Σ_s α_s^post·D_it^(post,s) + β2·control_it + η_i + γ_t + ε_it    (2)

In Formula (2), D_it^(pre,s), D_it^policy, and D_it^(post,s) respectively represent the interaction terms of the year dummy variables and the corresponding policy dummy variable for the years before, in, and after the policy implementation, and α_s^pre, α^policy, and α_s^post are the corresponding coefficients. If a coefficient is positive, the influence of the policy in that period is positive; otherwise, the influence is negative. The regression results are shown in Table 6, and Figures 4-6 are drawn according to the regression results.

Table 6 shows the values of the regression coefficients α_s^pre, α^policy, and α_s^post for the different batches, and Figures 4-6 show the dynamic effect coefficients for the three batches of resource-depleted cities. Since our data interval is from 2006 to 2017, we can observe the effect from two years before to nine years after the designation of the first batch of resource-depleted cities in 2008 (here we treat the 2011 policy as a supplement to the 2008 policy, and it is not discussed separately). The second and third batches of resource-depleted cities are excluded from the data, and the dynamic effect analysis is carried out on the first batch of resource-depleted cities. The results are shown in columns (1) and (2) of Table 6, and the dynamic effect coefficients are shown in Figure 4. We can see that the coefficients before the policy implementation are not significant, and the coefficients are significantly negative from the 5th year after the policy implementation, indicating that the data pass the parallel trend test.

It should be noted that the results show that the effect of the 2008 policy was delayed for a long time, indicating that the effect of the initial implementation of transfer payments was not ideal. This corroborates why China introduced the continuation of transfer payments to the first batch of resource-depleted cities in 2011. At the same time, it also shows that our data are highly consistent with reality. From columns (3) and (4) of Table 6 and Figure 5, we can observe the effect from three years before to eight years after the designation of the second batch of resource-depleted cities in 2009. The coefficients before the policy implementation are not significant, and the coefficients are significantly negative from the 4th year after the policy implementation, so the data also pass the parallel trend test.
Compared with the 2008 policy, the delay of the 2009 policy is shorter, indicating that the second batch of resource-depleted cities is more sensitive and active in responding to the transfer payment policy. From columns (5) and (6) in Table 6 and Figure 6, we can see the effect from four years before to five years after the establishment of the third batch of resource-depleted cities in 2012. After the implementation of this policy, the coefficient has been significantly negative, and there is no delay effect. This shows that the third batch of resource-depleted cities already had certain expectations and plans for transformation, so they responded very quickly to the policy. Combining the above figures and table, we further illustrate that the data used in the article pass the parallel trend test. Propensity Score Matching and Difference-in-Differences Analysis Referring to Shi et al., this study uses a propensity score matching and difference-in-differences model to test the robustness of the baseline regression results and uses radius matching, nearest-neighbor matching, and kernel matching methods to verify [63]. Considering that the earliest implementation year of the policy was 2008 and that the policy may affect the changes in the relevant economic variables in the pilot areas [31], this study carries out propensity score matching on the samples only in 2006 and 2007. We use propensity score matching to find the control group with characteristics closest to the experimental group and retain the sample points within the common support in these two years. The matched experimental group and control group are used for regression. The specific model is as follows: The regression results are shown in Table 7. No matter which matching method is adopted, the regression coefficient of the DID term is negative at the 1% significance level, and the value is relatively close to that obtained in the baseline regression. Therefore, the baseline regression results in this study are robust. Placebo Test To further verify the robustness of the baseline regression results, a placebo test is performed. First, we carry out random sampling on all resource-based cities and policy times within the sample, and 24 prefecture-level resource-based cities and their corresponding random policy implementation time points are selected each time; that is, 24 cities correspond to 24 policy implementation time points. Then, we take the 24 cities as the virtual experimental group and the remaining cities as the virtual control group for regression. Repeating the above process 500 times yields the DID regression coefficients of the interaction between 500 virtual treatment groups and virtual policy times (see Figure 7). In Figure 7, the 500 regression coefficients are mainly concentrated around 0, and the p values of most estimates are greater than 0.1; that is, they are not significant at the 10% significance level. Meanwhile, the vertical dotted line marking the baseline regression result is shown in the figure, located at the low tail of the distribution of regression coefficients, indicating that the baseline regression result is not obtained by chance. Figure 7 illustrates that the results pass the placebo test; that is, the baseline regression results are robust.
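A minimal sketch of the placebo procedure just described is given below: 24 pseudo-treated cities and their pseudo-policy years are drawn at random, the DID term is re-estimated, and the 500 resulting coefficients are collected for comparison with the baseline estimate. The data file and column names are illustrative assumptions, not the authors' code.

```python
# Hedged sketch of the placebo (randomization) test with 500 draws.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("resource_cities_panel.csv")          # hypothetical panel data
cities = df["city"].unique()
years = df["year"].unique()
rng = np.random.default_rng(0)

placebo_coefs = []
for _ in range(500):
    fake_cities = rng.choice(cities, size=24, replace=False)
    fake_year = dict(zip(fake_cities, rng.choice(years, size=24)))      # one random policy year per fake city
    df["fake_did"] = df.apply(
        lambda r: int(r["city"] in fake_year and r["year"] >= fake_year[r["city"]]), axis=1)
    res = smf.ols("Y ~ fake_did + C(city) + C(year)", data=df).fit()
    placebo_coefs.append(res.params["fake_did"])

print(np.mean(placebo_coefs), np.percentile(placebo_coefs, [2.5, 97.5]))  # distribution around 0, cf. Figure 7
```

If the baseline DID coefficient lies in the extreme tail of this placebo distribution, as the vertical dotted line in Figure 7 indicates, the estimated effect is unlikely to be an artifact of chance.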
Change the Explained Variables To eliminate statistical bias, robustness tests are carried out by using alternative explained variables. In the baseline regression Formula (1), the explained variable Y_it is calculated as the ratio of the number of employees in the mining industry to the total urban population. We use the ratio of the number of employees in the mining industry to the number of employees at the end of the year (Alter1) and the ratio of the number of employees in the mining industry to the number of employees in the secondary industry (Alter2) to eliminate the error caused by variable selection. After substituting the dependent variable, the regression is performed according to the following model: The results are shown in columns (1)-(4) in Table 8. It can be found that whether the control variables are added or not, the regression coefficient of the DID term is negative at a 1% significance level, indicating that the transfer payment policy can reduce the resource dependence of resource-depleted cities. The baseline regression results are robust.
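The sketch below shows how this substitution of the dependent variable can be run as a loop over the two alternative outcomes, with and without controls as in Table 8; the control variables listed here (lngdp, lngov, lnden) are only a subset of those used in the paper, and all column names are illustrative assumptions.

```python
# Hedged sketch: re-estimate the two-way fixed-effects DID with Alter1 / Alter2
# in place of the baseline dependent variable.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("resource_cities_panel.csv")          # hypothetical panel data
controls = " + lngdp + lngov + lnden"                   # illustrative subset of the paper's controls
for outcome in ["Alter1", "Alter2"]:
    for rhs in ["did", "did" + controls]:               # without and with controls
        res = smf.ols(f"{outcome} ~ {rhs} + C(city) + C(year)", data=df).fit(
            cov_type="cluster", cov_kwds={"groups": df["city"]})
        print(outcome, rhs, round(res.params["did"], 4), round(res.pvalues["did"], 4))
```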
Consider Province-Year Fixed Effect In the baseline regression Formula (1), we only consider city-individual fixed effects and time fixed effects, but there are still many unobservable factors that will affect the regression results. Referring to Li and Shi [20,63], we add a province-year fixed effect to control the effect of unobservable factors that vary over time in each city. The specific model is as follows: wherein δ j t = Province j × Year t , Province j represents the jth province and Year t represents the tth year. The regression result is shown in column (5)-(6) in Table 8. It can be seen from the results that the coefficient of the DID term is still significantly negative after the provincial-year effect is fixed, indicating that the transfer payment policy can effectively reduce the resource dependence of resource-depleted cities, and it also shows that the baseline regression results are robust. Exclude Other Policy Interference To further test the robustness of baseline regression, the effects of other relevant policies during the same period are considered. Since other environmental policies in the same period may also affect the resource dependence of resource-depleted cities, the results obtained by the baseline regression may be formed by multiple policies. The low-carbon city pilot policy was first implemented in 2010, and the carbon market pilot policy was first implemented in 2012. Both policies can reduce carbon emissions in the corresponding pilot areas. The exploitation and processing of natural resources will generate a large amount of carbon dioxide, so the low-carbon city pilot policy and carbon market pilot policy will inhibit the exploitation of natural resources to a certain extent, which can interfere with the baseline regression results. Although the <Air Pollution Prevention and Control Action Plan> enacted in 2013 and other environmental policies will have an impact on the exploitation of local natural resources, these policies affect all cities in China and cannot be excluded from our research. The selection of resource-based cities in the research scope of our research has eliminated the endogeneity problem to the greatest extent. After excluding low-carbon city pilot policy and carbon market pilot policy areas, the regression is performed according to Formula (1), and the results are shown in Table 9. It can be seen from the results that the implementation of a transfer payment policy can still significantly reduce the resource dependence of resource-depleted cities after excluding the role of low-carbon city pilot policies and carbon market policies, so the baseline regression results are robust. Heterogeneity Analysis To verify Hypothesis 2 and examine regional heterogeneity, the city samples are divided into the eastern region, the central region, and the western region. The eastern region contains 28 resource-based cities including 5 resource-depleted cities, the central region contains 55 resource-based cities including 15 resource-depleted cities, and the western region contains 30 resource-based cities including 4 resource-based cities. The regression shown in Formula (1) is carried out under the sample ranges of eastern, central, and western regions, respectively. The results are shown in Table 10. It can be seen from the geographical location columns that the regression coefficients of the DID term in the central and western regions are negative at a 5% significance level, and the numerical values are less than the baseline regression results. 
This indicates that the implementation of the transfer payment policy in central and western regions can effectively help local resource-depleted cities to reduce resource dependence, and this effect is more prominent in the western region. The regression coefficient in the eastern region is not significant and is close to 0, which to a certain extent indicates that the transfer payment policy is not ideal for resource-depleted cities in the eastern region. The differences in original resource endowment and economic and social development in eastern, central, and western regions result in different responses to the transfer payment policy. The eastern regions are relatively developed, and, overall, industrial structures are advanced and complete. Moreover, the eastern region has more social resources such as funds, talents, and infrastructure, so the marginal effect of the transfer payment policy on reducing local resource dependence is small, or even insufficient. On the contrary, the central and western regions are relatively backward in terms of economic development, social development, and urban infrastructure construction. The transfer payment policy can play a better role in the structural upgrading of enterprises, green technological innovation, and the reduction of resource exploitation. From the regression results, it can be seen that the western region has the largest policy role space. So far, Hypothesis 2 has been verified. To explore more heterogeneity, we considered the effect of heterogeneity on city size and resource type on the degree of resource dependence. The difference in city size means that each city has a different economic agglomeration effect and different efficiency in resource allocation and utilization, which ultimately leads to a larger difference in the degree of response to transfer policies. The 113 resource-based cities in the article are divided in conjunction with the <Notice on Adjusting the Criteria for the Division of Urban Scale> issued by the State Council in November 2014. Since the State Council divides cities into five categories, among which the samples of small cities and super-large cities are scarce, we refer to Li and Li to integrate cities into three categories, small and medium-sized cities, large cities, megacities (super and super-large cities) [64]. The regression results are shown in the city size column in Table 10. We find that the DID terms of small and medium-sized cities and large cities are significantly negative at the 1% level, while the regression coefficients of megacities are not significant. Since only one city was identified as a resource-depleted city in the megacity category, the results obtained are not universal, and more attention should be paid to the results of small and medium-sized cities and large cities. The results show that small and medium-sized cities have a stronger effect of reducing resource dependence than large cities, which is counter-intuitive. The common perception is that large cities have stronger economic agglomeration effects than small cities and should respond more positively to policies so that there should be a greater degree of reduction in resource dependence. By comparing the number of employees per unit of industrial enterprises in the two types of cities, we believe that the reason for this result is that in the process of industrial transformation, large cities will face more corporate personnel flows, such as layoffs, job transfer, and re-employment after training, etc. 
On the one hand, frequent staff turnover is a big cost for enterprises. On the other hand, and more importantly, the unemployment rate is a key concern for the local government, and many local extractive enterprises are state-owned enterprises facing the situation of being unable to lay off staff and the difficulty of transformation. Small and medium-sized cities face relatively low silent costs in the process of industrial structure transformation, so their response to transfer payment policies is more active and effective than large cities. The main resources of different resource-depleted cities are different, which will also make them respond differently to the effect of transfer payment policies. Referring to Li and Wang [22], we divide resource-based cities into four categories according to their resources: coal, forestry, metal, and petroleum. Forestry cities account for a minority of resource-based cities, with resources mainly in forests and trees, and the regions are distributed in Jilin, Heilongjiang, and Yunnan. Metal city resources include non-ferrous and ferrous metals, mostly distributed in Liaoning, Henan, Jiangxi, and Gansu provinces. The regression results are shown in the resource type column in Table 10. The results show that the DID term is significantly negative for both coal and oil types, indicating that the transfer payment policy has a positive effect on resource-depleted cities with coal and oil as the main extracted resources. Meanwhile, the regression coefficients of the DID coefficients for the two resources of forestry and metals fail to pass the significance test, although they are negative, indicating that resource-depleted cities dominated by these two resources do not respond significantly to the transfer payment policy. The reason for this result is that coal resources are mostly used for energy conversion after mining, such as electricity and heating. The current research and exploration of new energy generation is becoming more and more mature, which to some extent relieves the demand for coal resources in various regions. This provides a positive role in the transformation of resource-depleted cities with coal mining as the mainstay industry, so the effect of the transfer payment policy is most significant. In contrast, wood is mostly used for building houses and making furniture after mining, while metals are mostly used for manufacturing large machines and precision instruments. Due to the irreplaceable nature of wood and metal resources, the demand for both resources remains high, and resource-depleted cities face the dual pressure of meeting market demands and addressing resource depletion. Therefore, resource-depleted cities with wood and metal as their core industries face more restrictions and challenges in the process of industrial transformation than coal-based resource-depleted cities, which makes the transfer payment policy not so effective. The development of China's resource-depleted cities faces some dilemmas. If the economy continues to rely on resources, it will be completely depleted and the city will also face more serious air pollution problems. If we want to get rid of the dependence on resources, we must change the mode of production, which requires a lot of capital investment, talent introduction, and technological innovation, and the change in the mode of production will make workers face the risk of unemployment. 
The empirical analysis of the impact of transfer payments on the degree of resource dependence is conducted using the difference-in-differences model, and it shows that the transfer payment policy can help reduce the resource dependence of resource-depleted cities, which plays an important guiding role in the fiscal policy governance of resource-depleted cities in China. At the same time, according to the results of Hypothesis 2, the effects of transfer payment policies have regional heterogeneity, and the eastern, central, and western regions will have different responses to transfer payment policies, which also puts forward different challenges and requirements for the financial policies of resource-depleted cities in different regions of China. The eastern, central, and western regions need to take appropriate financial measures in light of their actual conditions to reduce the dependence of their cities on resources, which is of great economic significance for maintaining energy security, safeguarding local people's livelihood, and ensuring high-quality and sustainable economic development. Mechanism Analysis In order to investigate the mechanism through which the transfer payment policy affects the resource dependence of resource-depleted cities, this study introduces the rationalization of industrial structures (RIS) and the optimization of industrial structures (OIS) as mechanism variables from the path of regional industrial structure transformation, government expenditure on science and technology education (INF) as a mechanism variable from the path of infrastructure construction, and technological progress (TEC) as a mechanism variable from the path of improving green innovation and thereby improving production efficiency. The rationalization of industrial structures is used to measure the degree of aggregation between different industries. The optimization of industrial structures is used to measure the upgrading of industrial structures [62]. The expenditure on science and technology education is used to measure the level of infrastructure construction in the region, and technological progress is used to measure the improvement of resource utilization efficiency. Drawing on the research method of Wu et al., we first examine the influence of the policy-time multiplication term (DID) on the mechanism variable and then examine the influence of the mechanism variable on the explained variable [31]. The specific model is as follows: wherein MEC_it represents the mechanism variable of city i in the tth year. When MEC_it is RIS_it, it represents the rationalization of the industrial structure of city i in the tth year. When it is OIS_it, it represents the optimization of the industrial structure of city i in the tth year. When it is INF_it, it represents the infrastructure construction level of city i in the tth year. When it is TEC_it, it represents the technological progress of city i in the tth year. The calculation method of the indices is as follows: the INF variable is represented by the logarithmic value of government expenditure on science, technology, and education, and the TEC variable is represented by the sewage treatment rate. As shown in Formulas (8) and (9), Y_i represents the output value of the ith industry, L_i represents the number of employees in the ith industry, and Y and L represent the region's total output value and the total number of employees, respectively.
The calculation of the rationalization of the industrial structures adopts the Theil index measurement method. According to economic theory, when the economy is in the final equilibrium state, the production efficiency of the industries should be the same (namely, Y_i/L_i = Y/L, and the closer RIS is to 0, the more stable the aggregation between the industries is). In order to facilitate the analysis of the directionality of the effect, the absolute value of RIS is used in the corresponding regressions. The optimization of industrial structures is measured by the ratio of the GDP of the tertiary industry to the GDP of the secondary industry. The higher the OIS value, the more important the tertiary industry is in local economic development. When MEC_it is RIS_it, the regression coefficient θ_1 in Formula (6) reflects the influence of the transfer payment policy on the rationalization of industrial structure. If it is negative, it indicates that the transfer payment policy helps to promote coordination between the various industries and makes the local industrial structures more stable and reasonable. In Formula (7), DID_it × RIS_it is an interaction term, and the coefficient β_1 represents the effect of the transfer payment policy on resource dependence through the rationalization of industrial structure. If it is negative, it means that the transfer payment policy can help enhance resource dependence reduction in resource-depleted cities by coordinating the resource utilization efficiency of the various industries. When MEC_it is OIS_it, the regression coefficient θ_1 in Formula (6) reflects the influence of the transfer payment policy on the optimization of industrial structure. If it is positive, it indicates that the transfer payment policy helps to promote the transformation of the local industrial structures from the secondary industry to the tertiary industry; otherwise, it shows a negative impact. In Formula (7), DID_it × OIS_it is an interaction term, and the coefficient β_1 represents the degree to which the transfer payment policy affects resource dependence through the optimization of industrial structures. If it is negative, it means that the transfer payment policy can help resource-depleted cities to reduce resource dependence by means of industrial structure transformation at an advanced level. When MEC_it is INF_it, the regression coefficient θ_1 in Formula (6) reflects the impact of the transfer payment policy on the infrastructure construction level. If it is positive, it indicates that the policy has a positive impact on local infrastructure construction; otherwise, it has a negative impact. In Formula (7), DID_it × INF_it is an interaction term, and if the coefficient β_1 is negative, it means that the transfer payment policy can reduce the resource dependence of resource-depleted cities by strengthening infrastructure construction. When MEC_it is TEC_it, the regression coefficient θ_1 in Formula (6) reflects the impact of the transfer payment policy on technological progress: if it is positive, it indicates that the policy has a positive impact on local technological progress; otherwise, it has a negative impact. In Formula (7), DID_it × TEC_it is an interaction term, and if the coefficient β_1 is negative, it means that the transfer payment policy can reduce the resource dependence of resource-depleted cities through technological progress. Regression is performed according to Formulas (6) and (7), and the results are reported in Table 11.
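To make the two industrial-structure mechanism variables concrete, the sketch below computes them for one city-year from industry-level output and employment. The Theil-type formula used for RIS is our reading of Formulas (8) and (9), which are not reproduced in the text above, so treat the exact functional form as an assumption; OIS follows directly from the definition given.

```python
# Hedged sketch of the mechanism variables RIS (Theil-type rationalization index,
# taken in absolute value as in the regressions above) and OIS (tertiary/secondary GDP).
import numpy as np

def ris(output_by_industry, employment_by_industry):
    """abs(sum_i (Y_i/Y) * ln((Y_i/L_i) / (Y/L))); 0 means equal labour productivity across industries."""
    Y_i = np.asarray(output_by_industry, dtype=float)
    L_i = np.asarray(employment_by_industry, dtype=float)
    Y, L = Y_i.sum(), L_i.sum()
    return abs(np.sum((Y_i / Y) * np.log((Y_i / L_i) / (Y / L))))

def ois(gdp_tertiary, gdp_secondary):
    """Optimization of industrial structure: tertiary GDP over secondary GDP."""
    return gdp_tertiary / gdp_secondary

# Example with three (hypothetical) industries of one city-year
print(ris([120.0, 300.0, 580.0], [0.8, 1.5, 2.7]), ois(580.0, 300.0))
```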
Columns (1)-(4) in Table 11 show the results of the impact of the transfer payment policy on mechanism variables, and columns (5)- (8) show the estimated results of interactive items between the transfer payment policy and the mechanism variables' effects on resource dependence. It can be seen that the regression coefficient in column (1) is positive, which means that the transfer payment policy has a negative effect on the rationalization of industrial structure; that is, it reduces the coordination degree between different industrial structures in resource-depleted cities. Regression coefficients in column (2) are positive at the 1% significance level, showing that the transfer payment policy plays a significant role in promoting the industrial structure optimization index. This shows that the optimization of industrial structures is conducive to the evolution of the secondary industry to the tertiary industry in resource-depleted cities, and it also shows that the evolution of the industrial structures to the advanced level sacrifices the coordination between different industries. The regression coefficients in columns (3) and (4) are not significant, indicating that the effect of the transfer payment policy on local infrastructure construction and tech-nological progress is not obvious. Furthermore, the regression coefficients of the interaction terms in columns (5)- (8) are all significantly negative, which shows that transfer payment policy can reduce the resource dependence of resource-depleted cities by promoting the transformation of the regional industrial structures to a more rational and advanced level, strengthening the construction of local infrastructure levels and improving the efficiency of resource utilization. Through numerical analysis, the reduction of resource dependence by means of the rationalization of industrial structures is very small, so it can be concluded that the industrial structure transformation is mainly based on optimization and supplemented by rationalization. Based on the above analysis, it can be concluded that the optimization of industrial structures, infrastructure construction, and technological progress is the primary way that the transfer payment policy can reduce resource dependence, and the rationalization of industrial structures is the secondary way. So far, Hypothesis 3 has been verified. Discussion On the one hand, the transfer payment policy brings more opportunities to enterprises in resource-depleted cities. Whether it provides tax subsidies or technical assistance directly to some enterprises with high pollution and high dependence on resources, it enables local enterprises to have more capital and opportunities to seek transformation, promotes the incubation of new industries through green technology innovation and other means, and optimizes the original industrial structure of cities. It also helps improve the utilization efficiency of resources and reduce their dependence on local resources. Therefore, the transfer payment policy can affect the resource dependence of resource-depleted cities through industrial transformation. On the other hand, reducing the degree of resource dependence also means protecting local non-renewable resources, protecting the natural environment, and reducing the emission of pollutants. Especially in the context of the national vigorous implementation of air governance, the transfer payment policy provides support and motivation for these resource-depleted cities. 
This ensures environmental benefits and economic benefits. At the same time, this study makes a corresponding contribution to the research on the impact of transfer payment policies on local resource dependence. Firstly, our study supplements the existing literature on the impact of transfer payment policies on urban resource dependence and the relationship between transfer payment policies and regional development [45][46][47][48][49]. We reviewed the existing research methods [15,59], reconstructed the model for research, and selected ten control variables (such as lngdp, lngov, and lnden [27][28][29][30][31][32][33][34][35]61,62]) for testing and obtained the results. We emphasize that there is an inverted U-shaped relationship between the degree of resource dependence and economic development, and transfer payment policies significantly reduce the degree of resource dependence of resource-depleted cities. In addition, a series of analyses, including a baseline regression test, placebo test, elimination of variable bias test, and interference policy impact test, were carried out to enrich our study, indicating the robustness of the baseline regression results. The implementation of the transfer payment policy can still significantly reduce the degree of resource dependence of resource-depleted cities. This helps further improve our understanding of the relationship between resource dependence and economic development. This test will guide similar follow-up studies. Secondly, the study added the analysis of influencing factors and excluding interfering factors for regional differences in dependency degree and revealed the correlation between dependency degree and variables such as regional enterprise structural adjustment and technological innovation [54,55]. The results indicate that the transfer payment policy has a more obvious effect on the central and western regions of China, while the impact on the resource dependence of the eastern region is not significant. It is helpful for each region of China to adopt different financial measures according to local factual conditions to reduce the dependence on the resources of cities and promote sustainable economic development in each region. It enriches the theory of regional development and is of great significance to the regional coordinated development of the regional economy. Finally, the study has a certain reference significance for the transfer payment policy to promote the transformation of industrial structure. Through mechanism analysis, it is found that the resource dependence of resource-depleted cities can be reduced through the rationalization of industrial structures, the upgrading of industrial structures, and the enhancement of infrastructure construction and technological progress. This puts forward new requirements for the transformation of industrial structures in resource-depleted cities. Under the influence of transfer payment policies, industrial structure transformation can help cities to reduce their dependence on resources, promote the sustainable development of resources, and then feed the sustainable development of society. Therefore, this study can provide new guidance for sustainable development. Conclusions This study is an empirical study of the impact of transfer payment policies on resourcedepleted cities using the difference-in-differences model based on annual data from 113 resource-based cities in China from 2006 to 2017. 
This study is not directly identified by a special theory, but it is an empirical one. Therefore, through the experimental evaluation of the effect and mechanism, the study draws the following conclusions about the correlation between transfer payment policy and resource dependence of resourcedepleted cities: Firstly, transfer payment policy significantly reduces the degree of resource dependence of resource-depleted cities, and the degree of resource dependence presents an inverted U-shaped relationship with economic development. The expansion of the number of industrial enterprises also helps to reduce the degree of resource dependence to some extent. Secondly, numerous analyses show that the baseline regression results are robust. Thirdly, through heterogeneity analysis, it is found that due to the heterogeneity of economic development, transfer payment policies have more obvious effects on the central and western regions of China, while the impact on resource dependence in the eastern region is not significant. In addition, mechanism analysis shows that the transfer payment policy has an impact on the promotion of the transformation of industrial structures through the rationalization of industrial structures, the upgrading of industrial structures, and the enhancement of infrastructure construction and technological progress. Based on the above findings, this study puts forward the following five policy proposals: Firstly, the government should continue to improve the financial policies provided to resource-depleted cities, continuously enhance the positive effects of financial policies on resource-depleted cities by reducing resource dependence, and design a more systematic method of capital utilization to balance the contradiction between industrial transformation and social stability. Secondly, the government should increase its support to the resourcedepleted cities in the central and western regions. Although the central and western regions have weak basic conditions for their own development due to their geographical location, the transfer payment policy has positive effects on these regions. Therefore, it is necessary to continue to expand the policy effect, establish a more stable long-term mechanism, and accelerate the economic development of the central and western regions. Thirdly, re-exploring the way to reduce resource dependence in the eastern region is of great importance. The economic development of the eastern region is rapid, but transfer payments cannot provide the impetus to reduce local resource dependence. It is necessary to find a more appropriate assistance model for the eastern region to help improve other cities. The problem of resource depletion is what many resource-based cities face and will face. The transformation mode and path of resource-depleted cities are worthy of other resourcebased cities to learn and imitate. Moreover, the central government should summarize the individual problems existing in the transformation of resource-depleted cities in different regions, so as to avoid the same problems from recurring in resource-based cities. Last but not least, the local government should continue to expand the intensity of industrial transformation, promote the evolution of the secondary industry to the tertiary industry, continue to carry out green technology innovation, introduce high-quality talents, improve the efficiency of resource utilization, and reduce urban pollution. 
Resource-depleted cities can get rid of the "curse of resources" and enter a benign development mode.
2023-06-04T15:06:32.080Z
2023-06-02T00:00:00.000
{ "year": 2023, "sha1": "b5b9c9058421c3aa0d4af239c40387412939400b", "oa_license": "CCBY", "oa_url": "https://doi.org/10.3390/su15118994", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "04eac02b7f699197f60bbc2697b795a350406762", "s2fieldsofstudy": [ "Economics", "Environmental Science" ], "extfieldsofstudy": [] }
38563144
pes2o/s2orc
v3-fos-license
Spectrum of anemia associated with chronic liver disease Anemia of diverse etiology is a common complication of chronic liver diseases. The causes of anemia include acute or chronic gastrointestinal hemorrhage, and hypersplenism secondary to portal hypertension. Severe hepatocellular disease predisposes to hemorrhage because of impaired blood coagulation caused by deficiency of blood coagulation factors synthesized by hepatocytes, and/or thrombocytopenia. Aplastic anemia, which is characterized by pancytopenia and hypocellular bone marrow, may follow the development of hepatitis. Its presentation includes progressive anemia and hemorrhagic manifestations. Hematological complications of combination therapy for chronic viral hepatitis include clinically significant anemia, secondary to treatment with ribavirin and/or interferon. Ribavirin-induced hemolysis can be reversed by reducing the dose of the drug or discontinuing it altogether. Interferons may contribute to anemia by inducing bone marrow suppression. Alcohol ingestion is implicated in the pathogenesis of chronic liver disease and may contribute to associated anemia. In patients with chronic liver disease, anemia may be exacerbated by deficiency of folic acid and/or vitamin B12 that can occur secondary to inadequate dietary intake or malabsorption. INTRODUCTION Chronic liver diseases frequently are associated with hematological abnormalities. Anemia of diverse etiology occurs in about 75% of patients with chronic liver disease [1] . A major cause of anemia associated with chronic liver disease is hemorrhage, especially into the gastrointestinal tract. Patients with severe hepatocellular disease develop defects of blood coagulation as a consequence of endothelial dysfunction, thrombocytopenia, deficiencies of coagulation factors and various associated disorders [2] . In severe hepatocellular disease, decreased synthesis of liver-produced plasma proteins leads to reduced serum levels of several blood clotting factors. Hemorrhage may occur as a complication of chronic liver disease because of a lack of one or more liver-produced blood clotting factors, thrombocytopenia, and/or defective platelet function. Hemorrhage in such patients may also occur from esophageal or gastric varices secondary to portal hypertension. The biosynthetic pathways of blood coagulation factors Ⅱ, Ⅶ, Ⅸ and Ⅹ are within the hepatocyte and are dependent on vitamin K [3] . Low serum levels of these factors are associated with prolongation of the prothrombin time (PT). When attributable to hepatocellular disease, they are not improved by administration of vitamin K; correction of the associated impaired blood coagulation necessitates infusion of preparations of the deficient factors. Splenomegaly, which is usually caused by portal hypertension in patients with chronic liver disease, may lead to secondary hemolysis, an increase in plasma volume, macrocytosis and megaloblastic anemia. Alcohol, a common etiologic factor of chronic liver disease, is toxic to the bone marrow. Alcoholics often develop secondary malnutrition, a manifestation of which may be anemia caused by folic acid deficiency. In some patients, bone marrow failure and aplastic anemia develop after an episode of hepatitis. Finally, anemia is a recognized complication of treatment of chronic hepatitis C with a combination of interferon and ribavirin: anemia in this context is predominantly caused by ribavirin-induced hemolysis [4] . 
The frequent association of anemia with chronic liver disease and/or hepatocellular failure provides a rationale for examining the role of the liver in the formation and destruction of erythrocytes. Indeed, the liver itself may be implicated in a variety of different mechanisms that contribute to the development of anemia in patients with chronic liver disease. This paper provides an overview of anemia that may complicate chronic liver diseases and the mechanisms responsible. PORTAL HYPERTENSION AND ANEMIA Acute gastrointestinal hemorrhage is a common and potentially serious complication of portal hypertension [5][6][7][8]. It is usually caused by rupture of an esophageal varix. Hemorrhage caused by this mechanism is the second most common cause of mortality in patients with cirrhosis. In such patients, a ruptured esophageal varix is the cause of approximately 70% of episodes of upper gastrointestinal hemorrhage [6]. Acute hemorrhage may induce severe hypovolemia and subsequently secondary iron deficiency anemia. The initial aim of treatment is correction of hypovolemia and restoration of stable hemodynamic function; minimal values for mean arterial pressure and for hemoglobin of 80 mmHg and 8 g/100 mL, respectively, should be maintained. Initially, gelatin-based colloids or solutions of human albumin may be infused to correct hypovolemia. However, infusions of packed erythrocytes in plasma are ideal in this context since such infusions have the potential of correcting, not only hypovolemia, but also secondary anemia. First-line management involves institution of both medical and endoscopic treatments (Table 1) [6]. Medical therapy includes administration of vasoactive drugs, such as somatostatin, octreotide or terlipressin. Optimal endoscopic treatment involves ligation of esophageal varices and obturation of gastric varices with tissue adhesives. In some patients with cirrhosis, chronic hemorrhage into the gastrointestinal tract occurs. Esophageal and gastric varices and/or portal hypertensive gastropathy may be associated with slow chronic loss of blood into the gut and development of chronic iron deficiency anemia. The most important approach to management is prevention of variceal hemorrhage [5,7,8]. The annual incidence of initial variceal hemorrhage in patients with cirrhosis is estimated to be about 4%, but for the group with medium-sized or large varices, the incidence is about 15% [6]. β-blockers or isosorbide 5-mononitrate may reduce the rate of transformation of small varices into large varices and decrease the incidence of variceal hemorrhage in patients with small varices [5]. In patients who survive a first episode of variceal hemorrhage, the risk of recurrent hemorrhage is > 60%. Accordingly, all patients surviving variceal hemorrhage should receive active treatment aimed at preventing recurrence. Nonselective β-blockers or isosorbide 5-mononitrate and endoscopic therapy, including ligation of and/or sclerotherapy of varices, are the first-line treatments for preventing recurrence of variceal hemorrhage [5,8]; a combination of both these approaches constitutes optimal management.
Additional treatment with oral iron supplementation is indicated for iron deficiency anemia caused by chronic blood loss. In some cases of advanced chronic liver disease, intravenous iron formulations may be administered to increase plasma levels and tissue deposits of iron. Hypersplenism secondary to portal hypertension is another mechanism of anemia in patients with chronic liver disease. Hypersplenism is associated with splenomegaly. In addition to chronic liver disease, thrombosis of the splenic vein may also be a cause of an increase in pressure within the portal venous system, which can lead to secondary hypersplenism. The main characteristics of hypersplenism are those attributable to pancytopenia. Hemolytic anemia occurs because of intrasplenic destruction of erythrocytes. Destruction of megakaryocytes and leukocyte precursors results in thrombocytopenia and leukopenia [9]. Symptoms and signs of hypersplenism are influenced by the primary underlying disease; they include abdominal pain and/or discomfort, and, in advanced cases, gastrointestinal hemorrhage secondary to portal hypertension. There may be hyperplasia of the progenitor cells in the bone marrow. It is important to determine the cause of hypersplenism. The main therapeutic approach for this syndrome is management directed at the underlying primary disease, usually chronic liver disease. When chronic liver disease is advanced, additional therapeutic options may need to be adopted. After assessing the severity of impaired hepatocellular function in a patient with advanced chronic liver disease, splenectomy may be considered if the splenic vein is thrombosed. An alternative approach is partial or total embolization of the splenic artery, which, in some recent studies, has been associated with good results, in particular, lower morbidity and mortality rates than those associated with surgery. Partial embolization preserves the immunological function of the spleen and is the preferred option for patients with cirrhosis [10]. IMPAIRED BLOOD COAGULATION The liver plays a central role in blood coagulation. Acute and chronic hepatocellular diseases are usually associated with defective blood coagulation due to a variety of different causes. These include: decreased hepatic synthesis of factors Ⅱ, Ⅶ, Ⅸ and Ⅹ; the presence of inhibitors of these factors; decreased clearance of activated coagulation factors; thrombocytopenia; impaired platelet function; hyperfibrinolysis; and disseminated intravascular coagulation [11,12]. Coagulation defects complicating liver disease predispose to an increased bleeding tendency, which increases both morbidity and mortality [11][12][13]. Defective blood coagulation associated with hepatocellular disease may be monitored using global screening tests, such as the PT and the activated partial thromboplastin time. In mild hepatocellular disease, PT usually is within the normal range or only modestly prolonged. In more advanced hepatocellular disease, prolongation of PT tends to reflect the severity of hepatocellular failure. Vitamin K routinely is administered parenterally (usually only once) to patients with liver disease and a prolonged PT, to exclude vitamin K deficiency as a cause of the prolonged PT [11]. Thrombocytopenia (platelet count < 150 000/µL) is common in patients with chronic liver disease; it has been reported in as many as 76% of patients with cirrhosis [4,12].
The pathogenesis of the thrombocytopenia is complex; it includes splenic pooling, and increased destruction and impaired production of platelets (Figure 1). Impaired production of platelets is caused, at least in part, by low levels of thrombopoietin. Prolonged bleeding time, impaired aggregation, reduced adhesiveness and abnormal ultrastructure of platelets reflect abnormal platelet function; these abnormalities have been attributed to an intrinsic platelet defect. Specific treatments to attempt to reverse the effects of this defect are not usually given, but platelet transfusions or platelet-stimulating agents have been administered in some cases. An important coagulation defect associated with chronic liver disease is low levels of factor Ⅶa. In recent years, the hemostatic agent recombinant factor Ⅶa has become available as a potentially new therapeutic agent for use in the management of coagulopathy in patients with cirrhosis. This agent may enhance initial control of acute variceal bleeding [14]. However, such therapy is associated with significant side effects, such as vascular injury and thrombosis. Hyperfibrinolysis is another cause of impaired hemostasis in patients with liver disease. In a nonrandomized trial [15], antifibrinolytic amino acids were administered to patients with acute or chronic liver disease, who had upper gastrointestinal bleeding and acquired defects of blood coagulation. However, administration of such amino acids does not have an established place in therapy. Thrombotic events, although rare in patients with cirrhosis, may occur. They tend to involve particularly the portal and/or mesenteric veins. A rational approach to managing disorders of blood coagulation in patients with liver disease is important because of the high risk of associated secondary hemorrhage. APLASTIC ANEMIA Aplastic anemia associated with liver disease is characterized by development of pancytopenia and hypocellular bone marrow in relation to the occurrence of hepatitis [16]. The main feature of this syndrome is injury to or loss of pluripotent hematopoietic stem cells, in the absence of infiltrative disease of the bone marrow [16][17][18][19]. Hepatitis-associated aplastic anemia (HAA) has been defined as a variant of aplastic anemia, which occurs concurrently with or within 6 mo of an increase in the serum level of alanine aminotransferase to at least five times the upper limit of the reference range. Severe marrow aplasia may be induced by hepatitis viruses, such as hepatitis B virus and hepatitis C virus (HCV), and also by other viruses, such as human immunodeficiency virus, Epstein-Barr virus, transfusion-transmitted virus and echovirus [16,20]. Parvovirus B19 commonly infects pro-erythroblasts and may induce transient red-cell aplasia, particularly in patients with chronic hemolytic anemia. It has been postulated that viruses and/or antigens, through the mediation of γ interferon or the cytokine cascade, induce lymphocyte activation and ultimately apoptotic death of hematopoietic cells in the bone marrow [17]. Clinical presentation includes symptoms and signs related to pancytopenia, such as pallor, fatigue, hemorrhagic manifestations, progressive anemia, and bacterial infections. The diagnosis of HAA is suggested by a complete blood count, which reveals pancytopenia (including anemia) together with absolute reticulocytopenia [16].
A bone marrow biopsy typically reveals hypocellularity that affects red and white cell precursors and megakaryocytes; residual hematopoietic cells appear morphologically normal [19]. The two major options for treating severe HAA are hematopoietic cell transplantation and immunosuppressive therapy. According to recent reviews, response rates to these approaches are 75%-88% and 75%-80%, respectively [16,18]. Blood and platelet infusions are often necessary before instituting specific treatment; before administration, blood products should be irradiated to avoid sensitization. ANEMIA SECONDARY TO TREATMENT OF HEPATITIS Currently, optimal treatment for chronic HCV infection is a combination of therapy with pegylated interferon and ribavirin. Of the hematological abnormalities that may be associated with such combination therapy, the most common is anemia [21]. Significant anemia (hemoglobin < 10 g/dL) has been observed in 9%-13% of patients receiving interferon and ribavirin; moderate anemia (hemoglobin < 11 g/dL) occurs in about 30% of patients undergoing such treatment [20][21][22]. There are several mechanisms by which anemia may occur during combination therapy for HCV infection, and ribavirin and/or interferons may contribute to anemia. In this context, hemoglobin concentrations decrease mainly as a result of ribavirin-induced hemolysis [20]. Anemia caused by ribavirin leads to modifications of the dose in up to 25% of patients, and this type of anemia may be problematic in patients with HCV infection, especially those who also have renal or cardiovascular disorders. Adherence to ribavirin therapy is one factor that is critically important in the treatment of HCV infection. Although ribavirin-associated anemia can be reversed by reducing the dose of ribavirin or by discontinuing the drug altogether, this approach compromises outcomes by significantly decreasing rates of sustained virological response. A recent study reviewed the predictors of anemia in patients undergoing treatment for HCV infection [21]. Patients with impaired renal function may be at an increased risk of ribavirin-related anemia and, accordingly, should be monitored carefully. Furthermore, a decrease in hemoglobin concentration of ≥ 1.5 g/dL by week 2 of treatment has been found to be an excellent early predictor of subsequent substantial decreases in hemoglobin. This predictor might be applied to identify candidates for early intervention for management of anemia, to facilitate maintenance of the dose of ribavirin. One of the specific approaches to manage ribavirin-associated anemia is administration of recombinant human erythropoietin [21]. After 16 wk of ribavirin therapy, patients who had also been given erythropoietin alfa had significantly higher mean hemoglobin levels than patients in a control group. In patients with chronic hepatitis C, viramidine, a prodrug of ribavirin that is selectively taken up by the liver, has the potential of maintaining the antiviral efficacy of ribavirin while decreasing the risk of hemolytic anemia [4]. Interferons may also contribute to anemia. Their main relevant action is induction of bone marrow suppression. This effect of interferon results in suppression of the compensatory reticulocytosis associated with ribavirin-induced hemolytic anemia. Thus, the bone-marrow-suppressive effect of interferon may contribute to anemia, which complicates combination therapy with interferon and ribavirin [4].
ALCOHOL, LIVER DISEASE AND ANEMIA Alcohol is implicated in the pathogenesis of chronic liver disease; it may contribute to anemia secondary to its direct effects on the liver and also to other diverse mechanisms (Figure 2) [23] . Markers of iron overload tend to be higher among those who consume more than two alcoholic drinks per day than among non-drinkers, after adjusting for potential confounding factors [24] . Consumption of alcohol appears to be associated with an approximately 40% reduction in the risk of development of iron deficiency anemia. Folic acid and vitamin B12 deficiencies develop frequently in patients with cirrhosis. These deficiencies may be related to inadequate food intake or intestinal malabsorption. They are suspected when examination of a blood film reveals hypersegmented cells and oval macrocytes, in addition to round macrocytes characteristic of chronic liver disease. When anemia is caused by these deficiencies, the mean corpuscular volume is increased and bone marrow shows megaloblastic erythropoiesis. Anemia due to folic acid deficiency may result, not only from a lack of folic acid in the diet, but also the weak antifolate action of ethanol. Folic acid deficiency is the most common cause of a low hematocrit in hospitalized patients who are alcoholics [25,26] . Parenterally administered vitamin B12 not only corrects anemia caused by vitamin B12 deficiency, but may also induce improvement in the peripheral neuropathy that are associated with this deficiency [23] . Supplements of vitamins A, B and C may be administered empirically to patients with advanced alcoholic disease. Anemia in an alcoholic may also arise as a consequence of the direct toxic effects of alcohol on erythrocyte precursors in the bone marrow. Management of alcoholinduced suppression of erythropoiesis includes abstinence from alcohol and a nutritious diet with appropriate supplements. Other factors that may contribute to anemia and a low hematocrit in alcoholic patients are given in Table 2. CONCLUSION Liver diseases are frequently associated with hematological abnormalities. Anemia of diverse etiology occurs in many of these patients. Bleeding is one of the most severe causes of anemia, with a high mortality, and defective blood coagulation contributes to the anemia. Other mechanisms of anemia include aplastic anemia secondary to previous hepatitis, or side effects of treatment of hepatitis with interferon and ribavirin. In patients with alcoholic liver disease, different effects of alcohol may contribute to anemia, such as malabsorption, malnutrition or direct toxic effect. The pathogenesis of the anemia in each case is different and it is important to begin the correct therapy. Figure 2 Scheme, in patients with alcoholic liver disease, depicting how different effects of alcohol may contribute to anemia (adapted from Moreno Otero et al [23] ).
2018-04-03T04:19:18.707Z
2009-10-07T00:00:00.000
{ "year": 2009, "sha1": "0e921891ec68dbb2043ede56e9b9d195cb09de61", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3748/wjg.15.4653", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "5535b61b58b4cf9cbb5944c7c109dfe994c0329d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
13882196
pes2o/s2orc
v3-fos-license
A parallel approximation algorithm for mixed packing and covering semidefinite programs We present a parallel approximation algorithm for a class of mixed packing and covering semidefinite programs which generalizes the class of positive semidefinite programs considered by Jain and Yao [2011]. As a corollary we get a faster approximation algorithm for positive semidefinite programs with better dependence of the parallel running time on the approximation factor, as compared to that of Jain and Yao [2011]. Our algorithm and analysis is on similar lines as that of Young [2001], who considered analogous linear programs. Introduction Fast parallel approximation algorithms for semidefinite programs have been the focus of study of many recent works (e.g. [1,2,7,5,4,3]) and have resulted in many interesting applications including the well known QIP = PSPACE [3] result. In many of the previous works, the running time of the algorithms had polylog dependence on the size of the input program but in addition also had polynomial dependence on some width parameter (which varied for different algorithms). Sometimes (for specific instances of input programs) the width parameter could be as large as the size of the program, making it an important bottleneck. Recently Jain and Yao [6] presented a fast parallel approximation algorithm for an important subclass of semidefinite programs, called positive semidefinite programs, and their algorithm had no dependence on any width parameter. Their algorithm was inspired by an algorithm by Luby and Nisan [8] for positive linear programs. In this work we consider a more general mixed packing and covering optimization problem. We first consider the following feasibility task Q1. Q1: Given n × n positive semidefinite matrices P_1, . . . , P_m, P and non-negative diagonal matrices C_1, . . . , C_m, C and ε ∈ (0, 1), find an ε-approximate feasible vector x ≥ 0 such that (while comparing matrices we let ≥, ≤ represent the Löwner order), We present an algorithm for Q1 running in parallel time polylog(n, m) · (1/ε^4) · log(1/ε). Using this and standard binary search, a multiplicative (1 − ε) approximate solution can be obtained for the following optimization task Q2 in parallel time polylog(n, m) · (1/ε^4) · log(1/ε). Q2: Given n × n positive semidefinite matrices P_1, . . . , P_m, P and non-negative diagonal matrices C_1, . . . , C_m, C, The following special case of Q2 is referred to as a positive semidefinite program. Q3: Given n × n positive semidefinite matrices P_1, . . . , P_m, P and non-negative scalars c_1, . . . , c_m, maximize: Our algorithm for Q1 and its analysis is on similar lines as the algorithm and analysis of Young [10], who considered analogous questions for linear programs. As a corollary we get an algorithm for approximating positive semidefinite programs (Q3) with better dependence of the parallel running time on ε as compared to that of Jain and Yao [6] (and arguably with simpler analysis). Very recently, in an independent work, Peng and Tangwongsan [9] also presented a fast parallel algorithm for positive semidefinite programs. Their work is also inspired by Young [10]. Algorithm and analysis We mention without elaborating that, using standard arguments, the feasibility question Q1 can be easily transformed, in parallel time polylog(mn), to the special case when P and C are identity matrices, and we consider this special case from now on. Our algorithm is presented in Figure 1.
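The displayed constraints of Q1-Q3 did not survive extraction above, but the guarantees stated later for the algorithm's output (∑_i x_i P_i ≤ (1 + O(ε))·1 and the smallest eigenvalue of ∑_i x_i C_i exceeding 1, in the special case P = C = 1) suggest the intended packing/covering form. The sketch below is a numerical illustration of that reading for the identity-matrix special case; the tolerances and the toy instance are assumptions, not the paper's definition.

```python
# Hedged sketch: check whether a candidate x >= 0 is approximately feasible for the
# mixed packing/covering system in the special case P = C = I, i.e.
#   sum_i x_i P_i <= (1 + eps) I   (packing)   and   sum_i x_i C_i >= I   (covering),
# where <= is the Loewner order. This reading of Q1 is an assumption (see above).
import numpy as np

def loewner_leq(A, B, tol=1e-9):
    """A <= B in the Loewner order iff B - A is positive semidefinite."""
    return np.linalg.eigvalsh(B - A).min() >= -tol

def approx_feasible(x, Ps, Cs, eps):
    n = Ps[0].shape[0]
    P_sum = sum(xi * Pi for xi, Pi in zip(x, Ps))
    C_sum = sum(xi * Ci for xi, Ci in zip(x, Cs))
    return loewner_leq(P_sum, (1 + eps) * np.eye(n)) and loewner_leq(np.eye(n), C_sum)

# Toy instance with m = 2, n = 2 (P_i positive semidefinite, C_i non-negative diagonal)
Ps = [np.array([[1.0, 0.0], [0.0, 0.5]]), np.array([[0.5, 0.0], [0.0, 1.0]])]
Cs = [np.diag([1.5, 0.3]), np.diag([0.3, 1.5])]
print(approx_feasible([0.6, 0.6], Ps, Cs, eps=0.1))    # True for this instance
```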
Idea of the algorithm

The algorithm starts with an initial value of x such that ∑_{i=1}^m x_i P_i ≤ 1. It then makes increments to the vector x such that, with each increment, the increase in ∑_{i=1}^m x_i P_i stays suitably bounded; the precise condition on the increments is imposed in step 3(e) of the algorithm. We argue that it is always possible to increment x in this manner if the input instance is feasible; hence the algorithm outputs "infeasible" if it cannot find such an increment to x. The algorithm stops when the minimum eigenvalue of ∑_{i=1}^m x_i C_i has exceeded 1. Due to our condition on the increments, at the end of the algorithm we also have ∑_{i=1}^m x_i P_i ≤ (1 + O(ε))·1. We get a handle on the largest and smallest eigenvalues of the matrices concerned via their soft versions Imax(·) and Imin(·), which are more easily handled functions of those matrices (see the definitions in the next section).

(Figure 1: the algorithm. Step 3(e): choose an increment vector α ≥ 0 and a scalar δ > 0 such that ...)

The following lemma, proved in the appendix, shows that if a small increment is made in the vector x, then the changes in Imax(∑_{j=1}^m x_j A_j) and Imin(∑_{j=1}^m x_j A_j) can be bounded appropriately.

Proof. Consider any execution of step 3(e) of the algorithm, and fix j such that α_j > 0. We will show that global(x) ≥ g throughout the algorithm; this gives the desired claim, since local_j(x) ≤ (1 + ε)·g ≤ (1 + ε)·global(x). At step 3(b) of the algorithm, g may be set equal to global(x). Since x never decreases during the algorithm, at step 3(a) global(x) can only increase. At step 3(d), the modification of the C_j's only decreases Tr(exp(−∑_{i=1}^m x_i C_i)), and hence again global(x) can only increase.

Lemma 4. For each increment of x at step 3(f) of the algorithm, ... This shows the desired claim.

Lemma 5. If the input instance P_1, ..., P_m, C_1, ..., C_m is feasible, that is, if there exists a vector y ∈ R^m such that ..., then at step 3(c) of the algorithm we always have min_j {local_j(x)} ≤ global(x). Hence the algorithm will return some x*. If the algorithm outputs "infeasible", then the input instance is not feasible.

Proof. Consider some execution of step 3(c) of the algorithm, and let C'_1, ..., C'_m be the current values of C_1, ..., C_m. Note that if the input is feasible with vector y, then we also have ... Therefore there exists j ∈ [m] such that ..., and hence local_j(x) ≤ global(x). If the algorithm outputs "infeasible", then at that point min_j {local_j(x)} > global(x); hence, by the argument above, P_1, ..., P_m, C'_1, ..., C'_m is infeasible, which in turn implies that P_1, ..., P_m, C_1, ..., C_m is infeasible.

Lemma 6. If the algorithm returns some x*, then ...

Proof. Because of the condition of the while loop, it is clear that ... Note that the updates of the C_j's at step 3(d) only increase Imin(∑_{j=1}^m x_j C_j). Hence, using Lemma 4, we conclude that Φ(x) is non-decreasing during the algorithm. At step 1 of the algorithm, ln(Tr(exp(∑_{i=1}^m x_i P_i))) ≤ ln n + 1, since ∑_{i=1}^m x_i P_i ≤ 1 initially. Hence, just before the last increment, ... In the last increment, because of the condition in step 3(e) of the algorithm, ...

Running time analysis

Lemma 7. Assume that the algorithm does not return "infeasible" on a given input instance. Then the number of times g is increased at step 3(b) of the algorithm is O(N/ε).

Proof. At the beginning of the algorithm ... At the end of the algorithm ... Also, using Lemma 6, ... Hence g ≤ global(x) ≤ exp(13N). Whenever g is updated at step 3(b) of the algorithm, we have global(x) ≥ (1 + ε)·g just before the update and global(x) = g just after the update. Thus g increases by at least a multiplicative factor of (1 + ε), and hence the number of times g increases is O(N/ε).
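The analysis above repeatedly works with "soft" versions of the extreme eigenvalues. Their exact definitions are not reproduced here; a standard choice, assumed purely for illustration in the sketch below, is Imax(A) = ln Tr exp(A) and Imin(A) = −ln Tr exp(−A) (the paper may additionally rescale by a parameter such as N). The snippet checks numerically that these soft versions track the true largest and smallest eigenvalues to within an additive ln n, which is what makes them useful smooth surrogates.

```python
# Sketch under assumed definitions of the soft extreme eigenvalues:
#   Imax(A) = ln Tr exp(A),  Imin(A) = -ln Tr exp(-A).
# These satisfy  lmax(A) <= Imax(A) <= lmax(A) + ln n   and
#                lmin(A) - ln n <= Imin(A) <= lmin(A).
import numpy as np
from scipy.linalg import expm

def Imax(A):
    return float(np.log(np.trace(expm(A))))

def Imin(A):
    return float(-np.log(np.trace(expm(-A))))

rng = np.random.default_rng(1)
n = 5
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                      # a random symmetric matrix
evals = np.linalg.eigvalsh(A)
print("lmax:", evals.max(), " Imax:", Imax(A), " allowed gap ln n:", np.log(n))
print("lmin:", evals.min(), " Imin:", Imin(A), " allowed gap ln n:", np.log(n))
```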
Lemma 8. Assume that the algorithm does not return "infeasible" on a given input instance. Then the number of iterations of the while loop in the algorithm for a fixed value of g is O(N log(mN)/ε).

Proof. From Lemma 6 and step 3(d) of the algorithm we have max{...} ..., and hence δ = Ω(ε/N) throughout the algorithm. Let x_j be increased in the last iteration of the while loop for a fixed value of g. Note that x_j is initially 1/(m·‖P_j‖) and at the end x_j is at most 10N/‖P_j‖ (since, using Lemma 6, x_j·‖P_j‖ ≤ ‖∑_{i=1}^m x_i P_i‖ ≤ 10N). Hence the algorithm makes at most O(log(mN)/δ) = O(N log(mN)/ε) increments of each x_j. Note that local_j(x) only increases throughout the algorithm (this is easily seen for steps 3(d) and 3(e) of the algorithm). Hence, since the last iteration of the while loop (for this fixed g) increases x_j, each iteration of the while loop must increase x_j. Therefore the number of iterations of the while loop (for this fixed g) is O(N log(mN)/ε).

We claim (without further justification) that each individual step of the algorithm can be performed in parallel time polylog(mn). Hence, combining the above lemmas and using N = O(ln(mn)/ε), we get:

Corollary 9. The parallel running time of the algorithm is upper bounded by polylog(mn) · (1/ε⁴) · log(1/ε).

References

[10] N. E. Young. Sequential and parallel algorithms for mixed packing and covering. In Proceedings of the 42nd IEEE Symposium on Foundations of Computer Science (FOCS), 2001.

A Deferred proofs

Proof of Lemma 2: We will use the Golden–Thompson inequality: for Hermitian matrices A and B,

    Tr(exp(A + B)) ≤ Tr(exp(A)·exp(B)).

We will also need the following fact: ... Consider ... (since ln(1 + a) ≤ a for all a > −1) ... The desired bound on Imin(∑_{j=1}^m (x_j + α_j)A_j) − Imin(∑_{j=1}^m x_j A_j) follows by analogous calculations.
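The deferred proof above rests on the Golden–Thompson inequality as stated. As a quick, purely illustrative sanity check (not part of the paper), the following snippet verifies the inequality numerically on a few random symmetric matrices.

```python
# Numerical sanity check of Golden-Thompson: Tr exp(A+B) <= Tr(exp(A) exp(B))
# for Hermitian (here real symmetric) A, B.  Illustration only.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
n = 4
for _ in range(5):
    A = rng.standard_normal((n, n)); A = (A + A.T) / 2
    B = rng.standard_normal((n, n)); B = (B + B.T) / 2
    lhs = np.trace(expm(A + B))
    rhs = np.trace(expm(A) @ expm(B))
    assert lhs <= rhs + 1e-9          # Golden-Thompson
    print(f"Tr exp(A+B) = {lhs:.4f}  <=  Tr(exp(A) exp(B)) = {rhs:.4f}")
```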